Is a proper transition to IPv6 actually impossible?

I don’t see a viable path from our current situation to one where IPv6 can be thought of as properly adopted: a situation where deploying IPv6-only services, or signing up IPv6-only clients, would be a sane and realistic thing to do.

In our current situation on the public Internet, IPv4 support is mandatory, but IPv6 is optional. Servers have to support IPv4 because without it, IPv4-only clients would not reach them. They don’t have to support IPv6 because IPv6-only clients incapable of reaching the IPv4 Internet are not a phenomenon yet. Similarly, clients have to support IPv4 (or emulate it), because without it, IPv4-only servers would be unreachable. They don’t have to support IPv6 because IPv6-only servers incapable of being reached by the IPv4 Internet are not a thing yet. These constraints are logically self-enforcing: clients will have to continue to support IPv4 because servers do and vice versa.

How do we get from this situation to one where IPv6 reachability can be taken as a given for all servers and clients on the public Internet? At the moment, the incentive to set up IPv6 isn’t there yet, apart from the warm feeling administrators get from preparing for something that, we are assured, is going to come in handy in the future. The death of the IPv4 address space has been greatly exaggerated, or rather, we’ve kept it on life support by hacking in a number of workarounds such as NAT and by allocating smaller and smaller subnets, departing from our earlier notions of class-based allocation. We’re constantly warned that the jig is very much up: any moment now, the last IPv4 address will be handed out, and from then on, we’ll be thankful for having set up our IPv6 support ahead of time because we’re going to need it right away.

But are we really going to need IPv6 right away? Is it even going to be possible to use it right away, or ever, despite IPv4 address depletion?

Can we implement IPv6 support now, so that it’s “ready to go” once we need it? A lot of ISPs, software/hardware vendors and server administrators are working on this right now, despite the lack of immediate benefit, because it’s the “right thing”. DNS servers, web servers, and client ISP connections supporting IPv6 are popping up all over the place: you could even say there are lots of them. Not the whole Internet, but enough to represent a significant effort and expenditure. The trouble with this is, it’s never going to be complete. Only a practical 100% deployment will satisfy the prerequisite for any sane ISP or service provider to offer an IPv6-only service. Let’s say 90% of servers and clients are super-responsible and do the “right thing”. We will still have to rely on IPv4 everywhere, lest people be stuck on a subset of the Internet where not quite everything is reachable. Nobody would knowingly sign up to a service that cuts them off from 10% of the Internet, so nobody is going to offer one, and there will be no more incentive for administrators and ISPs to continue rolling out IPv6 than there already is.

Perhaps, then, we will be forced into it by a “big crisis”, in which the world suddenly has to scramble to implement IPv6 all at once in a massive panic bigger than Y2K? Let’s assume the depletion of IPv4 addresses happens, and it’s sudden. Panic stations! What do we do? The problem is, we can’t just switch to IPv6 only: that would still cut off new clients from large sections of the Internet, or cut off new servers from large numbers of clients. A big crisis will bring us no quicker out of the stalemate of there being no incentive to implement IPv6 until everybody else does, and any one group brute-forcing people into going IPv6-only before the rest of the Internet is ready will just be inflicting unnecessary pain. So we scramble to find other solutions. More stop-gaps. We find more IPv4 addresses and we NAT huge numbers of clients or servers behind them. We’ve already started: NAT64 is a sophisticated way of giving IPv6-only hosts the ability to talk to the IPv4 Internet. The trouble with this is, these solutions which are intended as stop-gaps end up becoming the new Internet: the new way people do things. IPv6 is all well and good, but we need the stop-gap solutions for the real Internet, which continues to need people to be able to talk IPv4. We end up in a situation where IPv4 addresses have depleted and we are still no closer to being able to sign up or deploy true IPv6-only clients or services, still thinking of our current situation of crazy NAT and other technologies as a temporary transitional period until people get IPv6 implemented. And a segment of the Internet still continues to ignore IPv6, because they still don’t need it.

And I’m reminded of another failed standard that everybody thought was going to be the next big thing: XHTML. Specifically, a full transition to XHTML on the public web – one where we no longer had to make it HTML-compatible and serve it as text/html – never happened. Support for it never eventuated, and support never eventuated because nobody stopped making their XHTML HTML-compatible, because if they had, it wouldn’t have been supported. Oh, how we hoped that, for example, Microsoft would just implement it because it was a “good thing” to do. But they felt no incentive to do so, understandably enough.

In that case, smart engineers and innovators realised the mistake of trying to bootstrap everybody onto a non-backwards-compatible standard and started work on something saner – an extension to HTML that was fully backwards-compatible, requiring no “leap of faith” transitional moment. That new standard was HTML5. And the world adopted it enthusiastically, never needing to worry about adopting a new standard before the world at large could actually benefit from it.

Is IPv6 going to suffer a similar collapse? If not, why not? How is it actually going to be possible to let go of IPv4 and all IPv4-supporting transition mechanisms and reach the point where we can deploy IPv6-only hosts on the public Internet? I’m not a qualified network engineer but I do sense there may be a gap in strategy here.

PHP: recursion causes segmentation fault

The fact that attempting recursion in PHP leaves you at the mercy of segmentation faults isn’t anything new, really (though it is new to me now). Since 1999, bug reports to this effect have been knocked back with the explanation that this is the intended behaviour.

This is all due to a technical limitation inside PHP, which I guess is reasonable. PHP pushes a frame onto the stack each time you enter a new function call, which is quick and handy, but limits your recursion depth to whatever the stack can hold.

What annoys me is that it’s one more reason that PHP is not very good for creating portable code. There’s no warning or PHP error before you exhaust the recursion limit: PHP just crashes (bringing down an Apache process with it if you’re using mod_php). The “safe” depth of recursion in PHP is hard to predict, and is going to vary between installations and builds. I’ve seen posts online showing people doing experiments to try to figure out PHP’s recursion limit, but there are too many factors at work – how PHP’s been built, whether it’s 32-bit or 64-bit, the type and quantity of variables you are passing as arguments or return values, and so on. In some code of my own, I’ve seen php-cgi crashing after just 20 to 30 levels of recursion. The function was passing some rather large arrays as return values.
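As an illustration of how unpredictable this is, here’s a minimal probe sketch of my own (not any kind of official diagnostic, and it deliberately ends in a crash, so run it from the command line rather than inside a web server). The last depth it prints before dying will vary with the build and with what you pass around:

    <?php
    // Recurse until the interpreter runs out of stack and is killed.
    // Writing to STDERR keeps the output unbuffered, so the last depth
    // printed is a rough measure of how far this particular build got.
    function probe($depth, $payload)
    {
        fwrite(STDERR, $depth . "\n");
        return probe($depth + 1, $payload);
    }

    // Try varying the size of this array: in the experience described
    // above, passing large arrays around brought the crash on much sooner.
    probe(1, array_fill(0, 1000, 'x'));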

You could argue that the same issue is going to happen in a C program, and that any decent programmer should be ready to code around this anyway. However, in my mind PHP should be a level of abstraction higher than this sort of detail; a programmer in such a high-level scripting language should not need to worry about the underlying low-level implementation. Ideally, PHP ought to raise a PHP error, allowing the problem to be reported just as if you’d tried to call a non-existent function, and the interpreter should end the script gracefully without crashing any processes.

The bottom line is that if you’re doing recursion in PHP, you should use your own method to limit the depth to a very conservative level, or rewrite the code to use a loop instead of recursion. If, like me, you’re writing a recursive descent parser and need your code to be portable, you may be better off restructuring it around a loop and an explicit stack.
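As a minimal sketch of the first option – a self-imposed limit – here’s roughly what I mean, assuming a simple array-based tree with ‘value’ and ‘children’ keys (the limit of 100 is an arbitrary conservative guess, not a figure PHP guarantees to be safe):

    <?php
    // A recursive walk that refuses to go deeper than a conservative,
    // self-imposed limit rather than gambling on the real stack size.
    define('MAX_DEPTH', 100);   // arbitrary: keep it well below any depth you've seen crash

    function walk_tree(array $node, $depth = 0)
    {
        if ($depth > MAX_DEPTH) {
            // Report the problem as a normal PHP error instead of letting
            // the interpreter blow the stack and take the process with it.
            trigger_error('Recursion too deep, giving up', E_USER_WARNING);
            return array();
        }

        $values = array();
        if (isset($node['value'])) {
            $values[] = $node['value'];
        }
        if (!empty($node['children'])) {
            foreach ($node['children'] as $child) {
                $values = array_merge($values, walk_tree($child, $depth + 1));
            }
        }
        return $values;
    }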

Stable vs stable: what ‘stable’ means in software

I’ve come to learn that when someone refers to software as ‘stable’, there is more than one quite different thing they might mean.

A stable software release

A stable software release is so named because it is unchanging. Its behaviour, functionality, specification or API is considered ‘final’ for that version. Apart from security patches and bug fixes, the software will not change for as long as that version is supported, which is usually anywhere from one year to many.

Software that is intended for the public to use is usually “stable”. It is released, and following the release no new features are added apart from the odd bug fix. To get new functionality, users eventually need to upgrade to the next version. Any problems with the software (unless they can easily be fixed with a bug-fix update) are “known” problems, and the software vendor does not need to keep track of more than one variation of these problems for any given version.

Examples of releases that are the opposite of stable include development snapshots, beta releases, and rolling releases. A characteristic of all three is that they are in a constant state of change; even their functionality and feature list can change from week to week, or day to day. You cannot depend on them to behave the same way from one week to the next.

Some people like that, with non-stable releases such as development snapshots, beta releases or rolling releases, they always get the latest features as soon as they are written into the software. In many cases, these releases also fix deficiencies or bugs that would otherwise remain unfixed in the stable release. However, with no stability in the feature list or functionality, documentation, plugins, and other software that interfaces with the software can stop matching reality: a change in the software can mean these become out of date or fail to work any more. When you have software which needs to work well with a lot of other software, having a stable release reduces the frequency with which changes will break compatibility with the other software relying on it.

Another meaning of stable

Another meaning of stable exists in common use, where people take it to mean “working reliably” or “solid”.  That is, people refer to software that runs consistently without crashing as stable.  You can see why they may use the word in this way: in real life, when something can be described as stable, it won’t fall over.  If a chair is stable enough, you can sit in it and it won’t topple or collapse.

However, confusion arises when people use this form of the word to refer to software that isn’t stable in the earlier sense. For example, it’s why you see comments like “I’ve been using the beta version since February and it is very stable” or “the newer version is more stable”. The point these comments make is not that the software is final and unchanging, as in a stable software release, but that the software is stable in the way a chair might be stable: it seems reliable, and the user hasn’t experienced any major problems.

This kind of stability won’t help developers who are extending the software, writing plugins, or building customisations on top of it, since running well at any given moment does not make up for being subject to frequent change.

Commenters or reviewers who describe beta or rolling releases of software as stable might want to try describing them as “solid” or “reliable” instead, to avoid confusion with a stable release, which is an unchanging one. Or perhaps the fact that the same term is understood in two different and sometimes conflicting ways indicates that the term wasn’t an ideal choice in the first place. It does, however, seem firmly entrenched in the software development world, where the meaning of a stable release is well known.

Distributed Version Control Systems are not Voodoo

Distributed version control is nothing to be scared of.  It may be a lot simpler than a lot of people or ‘introductory tutorials’ have led you to believe.

  • Distributed version control doesn’t force you into a new way of working.  If you prefer a centralised approach, you can still do that.  You could use it very much like you currently use svn if you want to, and yet you’ll still get the benefit of having a local history of revisions, allowing you to do a lot of things such as revert, diff, or even check in some code locally – without the overhead of network access.
  • Distributed version control does not force you to have a mess of different branches of your code on different developers’ machines, with no clear indication as to which one is most ‘up to date’.  You would still have a ‘main’ or ‘trunk’ branch on a server somewhere which represents the main focus of development.  Developers get the added option of having new branches on their own machine, but this doesn’t mean that they can’t send that code back to some shared trunk when they finish something.
  • The benefits to distributed version control do not end at the ‘coding while on a plane’ scenario.  While this is indeed a benefit of distributed version control (while coding without network access you still have access to the complete revision history and can do commits), this is not the sole reason behind DVCS.  You also get the benefit of reduced network overhead, making typical operations much faster, and you get a lot more flexibility in choosing a suitable workflow, and altering that workflow when it suits you.  If you are working on your own pet feature but don’t want to check in your code to the main trunk until it’s done, you can still easily use revision control on your own little copy, and when it does come time to check in and merge all your changes to the trunk all the little revisions you made when you were working on it separately are preserved.  This requires no setting up on the central server – any developer can create their own little parallel branch for a while, then merge it back up – it can be as simple as doing a ‘local commit’ instead of a ‘commit’ or ‘check-in’ (for me, that’s just one checkbox).  Merging may sound complicated, but it’s not – it simply works.  The system is smart enough to figure out which file was which and you can’t really break it – anything is reversible.
  • Not all DVCS are git, or particularly like git.  Git originated as a special-purpose version control system which Linus Torvalds developed to support his own personal workflow, managing the Linux kernel.  While it now has pretty much all the features of a full-featured version control system, it still has its origins as a tool built especially for one person’s own workflow.  Undoubtedly it is a good system, but if you try git and don’t like it, you should not assume that other distributed version control systems are the same as it.  Git also has problems on Windows (a native Msys version is still being developed as of this blog post).

My version control system of choice at the moment is Bazaar, chosen because I need both Windows and Linux support and I like that it is easy to use. There is not much separating it from Mercurial except for some small, almost philosophical conventions. As just one minor example, Bazaar versions directories, so an empty directory can be tracked. I’d recommend either. Bazaar has the massive support of Canonical (of Ubuntu fame) and big projects such as MySQL; Mercurial has big projects such as Mozilla (of Firefox fame). You can get near-instant help for either by going to their respective freenode IRC channels or by asking a question on Stack Overflow.

Three ways to work with XML in PHP

‘Some people, when confronted with a problem, think “I know, I’ll use XML.”  Now they have two problems.’
– stolen from somewhere

  • DOM is a standard, language-independent API for hierarchical data such as XML, standardized by the W3C. It is a rich API with a lot of functionality. It is object-based, in that each node is an object. DOM is good when you not only want to read or write, but also want to do a lot of manipulation of nodes in an existing document, such as inserting nodes between others, changing the structure, and so on.
  • SimpleXML is a PHP-specific API which is also object-based but is intended to be a lot less verbose than DOM: simple tasks such as finding the value of a node or finding its child elements take a lot less code. Its API is not as rich as DOM’s, but it still includes features such as XPath lookups and a basic ability to work with multiple-namespace documents. And, importantly, it still preserves all features of your document, such as XML CDATA sections and comments, even though it doesn’t include functions to manipulate them.
    SimpleXML is very good for read-only use: if all you want to do is read the XML document and convert it to another form, it’ll save you a lot of code (there’s a short comparison sketch just after this list). It’s also fairly good when you want to generate a document, or do basic manipulations such as adding or changing child elements or attributes, but it can become complicated (though not impossible) to do a lot of manipulation of existing documents. It’s not easy, for example, to add a child element in between two others; addChild only inserts after other elements. SimpleXML also cannot do XSLT transformations. It doesn’t have things like ‘getElementsByTagName’ or ‘getElementById’, but if you know XPath you can still do that kind of thing with SimpleXML.
    The SimpleXMLElement object is somewhat ‘magical’. The properties it exposes when you var_dump/print_r/var_export it don’t correspond to its complete internal representation, which ends up making SimpleXML look more simplistic than it really is. It exposes some of its child elements as if they were properties which can be accessed with the -> operator, yet it still preserves the full document internally, and you can do things like access attributes with the [] operator as if it were an associative array, or reach a child element whose name isn’t a valid PHP identifier using the ->{'name'} syntax.
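As a rough illustration of the difference in verbosity, here is the same trivial read done with both APIs; the XML snippet is made up just for the example:

    <?php
    $xml = '<book><title>Example</title><author>Someone</author></book>';

    // SimpleXML: child elements are reachable as properties.
    $book = simplexml_load_string($xml);
    echo (string) $book->title, "\n";   // "Example"

    // DOM: the same read takes a few more steps.
    $doc = new DOMDocument();
    $doc->loadXML($xml);
    echo $doc->getElementsByTagName('title')->item(0)->textContent, "\n";   // "Example"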

You don’t have to fully commit to one or the other, because PHP implements functions to convert between the two representations: dom_import_simplexml() and simplexml_import_dom().

This is helpful if you are using SimpleXML and need to work with code that expects a DOM node, or vice versa.
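For example (a made-up snippet just to show the direction of each conversion; both functions expose the same underlying document rather than copying it):

    <?php
    $sx = simplexml_load_string('<root><item>hello</item></root>');

    // Borrow the DOM API for a node that started life as SimpleXML...
    $domElement = dom_import_simplexml($sx->item);
    echo $domElement->tagName, "\n";     // "item"

    // ...and go the other way for a document that started life as DOM.
    $doc = new DOMDocument();
    $doc->loadXML('<root><item>hello</item></root>');
    $sx2 = simplexml_import_dom($doc);
    echo (string) $sx2->item, "\n";      // "hello"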

PHP also offers a third XML library:

  • XML Parser (an implementation of SAX, a language-independent interface, though not referred to by that name in the manual) is a much lower-level library, which serves quite a different purpose. It doesn’t build objects for you. It basically just makes it easier to write your own XML parser, because it does the job of advancing to the next token and working out what kind of token it is – what the tag name is, and whether it’s an opening or closing tag – for you. You then write callbacks that are run each time a token is encountered. Tasks such as representing the document as a tree of objects or arrays, manipulating the document, and so on all need to be implemented separately, because all you can do with XML Parser is write a low-level parser.
    The XML Parser functions are still quite helpful if you have specific memory or speed requirements. With them, it is possible to write a parser that can handle a very long XML document without holding all of its contents in memory at once. Also, if you’re not interested in all of the data, and don’t need or want it to be put into a tree or a set of PHP objects, it can be quicker – say, when you want to scan through an XHTML document and pull out all the links without caring about the rest of the structure (there’s a sketch of exactly that below).
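A minimal sketch of that link-scanning scenario, using the XML Parser functions (the $xhtml string here is just a stand-in; a real script would read from a file):

    <?php
    // Collect the href attributes of <a> elements without building a tree.
    $links = array();

    function start_element($parser, $name, $attrs)
    {
        global $links;
        if ($name === 'a' && isset($attrs['href'])) {
            $links[] = $attrs['href'];
        }
    }

    function end_element($parser, $name)
    {
        // Nothing to do when an element closes.
    }

    $parser = xml_parser_create();
    // The parser upper-cases element and attribute names by default;
    // turn that off so we can compare against 'a' and 'href' directly.
    xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, false);
    xml_set_element_handler($parser, 'start_element', 'end_element');

    $xhtml = '<html><body><p><a href="http://example.com/">a link</a></p></body></html>';
    xml_parse($parser, $xhtml, true);
    xml_parser_free($parser);

    print_r($links);   // Array ( [0] => http://example.com/ )

For a genuinely large document you would feed xml_parse() successive chunks read with fread() instead of one big string, which is what keeps the memory use flat.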