== One view on practical blockers for IPv6 adoption

I recently wound up reading Russ White's [[Engineering Lessons, IPv6 Edition http://ntwrk.guru/engineering-lessons-ipv6/]] ([[via https://twitter.com/amyengineer/status/635879230853648384]]), which is yet another meditation by a network engineer on why people haven't exactly been adopting IPv6 at a rapid pace. Near the end, I ran across the following:

> For those who weren't in the industry those many years ago, there
> were several drivers behind IPv6 beyond just the need for more address
> space. [...]
>
> Part of the reason it's taken so long to deploy IPv6, I think, is
> because it's not just about expanding the address space. IPv6, for
> various reasons, has tried to address every potential failing ever
> found in IPv4.

As a sysadmin, my reaction to this is roughly 'oh god yes'. One of the major pain points in adding IPv6 (never mind moving to it) is that so much has to be changed and modified and (re)learned. IPv6 is not just another network address for our servers (and another set of routes); it comes with a whole new collection of services and operational issues and new ways of operating our networks. There is a whole host of uncertainties, from address assignment (both static and dynamic) on upwards. Given that [[right now IPv6 is merely nice to have ../tech/IPv6NiceVersusBeneficial]], you can guess what this does to IPv6's priority around here.

Many of these new things exist primarily because the IPv6 people decided to solve all of their problems with IPv4 at once. I think there's an argument that this was always likely to be a mistake, but beyond that it's certainly made everyone's life more complicated. I don't know for sure that IPv6 adoption would be further along if IPv6 were mostly just IPv4 with enlarged address fields, but I rather suspect that it would be. Certainly I would be happier to be experimenting with it if that were the case.

What I can boil this down to is the unsurprising news that large-scale, large-scope changes are *hard*. They require a lot of work and time, they are difficult for many people to test, and they are unusually risky if something goes wrong. And in a world of [[fragile complexity FragileComplexity]], their complexity and complex interactions with your existing environment are not exactly confidence boosters. There are a lot of dark and surprising corners where nasty things may be waiting for you. Why go there until you absolutely have to?

(All of this applies to existing IPv4 environments. If you're building something up from scratch, well, going dual stack from the start strikes me as a reasonably good idea, even if you're probably going to wind up moving slower than you might otherwise. But green field network development is not the environment I live in; [[it's rather the reverse MachineRoomArchaeology]].)
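
As a small illustration of even the 'easy' end of dual stack, here is a minimal sketch of a dual-stack TCP listener in Python, using only the standard socket module. The port number is an arbitrary example, and whether a v6 socket accepts IPv4 clients by default varies by OS, which is exactly the kind of per-platform detail that makes even the simple parts of IPv6 less uniform than you'd like.

    import socket

    # A minimal dual-stack TCP listener: one IPv6 socket that also
    # accepts IPv4 clients, which show up as mapped addresses
    # (::ffff:a.b.c.d). Assumes Python 3 and an OS that exposes
    # IPV6_V6ONLY; the port is a made-up example.
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # The IPV6_V6ONLY default differs between OSes, so set it
    # explicitly rather than relying on the platform's choice.
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", 8080))  # '::' is the IPv6 wildcard address
    sock.listen(5)
    conn, addr = sock.accept()
    # For an IPv4 peer, addr looks like ('::ffff:192.0.2.1', ...)

Even this tiny sketch carries a design decision with no IPv4 analogue: whether to serve both protocols from one socket via mapped addresses or to bind separate v4 and v6 sockets, each with its own failure modes and logging quirks.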