2021-03-22
Portability has ongoing costs for code that's changing
I recently had a hot take on Twitter:
Hot take: no evolving project should take patches for portability to an architecture that it doesn't have developers using, or at least a full suite of tests and automated CI on. Especially if the architecture is different from all supported ones in some way (eg, big-endian).
Without regular usage or automated testing, it's far too likely that the project will break support on the weird architecture. And keeping the code up to date for the weird architecture is an ongoing tax on other development.
Some proponents of alternate architectures like to maintain that portability is free, or at least a one-time cost (one that can be paid by outside contributors in the form of a good patch to 'add support' for something). It would be nice if our programming languages, habits, and techniques made that so, but they don't. The reality is that maintaining portability to alternate environments is an ongoing cost.
(This should not be a surprise, because all code has costs and by extension more code has more costs.)
To start with, we can apply the general rule that 'if you don't test it, it doesn't work'. If alternate environments (including architectures) aren't tested by the project, and ideally also used by developers, they will break sooner or later; not because people are deliberately breaking them, but because people overlook things and never have the full picture. This happens even to well-intentioned projects that fully want to support something they don't test, as shown by Go's self-tests. Believing that an alternate architecture will work and keep working without regular testing goes against everything we've learned about the need for regular automated testing.
(I believe that for social reasons it's almost never good enough to just have automated tests without developers who are using the architecture and are committed to it.)
Beyond the need for extra testing, portability is an ongoing cost for code changes. The alternate architecture's differences and limits will need to be considered during programming, probably complicating the change, and some changes will break only on the alternate architecture and require more work to fix. Sooner or later some desired changes won't be possible or feasible on the alternate architecture, or will require extra work to implement options, alternatives, or additional fallbacks; in some cases it will take extreme measures to meet the project's quality of implementation standards for changes and new features. When something slips through the cracks anyway, the project will get additional bug reports, ones that will be difficult to deal with unless the project has ongoing easy access both to the alternate environment and to experts in it.
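As a hypothetical sketch of the 'options and alternatives' cost (mine, with made-up names, not code from any project mentioned here): a change that is a one-liner on the mainstream architectures can grow a separate fallback path for the alternate one, and both paths then have to be kept working, and ideally tested, for as long as the code lives.

    /* Hypothetical sketch: on mainstream 64-bit architectures the change
     * is a cheap atomic load (using the GCC/Clang __atomic builtins); the
     * alternate architecture needs a separate, slower fallback that also
     * has to be maintained and tested from now on. */
    #include <stdint.h>
    #include <stdio.h>

    #if defined(__x86_64__) || defined(__aarch64__)
    /* Mainstream path: 64-bit single-copy atomic loads are cheap. */
    static uint64_t read_counter(const uint64_t *p) {
        return __atomic_load_n(p, __ATOMIC_RELAXED);
    }
    #else
    /* Alternate path: assume no usable 64-bit atomics and fall back to a
     * mutex, with its own performance and correctness questions. */
    #include <pthread.h>
    static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint64_t read_counter(const uint64_t *p) {
        pthread_mutex_lock(&counter_lock);
        uint64_t v = *p;
        pthread_mutex_unlock(&counter_lock);
        return v;
    }
    #endif

    int main(void) {
        uint64_t counter = 42;
        printf("counter=%lu\n", (unsigned long)read_counter(&counter));
        return 0;
    }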
More broadly, the alternate architecture is implicitly yet another thing that programmers working on the project have to keep in mind. The human mind has a limited capacity for this; every extra thing people are forced to remember crowds out something else they could be remembering instead.
The very fact that an alternate architecture actually needs changes now, instead of just building and working, shows that it will most likely need more changes in the future. And the need for those changes did not arise from nothing. Programmers mostly don't deliberately write 'unportable' code to be perverse. All of that code that needs changes for your alternate architecture is the natural outcome of our programming environment. That code is the easiest, most natural, and fastest way for the project to work. The project has optimized what matters to it, although not necessarily actively and consciously.
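To make that concrete with a small, made-up example (again mine, not from the post's sources): the easiest and fastest way to read a little-endian on-disk header in C is to copy the bytes straight into a struct. That quietly assumes the host's byte order, and on a big-endian machine the fields come out silently byte-swapped.

    /* Hypothetical sketch: the "natural" way to read a little-endian
     * on-disk header bakes in the host's byte order. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct record_header {
        uint32_t magic;   /* expected to be 0xdeadbeef */
        uint32_t length;
    };

    int main(void) {
        /* Bytes as written to disk in little-endian order. */
        unsigned char raw[8] = {0xEF, 0xBE, 0xAD, 0xDE, 0x10, 0x00, 0x00, 0x00};

        /* The quick way: copy the bytes straight into the struct. This is
         * correct only when the host byte order matches the file's; on a
         * big-endian machine both fields are misread. */
        struct record_header hdr;
        memcpy(&hdr, raw, sizeof(hdr));
        printf("magic=%#lx length=%lu\n",
               (unsigned long)hdr.magic, (unsigned long)hdr.length);

        /* The portable version decodes each field explicitly, e.g.
         * (uint32_t)raw[0] | (uint32_t)raw[1] << 8 | ..., which is exactly
         * the kind of extra, ongoing care described above. */
        return 0;
    }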
(My tweets and this entry were sparked by portions of this, via. The whole issue has come up in the open source world before, but this time I felt like writing my own version of this rejoinder.)