Wandering Thoughts archives

2020-11-23

What containers do and don't help you with

In a comment on my entry on when to use upstream versions of software, Albert suggested that containers can be used to solve the problems of using upstream versions, including when you have to do this anyway:

A lot of those issues become non-issues if you run the apps in containers (for example Grafana).

Unfortunately this is not the case, because of what containers do and don't help you with.

What containers do is isolate the host and the container from each other and make the connection between them simple, legible, and generic. The practical Unix API is very big and allows software to become quite entangled in the operating system, and therefore dependent on specific things in unclear ways. Containers turn this into a narrow interface between the software and the host OS, and they make it explicit (a container has to say clearly at least part of what it wants from the host, such as what ports it wants connected). Containers have also created a social agreement that if you violate the container API, what happens next is your own fault. For example, there is usually nothing stopping you from trying to store persistent data within your theoretically ephemeral container, but if you do and your container is restarted and you lose all that data, you get blamed, not the host operators.
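
To make that narrow, explicit interface concrete, here is a minimal sketch using Docker (the argument isn't specific to any one container system, and the image name and host paths are illustrative):

  # The container must explicitly ask the host for what it needs:
  # a port to be connected and a host location for persistent data.
  docker run -d --name grafana \
    -p 3000:3000 \
    -v /srv/grafana:/var/lib/grafana \
    grafana/grafana:7.3
  # If you leave out the -v and store data inside the container anyway,
  # you've violated the social agreement; losing that data when the
  # container is restarted is then considered your fault.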

However, containers do not isolate software from itself and from its own flaws and issues. When you put software in a container, you still have to worry about choosing and building the right version of the software, keeping it secure and bug free, and whether (and when) to update it. Putting Exim 4.93 in a container doesn't make it any better to use than if you didn't have it in a container. Putting Grafana or Prometheus Pushgateway in a container doesn't make it any easier to manage their upgrades, at least by itself. The difficulties of doing some things in a container setup may drive you to solve problems in a different way, but putting software in a container doesn't generally give it any new features, so you could always have solved your problems in those different ways. Containers just gave you a push to change your other practices (or forced you to).
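
For instance (a sketch, with illustrative names and versions), upgrading a containerized Grafana is still all the usual upgrade work; the container just relocates where you write the version down:

  # You still decide when to move from 7.2 to 7.3, and you still have to
  # verify that the data in your persistent volume survives the jump.
  docker pull grafana/grafana:7.3
  docker stop grafana && docker rm grafana
  docker run -d --name grafana -p 3000:3000 \
    -v /srv/grafana:/var/lib/grafana \
    grafana/grafana:7.3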

Containers do make it easier to deal with software in one respect, which is that they make it easier to select and change where you get software from. If someone, somewhere, is doing a good job of curating the software, you can probably take advantage of their work. Of course this is just adding a level of indirection; instead of figuring out what version of the software you want to use (and then keeping track of it), you have to figure out which curator you want to follow and keep up with whether they're doing a good job. The more curators and sources you use, the more work this will be.
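
As an illustration of the indirection (these are real Docker Hub image names as far as I know, but treat them as illustrative), picking a curator is literally part of the image reference you pull:

  # Same kind of software, different curators; now you track the
  # curator's diligence as well as the software's version.
  docker pull prom/pushgateway      # the Prometheus project's own images
  docker pull bitnami/grafana       # Bitnami's curated build of Grafana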

(Containers also make it easier and less obvious to neglect or outright abandon software while still leaving it running. Partly this is because containers are deliberately opaque to limit the API and to create isolation. This does not magically cure the problems of doing so, it just sweeps them under the rug until things really explode.)

ContainersWhatHelpAndNot written at 22:44:05

2020-11-19

Apple Silicon Macs versus ARM PCs

In a comment on my entry on how I don't expect to have an ARM-based PC any time soon, Jonathan said:

My big takeaway from the latest release of Apple laptops is that these new laptops aren't necessarily ARM laptops. [...]

When a person gets an Apple Silicon Mac, they are not getting an ARM computer. They are getting an Apple computer.

As it happens, I mostly agree with this view of the new Apple machines (and it got some good responses on tildes.net). These Apple Silicon Macs are ARM PCs in the sense that they are general purpose computers (as much as any other macOS machine) and that they use the ARM instruction set. But they are not 'ARM PCs' in two other ways. First, they're not machines that will run any OS you want, or even very many OSes. The odds are pretty good that they're not going to be running anything other than macOS any time soon (see Matthew Garrett).

Part of that is because these machines use a custom set of hardware around their ARM CPU and Apple has no particular reason to document that hardware so that anyone else can talk to it. In the x86 PC world, hardware and BIOS documentation exists (to the extent that it does) and standards exist (to the extent that they do) because there are a bunch of independent parties all involved in putting machines together, so they need to talk to each other and work with each other. There is nothing like that in Apple Silicon Macs; Apple is fully integrated from close to the ground up. The only reason Apple has for using standards is if they make Apple's life easier.

(Thus, I suspect that there is PCIe somewhere in those Macs.)

Second, they don't use standard hardware components and interfaces. This isn't just an issue of being able to swap pieces out when they break or when they don't fit your needs (or when you want to improve the machine without replacing it entirely). It also means that work to support Apple Silicon Macs doesn't help any other hypothetical ARM PC, and vice versa. To really have 'ARM PCs' in the way that there are 'x86 PCs', you need standards, and to get those standards you probably need component-based systems. If everyone is making bespoke SoC machines, you have to hope that they find higher-level standards compelling and that those standards are useful enough.

(Even laptop x86 PCs are strongly component based, although often those components are soldered in place. This is one reason why Linux and other free OSes often mostly or entirely just work on new laptop models.)

PS: My feeling is that there is no single 'desktop market' where we can say that it does or doesn't want machines with components that it can swap or mix and match. There is certainly a market segment that demands that, and a larger one that wants at least the lesser version of adding RAM, replacing the GPU, and swapping and adding disks. But there is also a corporate desktop market where they buy little boxes in a specific configuration and never open them, and I suspect it has a bigger dollar volume.

ApplePCVsARMPC written at 01:21:38

2020-11-15

I don't expect to have an ARM-based PC any time soon

The ARM-related news of the moment is of course that Apple has announced several ARM-based Mac laptops, and early (and preliminary) benchmarks suggest that they perform very well and are definitely competitive. This raises the great hope in many technical people's minds of ARM-based laptops and desktops from more than Apple, ARM PCs that would run more than Apple's macOS. This is especially a dream for many Unix people, who already run little or no commercial binary software, so in theory ARM support is only a recompile away; moving away from the x86 hegemony would please a lot of people.

(The practice of moving to ARM is somewhat different, especially for performance sensitive code. Focused use of x86 assembly is surprisingly common in various places.)

My own view is that while I wouldn't mind using a competitive ARM-based PC (either laptop or desktop), I don't expect to actually have one any time soon. The rub is that 'competitive'. Right now, the only ARM chip maker that can compete with the performance of the x86 hegemony in the desktop/laptop space is Apple, and Apple's machines only run macOS. While there are promising developments in the server space, many of them are bespoke to the large cloud vendors, and they may not be particularly suitable for scaling down to desktops and laptops (in all sorts of dimensions, including how much money can be made on them).

Apple has demonstrated that it's possible to make ARM be competitive (on laptops, at least, although likely on desktops too), but they stand alone and I don't think it's likely for anyone else to join them. Apple has the benefit of a gigantic market for their own ARM designs that feeds a firehose of money into their R&D budget and also good assurance of large production runs even for laptop CPUs (never mind phone ones). Anyone else would be in the position of trying to take laptop and desktop CPU market share from Intel and AMD (who are already fighting each other), without the kind of money firehose and good market that Apple has.

(Apple also has the benefit of capturing all of the profit from its laptops. An ARM CPU maker would capture much less of the profit of machines with its CPUs in them; much of the overall profit would go to the system vendors or at least to the makers of other parts, like motherboards.)

The other problem is that merely being competitive isn't good enough, because there are real costs to switching to ARM (even for Unix people). An ARM PC would likely need to be clearly better than the x86 equivalent before very many people became interested, and it might face an extended period of doubt while proving itself. To be honest, that would be my attitude toward the first generation of ARM PCs; I wouldn't buy one immediately, and would let other people get those experiences first.

(I think this will be easier on laptops than on desktops, because on laptops power efficiency can count for a lot. Longer battery lifetime is already one of Apple's selling points for their new ARM laptops.)

PS: I would love to be wrong on this, as I'm not particularly fond of the x86 hegemony. But I'm also a pragmatist.

ARMNoPCExpectations written at 00:07:46

2020-11-06

It's possible that the real size of different SSDs is now consistent

Let's start with what I tweeted:

Our '2 TB' SSDs seem to have a remarkably consistent size as reported by Linux, unlike past HDs (3907029168 512-byte sectors). I wonder if this is general or just luck (or if WD and Crucial/Micron are closely connected).

Back in the day, one of the issues that sysadmins faced in redundant storage setups was that different models of hard drives of the same nominal size (such as 2 TB or 750 GB) might not have exactly the same number of sectors. This could cause you heartburn if you had let your storage system use all of the sectors on a drive, and then it failed and you had to replace it with a different model that might be rated as '2 TB' but had a few fewer sectors than the previous drive. To deal with this on our fileservers, we use a carefully calculated scheme that uses no more than the advertised amount of drive space.
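
Concretely (a sketch of the arithmetic, with /dev/sda standing in for whatever drive you have): advertised decimal 2 TB works out to fewer sectors than our drives actually have, and it's the advertised figure we size to:

  # decimal 2 TB in 512-byte sectors: the most we let the storage
  # system use, so any honest '2 TB' replacement drive will fit.
  $ echo $(( 2000000000000 / 512 ))
  3906250000
  # what one of our actual drives reports:
  $ cat /sys/block/sda/size
  3907029168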

But, well, now that we're using SSDs it's not clear if that's necessary any more. I have convenient access to '2 TB' SSDs from Crucial/Micron and WD, and somewhat to my surprise all of them have exactly the same size in sectors. That all of the different models of Crucial and Micron 2 TB SSDs are exactly the same size is not too surprising, because they're the same company. That WD SSDs are also the same size is a bit surprising; for HDs, I would have expected some differences.

At this point I don't know if this is just a coincidence or if it's generally the case that most or all X-size SSDs from major providers will have the same underlying size. If I was energetic, I would try to see if someone had a (Linux) database of SSD models and their exact reported number of sectors, perhaps gathered as part of getting a database of general SMART information for various drives (the smartmontools website doesn't seem to have any pointers to such a thing).
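
For what it's worth, the raw numbers such a database would need are easy to gather on Linux (the output here is from one of our drives; exact smartctl formatting varies from drive to drive):

  $ blockdev --getsz /dev/sda       # size in 512-byte sectors
  3907029168
  $ smartctl -i /dev/sda | grep -i capacity
  User Capacity:    2,000,398,934,016 bytes [2.00 TB]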

If this is really the case for SSDs, one of the things that may be going on is that SSDs are made from much more uniform underlying hardware than HDs were, since I believe the actual flash memory chips come in very standard sizes (all powers of two as far as I know). There's no room for tweaking sector or track density on magnetic platters any more; you get what you get (although the amount of space taken by error checking codes may still vary). However, this is probably not the full story.

On the one hand, all SSDs are over-provisioned with flash memory by some amount, and you might expect different companies to pick different amounts of over-provisioning. On the other hand, there is very little advantage in consumer drives to having a little more extra space than your competitors, because you are still going to round it down to a nice even number for marketing. Possibly everyone is just copying the amount of space that the first person to sell an X-size SSD picked, because there is no reason not to and it makes everyone's life slightly easier.
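
As a back-of-the-envelope illustration (this assumes, without any confirmation, that the raw flash inside a '2 TB' drive is an even power of two, say 2 TiB), the implied over-provisioning for our drives' reported size would be just under 10%:

  $ echo $(( 2 ** 41 ))                     # 2 TiB, if that's what's inside
  2199023255552
  $ echo $(( 2 ** 41 - 3907029168 * 512 ))  # headroom beyond reported capacity
  198624321536

(198.6 GB of headroom on 2,000.4 GB of visible space is about 9.9%.)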

(The reported size in sectors is also a little bit odd for our 2 TB SSDs; it comes out to about 398 decimal MB extra over and above decimal 2 TB.)
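
For the record, the arithmetic behind that figure:

  $ echo $(( 3907029168 * 512 - 2000000000000 ))
  398934016

That's about 398.9 decimal MB over and above 2,000,000,000,000 bytes.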

SSDRealSizeQuestion written at 00:37:55

