Why installing packages is almost always going to be slow (today)

March 10, 2015

In a comment on my entry on how package installs are what limits our machine install speed, Timmy suggested that there had to be a faster way to do package installs and updates. As it happens, I think our systems can't do much here because of some fundamental limits in how we want package updates to behave, especially ones that are done live.

The basic problem on systems today is that we want package installs and updates to be as close to atomic transactions as possible. If you think about it, there are a lot of things that can go wrong during a package install. For example, you can suddenly run out of disk space halfway through; the system can crash halfway through; you can be trying to start or run a program from a package that is partway through being installed or updated. We want the system to survive as many of these situations as possible, and especially we want as few bad things as possible to happen to our systems if something goes wrong partway through a package update. At a minimum we want to be able to roll back a partially applied package install or update if the package system discovers a problem.

(On some systems there's also the issue that you can't overwrite at least some files that are in use, such as executables that are running.)

This implies that we can't just delete all of the existing files for a package (if any), upend a tarball onto the disk, and be done with it. Instead we need a much more complicated multi-step operation: writing things to disk, making sure they've been synced to disk, replacing old files with new ones as close to atomically as possible, and then updating the package management system's database. If you're updating multiple packages at once, you also face a tradeoff over how much you aggregate together. If you do each package more or less separately you add more disk syncs and disk IO, but if you do all packages at once you increase both the transient disk space required and the risk if something goes wrong in the middle.
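
As a concrete illustration of the per-file part of this dance, here is a minimal sketch in C (with purely hypothetical file names) of replacing a single file as safely as a Unix filesystem lets you: write the new version under a temporary name, force it to disk, and only then rename() it over the old one.

    /* Minimal sketch of a near-atomic single-file replacement. The file
     * names are purely illustrative; a real package manager also handles
     * permissions, ownership, directories, and its own database. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *tmpname = "/usr/bin/frobnicate.new";  /* hypothetical */
        const char *final = "/usr/bin/frobnicate";        /* hypothetical */

        int fd = open(tmpname, O_WRONLY | O_CREAT | O_EXCL, 0755);
        if (fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }
        /* ... write the new file contents to fd here ... */

        /* Force the data to disk before the rename makes it visible. */
        if (fsync(fd) != 0 || close(fd) != 0) {
            perror("fsync/close");
            return EXIT_FAILURE;
        }

        /* rename() replaces the old file atomically: after a crash we have
         * either the old version or the new one, never a torn file. (A
         * really careful installer also fsync()s the directory afterwards.) */
        if (rename(tmpname, final) != 0) {
            perror("rename");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }

Multiply that by thousands of files, plus the package database update, and the syncs and extra IO add up quickly.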

(Existing package management systems tend to be cautious because people are more willing to excuse them being slow than blowing up their systems once in a while.)

To significantly accelerate this process, we need to do less IO and to wait for less IO. If we also want this process to not be drastically more risky, we have no real choice but to also make it much more transactional so that if there are problems at any point before the final (and single) commit point, we haven't done any damage. Unfortunately I don't think there's any way to do this within conventional systems today (and it's disruptive on even somewhat unconventional ones).

By the way, this is an advantage that installing a system from scratch has. Since there's nothing there to start with and the system is not running, you can do things the fast and sloppy way; if they blow up, the official remedy is 'reformat the filesystems and start from scratch again'. This makes package installation much more like unpacking a tarball than it normally is (and it may be little more than that once the dust settles).

(I'm ignoring package postinstall scripts here because in theory that's a tractable problem with some engineering work.)


Comments on this page:

By Ewen McNeill at 2015-03-10 01:09:55:

For first-installs (where if it breaks, you may well just format-and-start-again), possibly using libeatmydata is called for? Amongst other things it makes fsync() a NOP.

This ends up being roughly analogous to ignoring transactions when bulk-loading a database from scratch -- the only meaningful units are "none" and "all", so you might as well not bother keeping track of whether individual parts within that worked or not, or flushing to disk frequently.
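
As an illustration of what libeatmydata does under the hood, the core trick is roughly a small LD_PRELOADed shared library that turns fsync() and friends into no-ops that claim success. This is a simplified sketch, not libeatmydata's actual code, and 'nosync.c' is a made-up name:

    /* nosync.c: compile with 'gcc -shared -fPIC -o nosync.so nosync.c' and
     * run an installer under 'LD_PRELOAD=./nosync.so ...' so that these
     * definitions shadow the real libc ones. Simplified sketch only; the
     * real libeatmydata also covers things like sync() and O_SYNC. */
    #include <unistd.h>

    int fsync(int fd)
    {
        (void)fd;          /* pretend the data is already safely on disk */
        return 0;
    }

    int fdatasync(int fd)
    {
        (void)fd;
        return 0;
    }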

To go much faster than that you basically need to end up writing linear data onto the disk (eg, copying a sparse disk image, sparsely). Or perhaps installing into a RAM disk (one of the FAI demos does this!) -- and then writing the whole thing back to more permanent media when you're done. (The latter is an interesting idea now that most servers tend to have more RAM than their OS-disk install size -- I wonder if one of the overlay/union filesystems could help with that?)
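
As a rough sketch of that last idea (assuming a Linux kernel with overlayfs, root privileges, and made-up directory names), an installer could put a tmpfs upper layer over the real target tree so that all package writes land in RAM and only get written back to permanent media in one pass at the end:

    /* Sketch: RAM-backed overlay over the install target. Directory names
     * are purely illustrative; error handling for mkdir() is omitted. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    int main(void)
    {
        mkdir("/mnt/upper", 0755);
        mkdir("/mnt/merged", 0755);

        /* RAM-backed scratch space that will absorb all writes. */
        if (mount("tmpfs", "/mnt/upper", "tmpfs", 0, "size=4g") != 0) {
            perror("mount tmpfs");
            return EXIT_FAILURE;
        }
        mkdir("/mnt/upper/data", 0755);   /* overlayfs upper layer */
        mkdir("/mnt/upper/work", 0755);   /* overlayfs work area */

        /* Union the RAM layer over the real target tree at /target; all
         * package unpacking then happens in /mnt/merged at RAM speed. */
        if (mount("overlay", "/mnt/merged", "overlay", 0,
                  "lowerdir=/target,upperdir=/mnt/upper/data,workdir=/mnt/upper/work") != 0) {
            perror("mount overlay");
            return EXIT_FAILURE;
        }

        /* ... install into /mnt/merged, then copy the upper layer back to
         * permanent media and sync once ... */
        return EXIT_SUCCESS;
    }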

Ewen

By cks at 2015-03-10 12:27:42:

Absolutely an install process should avoid (and defeat) disk syncs while it writes to disk; the ideal would be more or less one at the end as everything is finalized. The overall goal is to write data to the disks as fast as the media involved can support with as few stalls and pauses as possible. The same thing is true for maintaining the packaging system's database on the system being installed; you actively want to bulk load it with the final data or defeat its normal attempts to carefully keep itself transactionally consistent. Similarly you can buffer as much data in RAM for as long as possible in order to do as many linear writes as possible.

(There is a tradeoff here in that you probably want a good on-disk layout for the final installed system so it helps to eg write shared libraries and programs sequentially on the disk. This may require not deferring disk writeout too long. A near universal switch to SSDs would make this less important, but that's some time in the future.)
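
As a rough sketch of the 'one sync at the end' approach on Linux (the target path is illustrative), an installer could skip per-file fsync() entirely and finish with a single syncfs() on the filesystem it installed into:

    /* Sketch: defer all durability to one point. Write everything without
     * per-file fsync(), then flush the whole target filesystem once. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* ... unpack every package under /target as fast as the disks
         * allow, with no fsync() calls along the way ... */

        int fd = open("/target", O_RDONLY | O_DIRECTORY);
        if (fd < 0) {
            perror("open /target");
            return EXIT_FAILURE;
        }

        /* One flush at the very end, once everything is finalized. */
        if (syncfs(fd) != 0) {
            perror("syncfs");
            return EXIT_FAILURE;
        }
        close(fd);
        return EXIT_SUCCESS;
    }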

By liam at unc edu at 2015-03-11 11:36:36:

What I'd like to see is more work put into updates by:

snapshot, promote, patch snapshot, overlay snapshot to make it live
