Why I'm not interested in rolling back to snapshots of Linux root filesystems

January 2, 2022

One perpetual appeal of advanced filesystems like btrfs and ZFS is the idea of making a snapshot of your root filesystem, trying an upgrade, and then reverting to the snapshot if you feel that things have gone wrong. In my entry yesterday on why I use ext4 for my root filesystems, I mentioned that I didn't expect doing this to work as well as you'd like, and Aristotle Pagaltzis expressed interest in an elaboration of this. Well, never let it be said that I don't take requests.

(I covered some of the general ground in an old entry on rollbacks versus downgrades, but today I'll be more specific.)

The first problem is that Linux doesn't separate out the different types of things that are in /var; it contains a mess of program data, user data, and logs. You must roll back anything containing program data along with /usr, because your upgrade may have done things like change database formats or update your package database. But this will lose new log data in /var/log (and perhaps elsewhere), along with user data in /var/mail and anywhere else it may be lurking.

(For example, you might have mail flowing through your system under /var/spool. If you sent an email message but it hasn't been fully delivered yet, you don't really want it to vanish in a rollback.)
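
As a rough illustration, the sort of thing that helps here is keeping those pieces out of the root filesystem entirely. A minimal sketch, assuming a root-on-ZFS setup with a pool called 'rpool'; the dataset names are made up, and on a live system you would have to move the existing contents into the new datasets first:

    # keep logs, mail, and spools out of the root filesystem so a
    # root rollback doesn't take them with it
    zfs create -o mountpoint=/var/log   rpool/varlog
    zfs create -o mountpoint=/var/mail  rpool/varmail
    zfs create -o mountpoint=/var/spool rpool/varspool

The same idea works with separate partitions or btrfs subvolumes; the point is that this data isn't part of what gets rolled back.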

Then there is the problem of /etc, which contains a mixture of manually maintained files, manually updated package files, automatically maintained state files, and automatically updated package files. Much like /var, you must roll back /etc along with /usr and that will cost you anything you've done by hand since the upgrade, or any state updates for things that live outside of the root filesystem.

(In some environments, state files are potentially significant. For example, ZFS normally maintains state information about your active pools in /etc/zfs/zpool.cache.)
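
If your root filesystem is on ZFS and you made a snapshot before the upgrade, you can at least see what rolling back /etc would discard before you commit to it. A minimal sketch, assuming a snapshot called 'pre-upgrade'; ZFS exposes snapshots read-only under the .zfs/snapshot directory at the dataset's mountpoint:

    # compare the snapshot's /etc against the live one to see what
    # a rollback would throw away
    diff -ru /.zfs/snapshot/pre-upgrade/etc /etc | less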

On Linux, rolling back the root filesystem basically requires a reboot, making it a relatively high-impact operation on top of everything else. Some of this is simply the general problem that running programs will no longer have the right versions of shared libraries, configuration files, databases, and so on in the filesystem. Some of this is because the Linux kernel maintains internal data structures for active files (and known files more generally) that it doesn't entirely expect to be yanked out from underneath it.
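
You can see one facet of this on a running system with lsof, which can list open files whose on-disk versions have since been deleted or replaced; after a large upgrade this is usually a long list of daemons still using old shared libraries:

    # open files with a link count of zero, i.e. deleted or replaced
    # on disk but still held open by running processes
    lsof +L1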

These problems are all magnified if you don't notice that something is wrong right away, and if you make routine use of the system before you do. The longer the post-snapshot system has been running in normal use, the more user data, changes, and running programs you will have accumulated. The more that has accumulated, the more disruptive any rollback will be.

Given that you're balancing the disruption, loss, and risks of a rollback against the disruption, loss, and risks of whatever is wrong after the upgrade, it may not take long before living with (or fixing) the problem is the less disruptive option. A related issue is that if you can solve your problems by reverting to an older version of one or more packages, that's basically guaranteed to be less disruptive than a root filesystem rollback. This means that root filesystem rollbacks are only worthwhile in situations with a lot of changes all at once that you can't feasibly revert any other way, like distribution version upgrades. These are also the situations where keeping a snapshot around takes the most disk space, since so much changes.
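
Both of the major package systems can do this sort of selective downgrade directly; the package name and version here are just placeholders:

    # Debian/Ubuntu: install a specific older version that is still available
    apt install somepackage=1.2.3-1
    # Fedora: step back to the previous version in the repositories
    dnf downgrade somepackage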

(In addition, pragmatically, things don't go badly wrong with major upgrades all that often, especially if you wait a while before doing them to let other people discover the significant issues. And remember to read the upgrade notes.)

A very carefully managed system can avoid all of these problems. If you've moved all user data into a separate filesystem, change the system through automation (also stored in a separate filesystem), push logs to a separate area, do significant testing after an upgrade before putting things into production, and can reboot with low impact, rollbacks could work great. But this is not very much like typical Linux desktop systems; it's more like a "cattle" style cloud server. Very little in a typical Fedora, Debian, or Ubuntu system will help you naturally manage it this way.

(There are other situations where rollbacks are potentially useful. For example, you might have a test system with no user data, no important logs, and no particular manually maintained changes, one that you frequently try out chancy updates on. If everything on the system is basically expendable anyway, reverting to a snapshot is potentially the fastest way to return it to service.)
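
For an expendable test system like that, the whole cycle is only a couple of commands. A minimal sketch, assuming root-on-ZFS with the root filesystem on 'rpool/ROOT/default'; in practice you might want to do the rollback itself from a rescue environment rather than the running system:

    # before the chancy update
    zfs snapshot rpool/ROOT/default@pre-test
    # ...try the update, decide it's bad...
    # discard everything since the snapshot, then reboot
    zfs rollback -r rpool/ROOT/default@pre-test
    reboot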

Sidebar: Snapshots by themselves can be valuable

A snapshot by itself gives you an immediate, readily accessible backup of the state of things before you made the change. This can be useful, and sometimes quite valuable, even for ordinary package updates. For example, if you upgrade a package and the upgrade changes the format of its database in /var so that you can't simply revert to an older package version, a snapshot potentially lets you fish out the old pre-upgrade database and drop it into place to go with the older package version.
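
With ZFS, for example, the snapshot's contents are directly visible as files, so recovering the old database is just a copy; the snapshot name and path here are made-up examples:

    # snapshots appear read-only under .zfs/snapshot; put the
    # pre-upgrade database back to match the downgraded package
    cp -a /.zfs/snapshot/pre-upgrade/var/lib/example/example.db /var/lib/example/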


Comments on this page:

"Very little in a typical Fedora, Debian, or Ubuntu system will help you naturally manage it this way."

Ubuntu is working on the zsys system for helping with this:

Solaris also has 'boot environments' (BE), an idea which FreeBSD copied:

beadm excludes a bunch of paths from the BE; by default /tmp, stuff in /var, etc. What's interesting is that you can create a BE, update the BE from the currently-running, untouched system, and then reboot into the BE. For simple package/security updates of userland programs this may be excessive, but if the update includes a kernel upgrade and you have to reboot anyway, then it could be useful.

It may be that 'root rollbacks' are not worth it for you currently because of all the effort that's needed to get them to work with all the concerns you list. Having things a few commands away (zsysctl, beadm) may change the balance. Though for the current utilities to work, you have to be running root-on-ZFS, which is a whole other discussion about various trade-offs.
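
A rough sketch of that boot environment workflow on a FreeBSD system with beadm; the BE name is arbitrary, and the exact update commands depend on what you're upgrading (a chroot may need some extra setup, too):

    beadm create upgrade-test            # clone the current boot environment
    beadm mount upgrade-test /mnt        # mount the clone somewhere convenient
    chroot /mnt pkg upgrade              # update the clone, not the live system
    beadm umount upgrade-test
    beadm activate upgrade-test          # boot into the clone on next reboot
    shutdown -r now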

By remyabel at 2022-01-02 08:34:35:

You should probably clarify that you are talking about ext4 snapshots, which do take up significant disk space; btrfs snapshots, for example, do not. There is also Silverblue, which uses ostree. It's a much better system because, rather than being a snapshot of the entire system, it is a transaction-based system that's abstracted to the entire OS instead of just the package manager.

With that being said, I still don't use snapshots even on btrfs, because you still need a backup system anyway. Snapshots supposedly provide data consistency, which is true to an extent; if there are programs running during the snapshot, there can still be inconsistency.
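
On btrfs, for instance, a read-only snapshot of the root subvolume is nearly free to create and only consumes extra space as blocks diverge; this sketch assumes / is a btrfs subvolume and that a /.snapshots directory exists:

    # read-only snapshot of the root subvolume before an upgrade
    btrfs subvolume snapshot -r / /.snapshots/pre-upgrade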

Thanks for expanding.

What if you also made another snapshot right after the upgrade, so you’d have a reference for user data changes that happened since?

If – and I know this is a tall order – you could usefully diff against that, you could reapply those changes on top of the pre-upgrade snapshot and then be on your way, yes?

(Of course I’ve applied patches before, so I’m well aware that you might have to deal with what would basically be merge conflicts… So I guess the answer is that this would be how things would need to work, conceptually, but it’s not practical now nor maybe ever (depending on how likely conflicts would be and how painful). (So I guess the answer might be that yes, this is how it would have to work – and because that’s not feasible, neither is the use of snapshots.) Or is there a snag even conceptually?)
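
ZFS can at least produce the list of what changed between two snapshots, which is the raw material for such a merge; what to do about conflicting changes is still up to you. A sketch, assuming 'pre-upgrade' and 'post-upgrade' snapshots of a root dataset called 'rpool/ROOT/default':

    # list files created, modified, renamed, or removed between the snapshots
    zfs diff rpool/ROOT/default@pre-upgrade rpool/ROOT/default@post-upgrade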

By cks at 2022-01-02 14:07:17:

Even efficient reference-based, copy-on-write snapshot systems like those that ZFS and btrfs have (as I understand btrfs) take space proportional to the amount of change between the snapshot and the current state of the live filesystem. This is a fundamental requirement; the snapshot must be able to provide the old state, while the current filesystem must be able to provide the new state. This implies that if the old state and the new state differ significantly, you will need a large amount of disk space, approaching the sum of the snapshot and the live filesystem. One case where this is likely to happen is a distribution version upgrade that replaced almost everything with binaries, shared libraries, and so on that are either outright new versions or at least have been rebuilt with new compilers that shuffled and changed their contents.
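
You can watch this happening as an upgrade rewrites files; a quick way to see how much space snapshots are holding on ZFS:

    # USED is space held only by the snapshot (data the live
    # filesystem no longer references); it grows as files are replaced
    zfs list -t snapshot -o name,used,refer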

