An old Unix trick for saving databases

October 19, 2012

Suppose that you have a program with a significant in-memory database, or just in-memory state of some sort. You need to save or checkpoint the database every so often, but you have a big enough database that the program will pause visibly while you write everything out. In the modern era, people might reach for threads and start spraying locking over their in-memory data structures and so on. But in the old days you didn't have threads so you couldn't do it the hard way; instead, you had to do it the easy way.

The traditional easy Unix way of handling this is to just fork(). The parent process is the main process; it can continue on as normal, serving people and answering connections and so on. The child process takes as long as it needs to in order to write out the database and then quietly exits. The parent and child don't have to do any new locking of the database, since they each have their own logical copy.

(Because the database-saving child has to be deliberately created by the main process, the main process can generally guarantee that the database is in a consistent state before it fork()'s.)

This approach has the great advantage that it's generally dirt simple to implement (and then relatively bombproof in operation). You probably already have a routine to save the database when the program shuts down, so a basic version is to fork(), call that routine, and then exit. However, it has at least two disadvantages: one semi-recent, and one that's been there right from the start.

The semi-modern problem is that fork() generally doesn't play well with things like threading and various other forms of asynchronous activity. This wasn't a problem in the era that this trick dates from, because those things didn't yet really exist, but it may complicate trying to add this trick to a modern program. The always-present problem is that doing this with a big in-memory database and thus a big process has always given the virtual memory accounting system a little heart attack because you are, in theory, doubling the memory usage of a large program. As a result it's one of the hardest uses of fork() for strict overcommit, and it's not amenable to going away with an API change.

(After all, snapshotting the memory is the entire point of this trick.)
