The history of booting Linux with software RAID
One of the broad developments in the Linux kernel's boot process over the past N years has been a steady move from having the kernel do things inside itself to having them done in user level code (which is typically run from an initramfs). The handling of software RAID arrays is no exception to this.
In the beginning, activating software RAID arrays at boot time was handled inside the kernel. At boot time the kernel (specifically the software RAID code) scanned all disk partitions of type fd ('Linux raid autodetect') and automatically assembled and activated any software RAID arrays that it found. Although there were a bunch of corner cases that this didn't handle, it worked great in most normal situations and meant that you could boot a 'root on software RAID' system without an initramfs (well, back then it was an initrd). Since this process happened entirely in the kernel, the contents of any mdadm.conf were irrelevant; all that mattered was that the partitions had the right type (and that they had valid RAID superblocks). In fact back in the old days many systems with software RAID had no mdadm.conf at all.
(I don't remember specific kernel versions any more, but I believe that most or all of the 2.4 kernel series could work this way.)
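To make this concrete: opting a partition into kernel autodetection was purely a matter of setting its MBR partition type byte. A sketch with a modern sfdisk (the disk and partition numbers here are hypothetical, and older sfdisk versions spelled this option differently):

```shell
# Mark partition 1 on each disk as type 0xfd ("Linux raid autodetect").
# Hypothetical devices; adjust to your actual disks.
sfdisk --part-type /dev/sda 1 fd
sfdisk --part-type /dev/sdb 1 fd

# On the next boot, the kernel's software RAID code would scan these
# partitions, find their RAID superblocks, and assemble and activate
# the array entirely on its own -- no userspace tools, no mdadm.conf.
```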
The first step away from this was to have software RAID arrays assembled in the initrd environment by explicitly running mdadm from the /init script, using a copy of mdadm.conf that was embedded in the initrd image. I believe that the disk partition type no longer mattered (since mdadm would normally probe all devices for RAID superblocks). It was possible to have explosive failures if your mdadm.conf did not completely match the state of critical RAID arrays.
(I don't know if this stage would assemble RAID arrays not listed in your mdadm.conf and I no longer have any systems I could use to check this.)
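The assembly step in such an /init script was typically a single mdadm invocation. A minimal sketch, not taken from any particular distribution's initrd (the config path is illustrative):

```shell
# Excerpt from a hypothetical initrd /init script.
# Assemble every array listed in the config file that was embedded
# into the initrd image when it was built.
mdadm --assemble --scan --config=/etc/mdadm.conf

# With the arrays assembled, /init can go on to mount the root
# filesystem from (say) /dev/md0 and switch to the real root.
```

This is where the 'explosive failure' risk came from: if the embedded mdadm.conf had drifted out of sync with the real state of your arrays, this one command could fail to assemble the array your root filesystem lived on.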
The next stage of moving boot time handling of software RAID out of the kernel is the situation we have today. As I described recently, a modern Linux system does all assembly of software RAID arrays asynchronously through udev (along with a great deal of other device discovery and handling). In order to have all of this magical udev device handling happen in the initramfs environment too, your initramfs starts an instance of udev quite early on and this instance is used to process boot-time device events and so on. This instance uses a subset of the regular rules for processing events, generally covering only what are considered important devices for booting your system. As we've seen, this process of assembling software RAID arrays is generally indifferent to whether or not the arrays are listed in mdadm.conf; I believe (but have not tested) that it also doesn't care about the partition type.
(I think that the udev process that the initramfs starts is later terminated and replaced by a udev process started during real system boot.)
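The udev-driven assembly works through rules along these general lines. This is a simplified sketch of the kind of rule mdadm ships (as 64-md-raid-assembly.rules), not a verbatim copy of it:

```
# Simplified sketch of a udev rule for incremental RAID assembly.
# When a new block device appears and blkid has identified it as a
# software RAID member, hand it to "mdadm --incremental"; mdadm adds
# it to a partially-assembled array and starts the array once enough
# members have shown up.
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```

Because the trigger is the RAID superblock that blkid detects on the device, not the partition type and not an mdadm.conf entry, this matches the 'indifferent' behavior described above.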