== The history of booting Linux with software RAID

One of the broad developments in the Linux kernel's boot process over the past N years has been a steady move from having the kernel do things inside itself to having them done in user level code (which is typically run from an initramfs). The handling of software RAID arrays is no exception to this.

In the beginning, activating software RAID arrays at boot time was handled inside the kernel. At boot time the kernel (specifically the software RAID code) scanned all disk partitions of type _fd_ ('Linux raid autodetect') and automatically assembled and activated any software RAID arrays that it found. Although there were a bunch of corner cases that this didn't handle, it worked great in most normal situations and meant that you could boot a 'root on software RAID' system without an initramfs (well, back then it was an initrd). Since this process happened entirely in the kernel, the contents of any _mdadm.conf_ were irrelevant; all that mattered was that the partitions had the right type (and that they had valid RAID superblocks). In fact, back in the old days many systems with software RAID had no _mdadm.conf_ at all.

(I don't remember specific kernel versions any more, but I believe that most or all of the 2.4 kernel series could work this way.)

The first step away from this was to have software RAID arrays assembled in the initrd environment by explicitly running _mdadm_ from the _/init_ script, using a copy of _mdadm.conf_ that was embedded in the initrd image. I believe that the disk partition type no longer mattered (since _mdadm_ would normally probe all devices for [[RAID superblocks SoftwareRaidSuperblockFormats]]). It was possible to have [[explosive failures RaidGrowthGotcha]] if your _mdadm.conf_ did not completely match the state of critical RAID arrays.

(I don't know if this stage would assemble RAID arrays not listed in your _mdadm.conf_, and I no longer have any systems I could use to check this.)

The next stage of moving boot-time handling of software RAID out of the kernel is the situation we have today. [[As I described recently Ubuntu1204SoftwareRaidFail]], a modern Linux system does all assembly of software RAID arrays asynchronously through _udev_ (along with a great deal of other device discovery and handling). In order to have all of this magical _udev_ device handling happen in the initramfs environment too, your initramfs starts an instance of _udev_ quite early on and this instance is used to process boot-time device events and so on. This instance uses a subset of the regular rules for processing events, generally covering only the devices considered important for booting your system. As we've seen, this process of assembling software RAID arrays is generally indifferent to whether or not the arrays are listed in _mdadm.conf_; I believe (but have not tested) that it also doesn't care about the partition type.

(I think that the _udev_ process that the initramfs starts is later terminated and replaced by a _udev_ process started during real system boot.)
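
As a concrete illustration of the middle, initrd-era stage described above, here is a rough sketch of what the relevant part of an initrd's _/init_ script did. This is hedged and approximate; the exact script, the path of the embedded _mdadm.conf_, and the device names all varied between distributions and are assumptions here, not taken from any particular distribution's real initrd.

    # Hypothetical fragment of an initrd /init script (details assumed).
    #
    # The initrd image carries its own embedded copy of mdadm.conf,
    # with contents along the lines of:
    #   DEVICE partitions
    #   ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
    # (the UUID here is purely illustrative)

    # Explicitly assemble the arrays listed in that embedded mdadm.conf,
    # probing devices for RAID superblocks regardless of partition type.
    mdadm --assemble --scan --config=/etc/mdadm.conf

    # With the root array assembled, mount it and hand off to the
    # real system.
    mount -o ro /dev/md0 /root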
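
For the modern, _udev_-driven stage, the essence of what happens can be approximated as shell commands, again as a hedged sketch; the real work is done by udev rules that ship with _mdadm_ and your distribution, and the exact rule files and options are more involved than this.

    # When udev sees a new block device that blkid identifies as a
    # software RAID member (ID_FS_TYPE of "linux_raid_member"), its
    # rules run roughly the equivalent of:
    mdadm --incremental /dev/sda1

    # Each such call adds one more member to a partially assembled
    # array; once enough members have appeared, mdadm starts the array.
    # Nothing in this process looks at the partition type and it does
    # not normally require the array to be listed in mdadm.conf.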