In Fedora, your initramfs contains a copy of your sysctl settings
It all started when I discovered that my office workstation had wound up with its maximum PID value set to a very large number (as mentioned in passing in this entry). I managed to track this down to a sysctl.d file from Fedora's ceph-osd RPM package, which I had installed for reasons that are not entirely clear to me. That was straightforward. So I removed the package, along with all of the other ceph packages, and rebooted for other reasons. To my surprise, this didn't change the setting; I still had a kernel.pid_max value of 4194304. A bunch of head scratching ensued, including extreme measures like downloading and checking the Fedora systemd source. In the end, the culprit turned out to be my initramfs.
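As a sketch of this kind of hunt (these are illustrative commands, not necessarily the exact ones I ran), you can ask sysctl for the live value and then search the standard sysctl.d locations to see what claims to set it:

```shell
# Show the live value of the setting in question.
sysctl kernel.pid_max

# Search the places that systemd-sysctl reads for anything that sets it
# ('-s' quietly skips locations that don't exist on this system).
grep -rs pid_max /etc/sysctl.conf /etc/sysctl.d /run/sysctl.d /usr/lib/sysctl.d
```

In my case the second command is what pointed at the ceph-osd package's sysctl.d file; the surprise was that removing it wasn't enough.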
In Fedora, dracut copies sysctl.d files into your initramfs when it builds one (generally when you install a kernel update), and there's nothing that forces an update or rebuild of your initramfs when something modifies what sysctl.d files the system has or what they contain. Normally this is relatively harmless; you will have sysctl settings applied in the initramfs and then reapplied when sysctl runs a second time as the system is booting from your root filesystem:

- If you added new sysctl.d files or settings, they won't be in the initramfs but they'll get set the second time around.
- If you changed sysctl settings, the initramfs versions of the sysctl.d files will set the old values, but then your updated settings will get set the second time around.
- But if you removed settings, nothing can fix that up; the old initramfs version of your sysctl.d file will apply the setting, and nothing will override it later.
(In Fedora 27's Dracut, this is done by a core systemd related Dracut module in /usr/lib/dracut/modules.d, 00systemd/module-setup.sh.)
It's my view that this behavior is dangerous. As this incident and others have demonstrated, any time that normal system files get copied into initramfs, you have the chance that the live versions will get out of sync with the versions in initramfs and then you can have explosions. The direct consequence of this is that you should strive to put as little in initramfs as possible, in order to minimize the chances of problems and confusion. Putting a frozen copy of sysctl.d files into the initramfs is not doing this. If there are sysctl settings that have to be applied in order to boot the system, they should be in a separate, clearly marked area and only that area should go in the initramfs.
(However, our Ubuntu 16.04 machines don't have sysctl.d files in their initramfs, so this behavior isn't universal and probably isn't required by either systemd or booting in general.)
Since that's not likely to happen any time soon, I guess I'm just going to have to remember to rebuild my initramfs any time I remove a sysctl setting. More broadly, I should probably adopt the habit of preemptively rebuilding my initramfs any time something inexplicable is going on, because that might be where the problem is. Or at least I should check what the initramfs contains, just in case Fedora's dracut setup has decided to capture something.
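Concretely, the checking and rebuilding look something like this (a sketch assuming a standard Fedora layout; the image name and location can vary):

```shell
# See whether (and which) sysctl.d files are baked into the current
# kernel's initramfs.
lsinitrd /boot/initramfs-$(uname -r).img | grep sysctl.d

# Rebuild the initramfs for the current kernel so that it captures the
# current state of the system's sysctl.d files (this needs root).
dracut --force /boot/initramfs-$(uname -r).img $(uname -r)
```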
(It's my opinion that another sign that this is a bad idea in general is that there's no obvious package to file a bug against. Who is at fault? As far as I know there's no mechanism in RPM to trigger an action when files in a certain path are added, removed, or modified, and anyway you don't necessarily want to rebuild an initramfs by surprise.)
PS: For extra fun you actually have multiple initramfses; you have one per installed kernel. Normally this doesn't matter because you're only using the latest kernel and thus the latest initramfs, but if you have to boot an earlier kernel for some reason the files captured in its initramfs may be even more out of date than you expect.
Some questions I have about DDR4 RAM speed and latency in underclocked memory
Suppose, not hypothetically, that you're putting together an Intel Core i7 based machine, specifically an i7-8700, and you're not planning to overclock. All Coffee Lake CPUs have an officially supported maximum memory rate of 2666 MHz (regardless of how many DIMMs or what sort of DIMM they are, unlike Ryzens), so normally you'd just buy some suitable DDR4 2666 MHz modules. However, suppose that the place you'd be ordering from is out of stock on the 2666 MHz CL15 modules you'd normally get, but has faster ones, say 3000 MHz CL15, for essentially the same price (and these modules are on the motherboard's qualified memory list).
At this point I have a bunch of questions, because I don't know what you can do if you use these higher speed DDR4-3000 CL15 DIMMs in a system. I can think of a number of cases that might be true:
- The DIMMs operate as DDR4-2666 CL15 memory. Their faster speed does
nothing for you now, although with a future CPU and perhaps a future
motherboard they would speed up.
(Alternately, perhaps underclocking the DIMMs has some advantage, maybe slightly better reliability or slightly lower power and heat.)
- The DIMMs can run at 2666 MHz but at a lower latency, say CL14, since DDR4-3000 CL15 has an absolute time latency of 10.00 ns and 2666 MHz CL14 is over that at 10.5 ns (if I'm doing the math right).
This might require activating an XMP profile in the BIOS, or it might happen automatically if what matters to this stuff is the absolute time involved, not the nominal CLs. However, according to the Wikipedia entry on CAS latency, synchronous DRAM cares about the clock cycles involved, so CL15 might really be CL15 even when you're underclocking your memory. DDR4 is synchronous DRAM.
- The DIMMs can run reliably at memory speeds faster than 2666 MHz,
perhaps all the way up to their rated 3000 MHz; this doesn't count
as CPU overclocking and is fine on the non-overclockable i7-8700.
(One possibility is that any faster than 2666 MHz memory listed on the motherboard vendor's qualified memory list is qualified at its full speed and can be run reliably at that speed, even on ordinary non-overclockable i7 CPUs. That would be nice, but I'm not sure I believe the PC world is that nice.)
- The system can be 'overclocked' to run the DIMMs faster than 2666
MHz (but perhaps not all the way to the rated 3000 MHz), even on
an i7-8700. However, this is actual overclocking of the overall
system (despite it being within the DIMMs' speed rating), is not
necessarily stable, and the usual caveats apply.
- You need an overclockable CPU such as an i7-8700K in order to run memory any faster than the officially supported 2666 MHz. You might still be able to run DDR4-3000 CL15 at 2666 MHz CL14 instead of CL15 on a non-overclockable CPU, since the memory frequency is not going up, the memory is just responding faster.
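The latency arithmetic behind these cases can be double-checked quickly. The usual convention is that 'DDR4-3000' means 3000 MT/s on a 1500 MHz I/O clock, so one CL cycle takes 2000/rate nanoseconds:

```shell
# Absolute CAS latency in ns is CL cycles times the cycle time; for DDR4
# the cycle time in ns is 2000 / (rated MT/s), because the clock runs at
# half the transfer rate.
awk 'BEGIN {
    printf "DDR4-3000 CL15: %.2f ns\n", 15 * 2000 / 3000
    printf "DDR4-2666 CL14: %.2f ns\n", 14 * 2000 / 2666
    printf "DDR4-2666 CL15: %.2f ns\n", 15 * 2000 / 2666
}'
```

This gives 10.00 ns, 10.50 ns, and 11.25 ns respectively, which matches the figures above: DDR4-3000 CL15 parts are rated for an absolute latency that 2666 MHz CL14 comfortably exceeds.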
Modern DIMMs apparently generally come with XMP profile(s) (see also the wikichip writeup) that let suitable BIOSes more or less automatically run them at their official rated speed, instead of the official JEDEC DDR4 standard speeds. Interestingly, based on the Wikipedia JEDEC table even DDR4-2666 CL15 is not JEDEC standard; the fastest DDR4-2666 CL the table lists is CL17. This may mean that turning on an XMP profile is required merely to get 2666 MHz CL15 even with plain standard DDR4-2666 CL15 DIMMs. That would be weird, but PCs are full of weird things. One interesting potential consequence of this could be that if you have DDR4-3000 CL15 DIMMs, you can't easily run them at 2666 MHz CL15 instead of 2666 MHz CL17, because the moment you turn on XMP they'll go all the way up to their rated 3000 MHz CL15.
(I learn something new every time I write an entry like this.)
PS: People say that memory speed isn't that important, but I'm not sure I completely believe them and anyway, if I wind up with DIMMs rated for more than 2666 MHz I'd like to know what they're good for (even if the answer is 'nothing except being available now instead of later'). And if one can reliably get somewhat lower latency and faster memory for peanuts, well, it's at least a bit tempting.