Console blanking now defaults to off on Linux (and has for a while)
For a long time, if you left a Linux machine sitting idle at a text console, for example on a server, the kernel would blank the display after a while. Years ago I wrote an entry about how you wanted to turn this off on your Linux servers, where at the time the best way to do this was a kernel parameter. For reasons beyond the scope of this entry, I recently noticed that we were not setting this kernel parameter on our Ubuntu 18.04 servers yet I knew that they weren't blanking their consoles.
(Until I looked at their /proc/cmdline, I thought we had just set 'consoleblank=0' as part of their standard kernel command line.)
It turns out that the kernel's default behavior here changed back in 2017, ultimately due to this Ubuntu bug report. That bug led to this kernel change (which has a nice commit message explaining everything), which took it from an explicit ten minutes to implicitly being disabled (a C global variable without an explicit initializer is zero). Based on some poking at the git logs, it appears that this was introduced in 4.12, which means that it's in Ubuntu 18.04's kernel but not 16.04's.
(You can tell what the current state of this timeout is on any given machine by looking at /sys/module/kernel/parameters/consoleblank. It's 0 if this is disabled, and otherwise the number of seconds before the text console blanks.)
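As an illustration, here is a small Python sketch of interpreting that sysfs value; the helper name is my own invention, not anything standard:

```python
def describe_consoleblank(raw: str) -> str:
    """Interpret the contents of /sys/module/kernel/parameters/consoleblank.

    The file holds a single integer: 0 means console blanking is
    disabled, anything else is the timeout in seconds before the
    text console blanks.
    """
    seconds = int(raw.strip())
    if seconds == 0:
        return "console blanking is disabled"
    return f"console blanks after {seconds} seconds"

# On a real machine you would read the value from sysfs:
#   with open("/sys/module/kernel/parameters/consoleblank") as f:
#       print(describe_consoleblank(f.read()))
print(describe_consoleblank("0"))    # the modern (4.12+) kernel default
print(describe_consoleblank("600"))  # the old explicit ten-minute default
```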
We have remaining Ubuntu 16.04 machines but they're all going away within a few months (one way or another), so it's not worth fixing their console blanking situation now that I've actually noticed it. Working from home due to ongoing events makes that a simpler choice, since if a machine locks up we're not going to go down to the machine room to plug in a monitor and look at its console; we're just going to remotely power cycle it as the first step.
(Our default kernel parameters tend to have an extremely long lifetime. We're still automatically setting a kernel parameter to deal with a problem we ran into in Ubuntu 12.04. At this point I have no idea if that old problem still happens on current kernels, but we might as well leave it there just in case.)
It's possible that the real size of different SSDs is now consistent
Let's start with what I tweeted:
Our '2 TB' SSDs seem to have a remarkably consistent size as reported by Linux, unlike past HDs (3907029168 512-byte sectors). I wonder if this is general or just luck (or if WD and Crucial/Micron are closely connected).
Back in the day, one of the issues that sysadmins faced in redundant storage setups was that different models of hard drives of the same nominal size (such as 2 TB or 750 GB) might not have exactly the same number of sectors. This could cause you heartburn if you had let your storage system use all of the sectors on the drive, and then it failed and you had to replace it with a different model that might be rated as '2 TB' but had a few fewer sectors than the previous drive. To deal with this in our fileservers, we use a carefully calculated scheme based on using no more than the advertised amount of drive space.
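The core of that scheme is just arithmetic: cap your usage at the advertised decimal capacity rather than at whatever sector count a particular drive happens to report. A hypothetical sketch (the function name is mine, not our actual tooling):

```python
def max_safe_sectors(advertised_tb: int, sector_size: int = 512) -> int:
    """Largest sector count you can rely on from an 'X TB' label.

    Drive makers advertise decimal units, so a '2 TB' drive promises
    at least 2 * 10**12 bytes; any sectors beyond that are a bonus
    that a replacement drive of a different model might not have.
    """
    advertised_bytes = advertised_tb * 10**12
    return advertised_bytes // sector_size

# A '2 TB' drive promises at least this many 512-byte sectors:
safe = max_safe_sectors(2)
print(safe)  # 3906250000

# Our actual 2 TB SSDs report 3907029168 sectors, comfortably above that:
print(3907029168 - safe)  # 779168 spare sectors
```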
But, well, now that we're using SSDs it's not clear if that's necessary any more. I have convenient access to '2 TB' SSDs from Crucial/Micron and WD, and somewhat to my surprise all of them have exactly the same size in sectors. That all of the different models of Crucial and Micron 2 TB SSDs are exactly the same size is not too surprising, because they're the same company. That WD SSDs are also the same size is a bit surprising; for HDs, I would have expected some differences.
At this point I don't know if this is just a coincidence or if it's generally the case that most or all X-size SSDs from major providers will have the same underlying size. If I were energetic, I would try to see if someone had a (Linux) database of SSD models and their exact reported number of sectors, perhaps gathered as part of getting a database of general SMART information for various drives (the smartmontools website doesn't seem to have any pointers to such a thing).
If this is really the case for SSDs, one of the things that may be going on is that SSDs are made from much more uniform underlying hardware than HDs were, since I believe the actual flash memory chips come in very standard sizes (all powers of two as far as I know). There's no room for tweaking sector or track density on magnetic platters any more; you get what you get (although the amount of space taken by error checking codes may still vary). However, this is probably not the full story.
On the one hand, all SSDs are over-provisioned on flash memory by some amount and you might expect different companies to pick different amounts of over-provisioning. On the other hand, there is very little advantage in consumer drives to having a little bit more extra space than your competitors, because you are still going to round it down to a nice even number for marketing. Possibly everyone is just copying the amount of space that the first person to sell a X-size SSD picked, because there is no reason not to and it makes everyone's life slightly easier.
(The reported size in sectors is also a little bit odd for our 2 TB SSDs; it comes out to about 398 decimal MB extra over and above decimal 2 TB.)
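For the curious, the arithmetic behind that parenthetical works out as follows:

```python
SECTOR_SIZE = 512
reported_sectors = 3907029168  # what Linux reports for our 2 TB SSDs

total_bytes = reported_sectors * SECTOR_SIZE
extra_bytes = total_bytes - 2 * 10**12  # over and above decimal 2 TB

print(total_bytes)          # 2000398934016 bytes
print(extra_bytes / 10**6)  # ~398.9 decimal MB extra
```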