It's a good idea to label all of the drives in your desktop
I was doing some drive shuffling with my office workstation today, so opening it up reminded me that when I originally built it up, I did the wise thing of putting labels on all of the drives in it (both the spinning rust hard drives and the SSDs). We generally don't label the drives themselves on modern servers, because most servers have their backplane or drive cables hardwired so that a drive in a given spot in the chassis is always going to be the same disk as Linux sees it. This isn't true on most desktops, where you get to run the cables yourself in any way that you want and then play the game of finding out what order your motherboard puts ports in (which is often not the order you expect; motherboards can be wired up in quite peculiar ways).
(As we've found out, there are good reasons to label the front of the server chassis with what disk is where and what a particular disk is for, especially if the disks aren't in a straightforward order. In some cases, you may want to keep a printed index of what drive is where. But that's separate from labeling the drive itself inside the carrier or chassis.)
We have a labelmaker (as should everyone), so that's what I use to label all of the drives. My current labeling practice is to mark each drive with the host it's in, its normal Linux disk name (like 'sda'), and what important filesystems (or ZFS pools, or both) are on the disk. I will also sometimes label drives as 'disk 0', 'disk 1', and so on. I have two goals with all of this labeling. When a drive is in my machine, I want to be able to see which drive it is, so that if I know that 'sdb' has died (or I want to replace it), I know what drive to uncable, remove, and so on. When I pull a drive out of my machine, either temporarily or permanently, I want to know where it came from and what it has or had on it, rather than have the drive wind up as yet another mysterious and anonymous drive sitting around (I have more than enough of those as it is).
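(One way to be sure the label you're writing matches the physical drive is to include the drive's serial number, which is printed on the manufacturer's sticker, and check it against what Linux reports. A minimal sketch of how I'd do that, assuming a reasonably standard Linux with util-linux installed:)

```shell
# List whole disks (not partitions) with their kernel name, model,
# and serial number; match the serial to the sticker on the drive
# to confirm that 'sdb' really is the drive you think it is.
lsblk -dno NAME,MODEL,SERIAL 2>/dev/null || true

# The same mapping via the stable symlinks udev maintains; these
# names embed the model and serial, so they stay correct even if
# recabling changes which drive comes up as sda or sdb.
ls -l /dev/disk/by-id/ 2>/dev/null || true
```

(This also suggests referring to drives by their /dev/disk/by-id names in things like fstab and mdadm configuration, since those names don't depend on cable or port order.)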
(I'm not entirely sure what my goal was with my 'disk 0' and 'disk 1' labels. I think I wanted to keep track of which position in a software RAID array the drive had occupied, not just which array it had been a part of.)
Much like labeling bad hardware, I should probably put an additional label on removed drives with the date that I pulled them out. If they failed, I should obviously label that too (sometimes I pull drives because I'm replacing them with better ones, which is the current case).
Unfortunately there's one sort of drive that you can't currently really label, and that's NVMe drives; unlike normal drives, they don't really have a case to put a label on. My new NVMe drives have a manufacturer's sticker over parts of the drive, but I don't want to put a label on top of any part of it for various reasons. Right now I'm just hoping that Linux and motherboards order NVMe drives in a sensible way (although I should check that).
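(Checking how Linux has ordered the NVMe drives is straightforward, since udev records which PCIe address each device sits at. A quick sketch, assuming the usual /dev/disk symlink farm and, optionally, the nvme-cli package:)

```shell
# The by-path symlinks encode the PCI address of each NVMe device,
# which tells you whether nvme0n1 is the drive in the first M.2
# slot or somewhere else entirely.
ls -l /dev/disk/by-path/ 2>/dev/null | grep -i nvme || true

# If nvme-cli is installed, this prints model and serial number per
# device, which can be matched against the manufacturer's sticker.
nvme list 2>/dev/null || true
```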
PS: I haven't been entirely good about this on my home machine. At some point I'll be shuffling disks around on it and I should make sure everything is fully labeled then.
(This entry elaborates on something I mentioned in passing at the bottom of my entry on labeling bad hardware. Since I was making new labels for some new drives today, the issue is on my mind.)