Understanding something about udev's normal network device names on Linux

July 12, 2021

For a long time, systemd's version of udev has attempted to give network interfaces what the systemd people call predictable or stable names. The current naming scheme is more or less documented in systemd.net-naming-scheme(7), with an older version in their Predictable Network Interface Names wiki page. To understand how the naming scheme is applied in practice by default, you also need to read the description of NamePolicy= in systemd.link(5), and inspect the default .link file, '99-default.link', which might be in either /lib/systemd/network or /usr/lib/systemd/network. It appears that the current network name policy is generally going to be "kernel database onboard slot path", possibly with 'keep' at the front in addition. In practice, on most servers and desktops, most network devices will be named based on their PCI slot identifier, using systemd's 'path' naming policy.
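
For concreteness, on a recent systemd the default .link file looks something like this (the exact contents vary between systemd versions and distributions):

    [Link]
    NamePolicy=keep kernel database onboard slot path
    MACAddressPolicy=persistent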

A PCI slot identifier is what ordinary 'lspci' will show you as the PCIe bus address. As covered in the lspci manpage, the fully general form of a PCIe bus address is <domain>:<bus>:<device>.<function>, and on many systems the domain is always 0000 and is omitted. Systemd turns this into what it calls a "PCI geographical location", which is (translated into lspci's terminology):

prefix [P<domain>] p<bus> s<device> [f<function>] [n<phys_port_name> | d<dev_port>]

The domain is omitted if it's 0 and the function is only present if it's a multi-function device. All of the numbers are in decimal, while lspci presents them in hex. For Ethernet devices, the prefix is 'en'.

(I can't say anything about the 'n' and 'd' suffixes because I've never seen them in our hardware.)
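
As an illustration of the hex versus decimal difference (with a made-up device), consider an Ethernet controller that lspci reports at bus address 0a:00.0:

    $ lspci -s 0a:00.0
    0a:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection

Its 'path' name would be enp10s0, because hex bus 0a is decimal bus 10 (and there's no function suffix because it's a single-function device).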

The device portion of the PCIe bus address is very frequently 0, because many Ethernet devices are behind PCIe bridges in the PCIe bus topology. This is how my office workstation is arranged, and how almost all of our servers are. The exceptions are all on bus 0, the root bus, which I believe means that they're directly integrated into the core chipset. This means that in practice the network device name primarily comes from the PCI bus number, possibly with a function number added. This gives 'path' based names of, eg, enp6s0 (bus 6, device 0) or enp1s0f0 and enp1s0f1 (bus 1, device 0, function 0 or 1; this is a dual 10G-T card, with each port being one function).
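
You can see this bridge and function structure with 'lspci -tv'. Here's an invented fragment showing a dual-port card on bus 1 behind a bridge at 00:03.1:

    $ lspci -tv
    -[0000:00]-+- [...]
               +-03.1-[01]--+-00.0  Intel Corporation Ethernet Controller 10-Gigabit X540-AT2
               |            \-00.1  Intel Corporation Ethernet Controller 10-Gigabit X540-AT2

The two functions would be named enp1s0f0 and enp1s0f1.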

(Onboard devices on servers and even desktops are often not integrated into the core chipset and thus not on PCIe bus 0. Udev may or may not recognize them as onboard devices and assign them 'eno<N>' names. Servers from good sources will hopefully have enough correct DMI and other information so that udev can do this.)
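
(One way to see whether the kernel has onboard identification for a PCI network device is to look for the sysfs attributes that udev consults. Here 'eno1' is just a hypothetical interface name:

    $ cat /sys/class/net/eno1/device/index        # from SMBIOS type 41
    $ cat /sys/class/net/eno1/device/acpi_index   # from an ACPI _DSM
    $ cat /sys/class/net/eno1/device/label        # firmware's label, if any

If the firmware doesn't provide this information, these attributes simply won't exist.)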

As always, the PCIe bus ordering doesn't necessarily correspond to what you think of as the actual order of hardware. My office workstation has an onboard Ethernet port on its ASUS Prime X370-Pro motherboard and an Intel 1G PCIe card, but they are (or would be) enp8s0 and enp6s0 respectively. So my onboard port has a higher PCIe bus number than the PCIe card.

There is an important consequence of this: systemd's default network device names are not stable if you change your hardware around, even if you don't touch the network card itself. Adding, changing, or relocating other hardware between physical PCIe slots can change your PCIe bus numbers (primarily if PCIe bridges are added or removed), and since the PCIe bus number is most of what determines the network interface name, the name changes with it.

(However, adding or removing hardware won't necessarily change existing PCIe bus addresses even if the changed hardware has a PCIe bridge. It all depends on your specific PCIe topology.)
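
If you want names that survive this sort of hardware reshuffling, one approach is a custom .link file that matches on something more durable than the PCI path. Here is a sketch, with a placeholder MAC address and an interface name I made up, that would go in something like /etc/systemd/network/10-lan0.link:

    [Match]
    MACAddress=00:11:22:33:44:55

    [Link]
    Name=lan0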

Sidebar: obtaining udev and PCIe topology information

Running 'udevadm info /sys/class/net/<something>' will give you a dump of what udev thinks and knows about any given network interface. The various ID_NET_NAME_* properties give you the various names that udev would assign based on that particular naming policy. The 'enp...' names are ID_NET_NAME_PATH, and on server hardware you may also see ID_NET_NAME_ONBOARD.
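
As an illustration, output for a hypothetical enp6s0 would include property lines roughly like this (heavily trimmed):

    $ udevadm info /sys/class/net/enp6s0
    [...]
    E: ID_NET_NAME_MAC=enx001122334455
    E: ID_NET_NAME_PATH=enp6s0
    [...]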

(The 'database' naming scheme comes from information in hwdb.)

On modern systems, 'lspci -PP' can be used to show the full PCIe path to a device (or all devices). On systems whose lspci is too old for this, such as Ubuntu 18.04, you can work through your PCIe topology with sysfs, in addition to 'lspci -tv'. See also my entry on PCIe bus addresses, lspci, and working out your PCIe bus topology.
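
For example, resolving the sysfs symlink for my hypothetical enp6s0 shows the PCIe bridge (here 0000:00:03.1) that the device sits behind:

    $ readlink -f /sys/class/net/enp6s0/device
    /sys/devices/pci0000:00/0000:00:03.1/0000:06:00.0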


Comments on this page:

From 45.72.234.93 at 2021-07-12 07:19:44:

Is systemd/udev more or less stable than the (Open)BSD convention of [driver name][0-9]?

By Vincent Bernat at 2021-07-12 08:49:39:

enoX interfaces are often on PCI bus 0 too. The difference is that they have a SMBIOS type 41 entry (dmidecode -t 41), or an ACPI _DSM entry (the PCI device gets an acpi_index entry in sysfs). Being hosted on the chipset or not does not matter.

By cks at 2021-07-12 10:15:33:

On our mix of Dell and SuperMicro servers (and on a couple of desktops), almost nothing has its onboard Ethernet interfaces on PCI bus 0, even when DMI information allows it to be recognized as onboard. There are a few machines (including my home desktop) with an Intel chipset where the single Ethernet port is on bus 0, but that's apparently it.

(The champions of 'not on PCI bus 0' are some SuperMicro servers where the onboard 10G-T Ethernets are all the way up on PCI bus 0x67.)

As far as the relative stability of the OpenBSD naming scheme goes, I don't know. The OpenBSD scheme has the advantage that if you mix different types of network hardware and you only have one of each, your numbering is probably more stable. But otherwise it has the usual PC hardware problem of deciding what is the 'first' device to start numbering from. If it bases this on PCI bus addresses and you add more of the same hardware in a way that gives it a lower PCI bus address, I would expect it to renumber on you.

(My office machine shows this is possible, since the Intel 1G PCIe card has a lower PCI bus address than the onboard port. If they both used the same driver and you added the card later, I would expect problems even on OpenBSD.)
