Understanding something about udev's normal network device names on Linux
For a long time, systemd's version of udev has attempted to give
network interfaces what the systemd people call predictable or
stable names. The current naming scheme is more or less documented
in systemd.net-naming-scheme,
with an older version in their Predictable Network Interface Names
wiki page. To understand how the naming scheme is applied in
practice by default, you also need to read the description of
NamePolicy= in systemd.link(5), and
inspect the default .link file, '99-default.link', which might be
in either /lib/systemd/network or /usr/lib/systemd/network/. It
appears that the current network name policy is generally going to
be "kernel database onboard slot path", possibly with 'keep' at the
front in addition. In practice, on most servers and desktops, most
network devices will be named based on their PCI slot identifier,
using systemd's 'path' naming policy.
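For illustration, on a recent systemd the shipped 99-default.link looks roughly like the following; the exact contents vary by systemd version and distribution, so check your own copy rather than trusting this snippet:

  [Link]
  NamePolicy=keep kernel database onboard slot path
  AlternativeNamesPolicy=database onboard slot path
  MACAddressPolicy=persistent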
A PCI slot identifier is what ordinary 'lspci' will show you as
the PCIe bus address. As covered in the lspci manpage, the fully
general form of a PCIe bus address is <domain>:<bus>:<device>.<function>,
and on many systems the domain is always 0000 and is omitted. Systemd
turns this into what it calls a "PCI geographical location", which is
(translated into lspci's terminology):
prefix [P<domain>] p<bus> s<device> [f<function>] [n<phys_port_name> | d<dev_port>]
The domain is omitted if it's 0 and the function is only present
if it's a multi-function device. All of the numbers are in decimal,
while lspci presents them in hex. For Ethernet devices, the prefix
is 'en'.
(I can't say anything about the 'n' and 'd' suffixes because I've
never seen them in our hardware.)
The device portion of the PCIe bus address is very frequently 0, because many Ethernet devices are behind PCIe bridges in the PCIe bus topology. This is how my office workstation is arranged, and how almost all of our servers are. The exceptions are all on bus 0, the root bus, which I believe means that they're directly integrated into the core chipset. This means that in practice the network device name primarily comes from the PCI bus number, possibly with a function number added. This gives 'path' based names of, eg, enp6s0 (bus 6, device 0) or enp1s0f0 and enp1s0f1 (bus 1, device 0, function 0 or 1; this is a dual 10G-T card, with each port being one function).
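As a concrete illustration of the hex to decimal translation, here is a rough Python sketch (not udev's actual code) that turns an lspci style PCIe bus address into this simple form of 'path' name. It only covers the common case; it skips the 'n' and 'd' suffixes and, since it can't tell from the address alone whether a device is multi-function, it only adds the 'f' part for a non-zero function.

  #!/usr/bin/python3
  # A simplified illustration of systemd's 'path' style names for PCI
  # Ethernet devices. Not udev's actual code; it only covers the simple
  # common case and skips the 'n'/'d' suffixes entirely.
  import re

  def pci_path_name(bdf, prefix="en"):
      # bdf is an lspci style address, eg "0000:06:00.0" or "06:00.0".
      m = re.fullmatch(r'(?:([0-9a-fA-F]{4}):)?'
                       r'([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])', bdf)
      if not m:
          raise ValueError("not a PCIe bus address: " + bdf)
      domain, bus, dev, func = m.groups()
      name = prefix
      if domain and int(domain, 16) != 0:
          name += "P%d" % int(domain, 16)
      name += "p%ds%d" % (int(bus, 16), int(dev, 16))
      if int(func) != 0:
          # Real udev adds 'f<function>' for any multi-function device,
          # including function 0; we can't tell that from the address.
          name += "f%d" % int(func)
      return name

  print(pci_path_name("06:00.0"))       # enp6s0
  print(pci_path_name("0000:01:00.1"))  # enp1s0f1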
(Onboard devices on servers and even desktops are often not integrated
into the core chipset and thus not on PCIe bus 0. Udev may or may
not recognize them as onboard devices and assign them 'eno<N>'
names. Servers from good sources will hopefully have enough correct
DMI and other information so that udev can do this.)
As always, the PCIe bus ordering doesn't necessarily correspond to what you think of as the actual order of hardware. My office workstation has an onboard Ethernet port on its ASUS Prime X370-Pro motherboard and an Intel 1G PCIe card, but they are (or would be) enp8s0 and enp6s0 respectively. So my onboard port has a higher PCIe bus number than the PCIe card.
There is an important consequence of this, which is that systemd's default network device names are not stable if you change your hardware around, even if you didn't touch the network card itself. Changing your hardware around can change your PCIe bus numbers, and since the PCIe bus number is most of what determines the network interface name, it will change. You don't have to touch your actual network card for this to happen; adding, changing, or relocating other hardware between physical PCIe slots can trigger changes in bus addresses (primarily if PCIe bridges are added or removed).
(However, adding or removing hardware won't necessarily change existing PCIe bus addresses even if the hardware changed has a PCIe bridge. It all depends on your specific PCIe topology.)
Sidebar: obtaining udev and PCIe topology information
Running 'udevadm info /sys/class/net/<something>' will give you
a dump of what udev thinks and knows about any given network
interface. The various ID_NET_NAME_* properties give you the
various names that udev would assign based on that particular
naming policy. The 'enp...' names are ID_NET_NAME_PATH, and on
server hardware you may also see ID_NET_NAME_ONBOARD.
(The 'database' naming scheme comes from information in hwdb.)
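As a sketch of how you might pull these properties out programmatically instead of eyeballing the full dump, here is a bit of Python that runs 'udevadm info' and prints the ID_NET_NAME_* properties it reports; 'enp6s0' is merely an example interface name.

  #!/usr/bin/python3
  # Print the ID_NET_NAME_* properties udev has for a network interface,
  # parsed out of 'udevadm info' output (property lines start with 'E: ').
  import subprocess
  import sys

  def net_name_props(iface):
      out = subprocess.run(["udevadm", "info", "/sys/class/net/" + iface],
                           capture_output=True, text=True, check=True).stdout
      props = {}
      for line in out.splitlines():
          if line.startswith("E: ID_NET_NAME_"):
              key, _, val = line[3:].partition("=")
              props[key] = val
      return props

  iface = sys.argv[1] if len(sys.argv) > 1 else "enp6s0"
  for key, val in sorted(net_name_props(iface).items()):
      print(key, "=", val)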
On modern systems, 'lspci -PP' can be used to show the full PCIe
path to a device (or all devices). On Ubuntu 18.04, you can also
use sysfs to work through your PCIe topology, in addition to
'lspci -tv'. See also my entry on PCIe bus addresses, lspci, and
working out your PCIe bus topology.
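One simple sysfs based approach is to resolve the /sys/class/net/<interface> symlink; the resolved path walks down through every PCI(e) bridge above the device. A minimal sketch, again with a purely illustrative interface name and output:

  #!/usr/bin/python3
  # Show where a network interface sits in the PCIe topology by resolving
  # its sysfs symlink. Each 'dddd:bb:dd.f' component in the result is a
  # PCI(e) bridge or device on the way down to the interface.
  import os
  import sys

  iface = sys.argv[1] if len(sys.argv) > 1 else "enp6s0"
  print(os.path.realpath("/sys/class/net/" + iface))
  # Illustrative output:
  #   /sys/devices/pci0000:00/0000:00:03.1/0000:06:00.0/net/enp6s0
  # ie the device 0000:06:00.0 is behind the bridge at 0000:00:03.1 on
  # the root bus.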