2024-11-01
Notes on the compatibility of crypted passwords across Unixes in late 2024
For years now, all sorts of Unixes have been able to support better
password 'encryption' schemes than the basic old crypt(3)
salted-mutant-DES approach that Unix started with (these days it's
usually called 'password hashing'). However, the support for specific
alternate schemes varies from Unix to Unix, and has for many years.
Back in 2010 I wrote some notes on the situation at the time; today I want to look at the situation
again, since password hashing is on my mind right now.
The most useful resource for cross-Unix password hash compatibility is Wikipedia's comparison table. For Linux, support varies by distribution based on their choice of C library and what version of libxcrypt they use; you can usually see a list in crypt(5), although pam_unix may not support all of them for new passwords. For FreeBSD, support is documented in crypt(3). In OpenBSD, this is documented in crypt(3) and crypt_newhash(3), although there isn't much to read since current OpenBSD only lists support for 'Blowfish', which for password hashing is also known as bcrypt. On Illumos, things are more or less documented in crypt(3), crypt.conf(5), crypt_unix(7), and associated manual pages; the Illumos section 7 index provides one way to see what seems to be supported.
System administrators not infrequently wind up wanting cross-Unix compatibility of their local encrypted passwords. If you don't care about your shared passwords working on OpenBSD (or NetBSD), then the 'sha512' scheme is your best bet; it basically works everywhere these days. If you do need to include OpenBSD or NetBSD, you're stuck with bcrypt, and even then there may be problems because bcrypt is actually several schemes, as Wikipedia covers.
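If you want to see what a given scheme's hashes look like, or generate one by hand for testing, OpenSSL's passwd subcommand can produce sha512-crypt hashes on most systems. This is only a sketch; the salt and password here are made-up example values:

```shell
# Generate a sha512-crypt password hash by hand. The password
# 'examplepw' and the salt 'abcdefgh' are made-up illustrations.
hash=$(openssl passwd -6 -salt abcdefgh examplepw)
echo "$hash"

# sha512-crypt hashes always start with the '$6$' scheme identifier,
# which is how crypt(3) implementations recognize them.
case "$hash" in
    '$6$'*) echo "looks like sha512-crypt" ;;
    *)      echo "unexpected scheme" ;;
esac
```

The same idea works for bcrypt if you have a tool that generates '$2b$' hashes, but OpenSSL itself doesn't do bcrypt.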
Some recent Linux distributions seem to be switching to 'yescrypt' by default (including Debian, which means downstream distributions like Ubuntu have also switched). Yescrypt in Ubuntu is now old enough that it's probably safe to use in an all-Ubuntu environment, although your mileage may vary if you have 18.04 or earlier systems. Yescrypt is not yet available in FreeBSD and may never be added to OpenBSD or NetBSD (my impression is that OpenBSD is not a fan of having lots of different password hashing algorithms and prefers to focus on one that they consider secure).
(Compared to my old entry, I no longer particularly care about the non-free Unixes, including macOS. Even Wikipedia doesn't bother trying to cover AIX. For our local situation, we may someday want to share passwords to FreeBSD machines, but we're very unlikely to care about sharing passwords to OpenBSD machines since we currently only use them in situations where having their own stand-alone passwords is a feature, not a bug.)
2024-10-23
Doing basic policy based routing on FreeBSD with PF rules
Suppose, not hypothetically, that you have a FreeBSD machine with two interfaces, and these two interfaces are reached through different firewalls. You would like to ping both of the interfaces from your monitoring server because both of them matter for the machine's proper operation, but to make this work you need replies to your pings to be routed out the right interface on the FreeBSD machine. This is broadly known as policy based routing and is often complicated to set up. Fortunately FreeBSD's PF supports a basic form of this, although it's not well explained in the FreeBSD pf.conf manual page.
To make our FreeBSD machine reply properly to our monitoring machine's ICMP pings, or in general to its traffic, we need a stateful 'pass' rule with a 'reply-to':
B_IF="emX"
B_IP="10.x.x.x"
B_GW="10.x.x.254"
B_SUBNET="10.x.x.0/24"

pass in quick on $B_IF \
   reply-to ($B_IF $B_GW) \
   inet from ! $B_SUBNET to $B_IP \
   keep state
(Here $B_IP is the machine's IP on this second interface, and we also need the second interface, the gateway for the second interface's subnet, and the subnet itself.)
As I discovered, you must put the 'reply-to' where it is here, although as far as I can tell the FreeBSD pf.conf manual page will only tell you that if you read the full BNF. If you put it at the end, the way you might read the text description, you will get only opaque syntax errors.
We must specifically exclude traffic from the subnet itself to us, because otherwise this rule will faithfully send replies to other machines on the same subnet off to the gateway, which either won't work well or won't work at all. You can restrict the PF rule more narrowly, for example 'from { IP1 IP2 IP3 }' if those are the only off-subnet IPs that are supposed to be talking to your secondary interface.
(You may also want to match only some ports here, unless you want to give all incoming traffic on that interface the ability to talk to everything on the machine. This may require several versions of this rule, basically sticking the 'reply-to ...' bit into every 'pass in quick on ...' rule you have for that interface.)
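For example, a port-restricted variant of the rule might look like the following (the ports here are made-up examples; you'd use whatever services actually live on the secondary interface, and you'd still want a separate rule for ICMP if you ping it):

```
pass in quick on $B_IF \
   reply-to ($B_IF $B_GW) \
   inet proto tcp from ! $B_SUBNET to $B_IP port { 22 443 } \
   keep state
```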
This PF rule only handles incoming connections (including implicit ones from ICMP and UDP traffic). If we want to be able to route our outgoing traffic over our secondary interface by selecting a source address when we do things, we need a second PF rule:
pass out quick \
   route-to ($B_IF $B_GW) \
   inet from $B_IP to ! $B_SUBNET \
   keep state
Again we must specifically exclude traffic to our local network, because otherwise it will go flying off to our gateway. Also, you can be more specific if you only want this machine to be able to connect to certain things using this gateway and firewall (eg 'to { IP1 IP2 SUBNET3/24 }', or you could use a port-based restriction).
(The PF rule can't be qualified with 'on $B_IF', because the situation where you need this rule is where the packet would not normally be going out that interface. Using 'on <the interface with your default route's gateway>' has some subtle differences in the semantics if you have more than two interfaces.)
Although you might innocently think otherwise, the second rule by itself isn't sufficient to make incoming connections to the second interface work correctly. If you want both incoming and outgoing connections to work, you need both rules. Possibly it would work if you matched incoming traffic on $B_IF without keeping state.
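Putting the two rules together, a minimal sketch of the policy routing portion of pf.conf for the secondary interface (using the same placeholder variables as before) would be:

```
# replies to inbound traffic on the secondary interface go back
# out through that interface's own gateway
pass in quick on $B_IF \
   reply-to ($B_IF $B_GW) \
   inet from ! $B_SUBNET to $B_IP \
   keep state

# outbound traffic sourced from the secondary IP is routed out
# through that gateway as well
pass out quick \
   route-to ($B_IF $B_GW) \
   inet from $B_IP to ! $B_SUBNET \
   keep state
```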
2024-10-09
The history of inetd is more interesting than I expected
Inetd is a traditional Unix 'super-server' that listens on multiple (IP) ports and runs programs in response to activity on them. When inetd listens on a port, it can act in two different modes. In the simplest mode, it starts a separate copy of the configured program for every connection (much like the traditional HTTP CGI model), which is an easy way to implement small, low volume services but usually not good for bigger, higher volume ones. The second mode is more like modern 'socket activation'; when a connection comes in, inetd starts your program and passes it the master socket, leaving it to you to keep accepting and processing connections until you exit.
(In inetd terminology, the first mode is 'nowait' and the second is 'wait'; this describes whether inetd immediately resumes listening on the socket for connections or waits until the program exits.)
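In inetd.conf terms, the difference between the two modes is a single field. These two lines are illustrative examples in the modern inetd.conf format, not taken from a real system; the first starts a new ftpd for every connection, while the second hands the listening socket to talkd and waits for it to exit:

```
ftp   stream  tcp  nowait  root  /usr/libexec/ftpd   ftpd
talk  dgram   udp  wait    root  /usr/libexec/talkd  talkd
```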
Inetd turns out to have a more interesting history than I expected, and it's a history that's entwined with daemonization, especially with how the BSD r* commands daemonize themselves in 4.2 BSD. If you'd asked me before I started writing this entry, I'd have said that inetd was present in 4.2 BSD and was being used for various low-importance services. This turns out to be false in both respects. As far as I can tell, inetd was introduced in 4.3 BSD, and when it was introduced it was immediately put to use for important system daemons like rlogind, telnetd, ftpd, and so on, which were surprisingly run in the first style (with a copy of the relevant program started for each connection). You can see this in the 4.3 BSD /etc/inetd.conf, which has the various TCP daemons and lists them as 'nowait'.
(There are still network programs that are run as stand-alone daemons, per the 4.3 BSD /etc/rc and the 4.3 BSD /etc/rc.local. If we don't count syslogd, the standard 4.3 BSD tally seems to be rwhod, lpd, named, and sendmail.)
While I described inetd as having two modes and this is the modern state, the 4.3 BSD inetd(8) manual page says that only the 'start a copy of the program every time' mode ('nowait') is to be used for TCP programs like rlogind. I took a quick read over the 4.3 BSD inetd.c and it doesn't seem to outright reject a TCP service set up with 'wait', and the code looks like it might actually work with that. However, there's the warning in the manual page and there's no inetd.conf entry for a TCP service that is 'wait', so you'd be on your own.
The corollary of this is that in 4.3 BSD, programs like rlogind don't have the daemonization code that they did in 4.2 BSD. Instead, the 4.3 BSD rlogind.c shows that it can only be run under inetd or some equivalent, as rlogind immediately aborts if its standard input isn't a socket (and it expects the socket to be connected to some other end, which is true for the 'nowait' inetd mode but not how things would be for the 'wait' mode).
This 4.3 BSD inetd model seems to have rapidly propagated into BSD-derived systems like SunOS and Ultrix. I found traces that relatively early on, both of them had inherited the 4.3 style non-daemonizing rlogind and associated programs, along with an inetd-based setup for them. This is especially interesting for SunOS, because it was initially derived from 4.2 BSD (I'm less sure of Ultrix's origins, although I suspect it too started out as 4.2 BSD derived).
PS: I haven't looked to see if the various BSDs ever changed this mode of operation for rlogind et al, or if they carried the 'per connection' inetd based model all through until each of them removed the r* commands entirely.
2024-10-08
OpenBSD kernel messages about memory conflicts on x86 machines
Suppose you boot up an OpenBSD machine that you think may be having problems, and as part of this boot you look at the kernel messages for the first time in a while (or perhaps ever), and when doing so you see messages that look like this:
3:0:0: rom address conflict 0xfffc0000/0x40000
3:0:1: rom address conflict 0xfffc0000/0x40000
Or maybe the messages are like this:
memory map conflict 0xe00fd000/0x1000
memory map conflict 0xfe000000/0x11000
[...]
3:0:0: mem address conflict 0xfffc0000/0x40000
3:0:1: mem address conflict 0xfffc0000/0x40000
This sounds alarming, but there's almost certainly no actual problem, and if you check logs you'll likely find that you've been getting messages like this for as long as you've had OpenBSD on the machine.
The short version is that both of these are reports from OpenBSD that it's finding conflicts in the memory map information it is getting from your BIOS. The messages that start with 'X:Y:Z' are about PCI(e) device memory specifically, while the 'memory map conflict' errors are about the general memory map the BIOS hands the system.
Generally, OpenBSD will report additional information immediately after about what the PCI(e) devices in question are. Here are the full kernel messages around the 'rom address conflict':
pci3 at ppb2 bus 3
3:0:0: rom address conflict 0xfffc0000/0x40000
3:0:1: rom address conflict 0xfffc0000/0x40000
bge0 at pci3 dev 0 function 0 "Broadcom BCM5720" rev 0x00, BCM5720 A0 (0x5720000), APE firmware NCSI 1.4.14.0: msi, address 50:9a:4c:xx:xx:xx
brgphy0 at bge0 phy 1: BCM5720C 10/100/1000baseT PHY, rev. 0
bge1 at pci3 dev 0 function 1 "Broadcom BCM5720" rev 0x00, BCM5720 A0 (0x5720000), APE firmware NCSI 1.4.14.0: msi, address 50:9a:4c:xx:xx:xx
brgphy1 at bge1 phy 2: BCM5720C 10/100/1000baseT PHY, rev. 0
Here these are two network ports on the same PCIe device (more or less), so it's not terribly surprising that the same ROM is maybe being reused for both. I believe the two messages mean that both ROMs (at the same address) are conflicting with another unmentioned allocation. I'm not sure how you find out what the original allocation and device is that they're both conflicting with.
The PCI related messages come from sys/dev/pci/pci.c and in current OpenBSD come in a number of variations, depending on what sort of PCI address space is detected as in conflict in pci_reserve_resources(). Right now, I see 'mem address conflict', 'io address conflict', the already mentioned 'rom address conflict', 'bridge io address conflict', 'bridge mem address conflict' (in several spots in the code), and 'bridge bus conflict'. Interested parties can read the source for more because this exhausts my knowledge on the subject.
The 'memory map conflict' message comes from a different place; for most people it will come from sys/arch/amd64/pci/pci_machdep.c, in pci_init_extents(). If I'm understanding the code correctly, this is creating an initial set of reserved physical address space that PCI devices should not be using. It registers each piece of bios_memmap, which according to comments in sys/arch/amd64/amd64/machdep.c is "the memory map as the bios has returned it to us". I believe that a memory map conflict at this point says that two pieces of the BIOS memory map overlap each other (or one is entirely contained in the other).
I'm not sure it's correct to describe these messages as harmless. However, it's likely that they've been there for as long as your system's BIOS has been setting up its general memory map and the PCI devices as it has been, and you'd likely see the same address conflicts with another system (although Linux doesn't seem to complain about it; I don't know about FreeBSD).
2024-10-05
Daemonization in Unix programs is probably about restarting programs
It's standard for Unix daemon programs to 'daemonize' themselves when they start, completely detaching from how they were run; this behavior is quite old and these days it's somewhat controversial and sometimes considered undesirable. At this point you might ask why programs even daemonize themselves in the first place, and while I don't know for sure, I do have an opinion. My belief is that daemonization is because of restarting daemon programs, not starting them at boot.
During system boot, programs don't need to daemonize in order to start properly. The general Unix boot time environment has long been able to detach programs into the background (although the V7 /etc/rc didn't bother to do this with /etc/update and /etc/cron, the 4.2BSD /etc/rc did do this for the new BSD network daemons). In general, programs started at boot time don't need to worry that they will be inheriting things like stray file descriptors or a controlling terminal. It's the job of the overall boot time environment to ensure that they start in a clean environment, and if there's a problem there you should fix it centrally, not make it every program's job to deal with the failure of your init and boot sequence.
However, init is not a service manager (not historically), which meant that for a long time, starting or restarting daemons after boot was entirely in your hands with no assistance from the system. Even if you remembered to restart a program as 'daemon &' so that it was backgrounded, the newly started program could inherit all sorts of things from your login session. It might have some random current directory, it might have stray file descriptors that were inherited from your shell or login environment, its standard input, output, and error would be connected to your terminal, and it would have a controlling terminal, leaving it exposed to various bad things happening to it when, for example, you logged out (which often would deliver a SIGHUP to it).
This is exactly the sort of thing that even very old daemonization code deals with (which is to say, fixes).
The 4.2BSD daemonization code closes (stray) file descriptors and
removes any controlling terminal the process may have, in addition
to detaching itself from your shell (in case you forgot or didn't
use the '&' when starting it). It's also easy to see how people
writing Unix daemons might drift into adding this sort of code to
them as people restarted the daemons (by hand) and ran into the
various problems (cf).
In fact the 4.2BSD code for it is conditional on 'DEBUG' not being defined; presumably if you were debugging, say, rlogind, you'd build a version that didn't detach itself on you so you could easily run it under a debugger or whatever.
It's a bit of a pity that 4.2 BSD and its successors didn't create a general 'daemonize' program that did all of this for you and then told people to restart daemons with 'daemonize <program>' instead of '<program>'. But we got the Unix that we have, not the Unix that we'd like to have, and Unixes did eventually grow various forms of service management that tried to encapsulate all of the things required to restart daemons in one place.
(Even then, I'm not sure that old System V init systems would properly daemonize something that you restarted through '/etc/init.d/<whatever> restart', or if it was up to the program to do things like close extra file descriptors and get rid of any controlling terminal.)
PS: Much later, people did write tools for this, such as daemonize. It's surprisingly handy to have such a program lying around for when you want or need it.
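If you don't have such a tool handy, a very rough shell approximation of the same idea (this is only a sketch, not the real daemonize program) is:

```shell
# daemonize-ish: run "$@" detached from the current session and terminal.
# setsid(1) gives the child its own session (so no controlling terminal),
# while the cd / and the redirections drop the other state it would
# otherwise inherit from your login session.
daemonize() {
    ( cd / && exec setsid "$@" </dev/null >/dev/null 2>&1 ) &
}

# Hypothetical usage: restart a daemon by hand without it inheriting
# your shell's current directory, file descriptors, or terminal.
daemonize sleep 1
```

A real daemonize program would also do things like close higher file descriptors and optionally write a PID file, but this captures the core of what the 4.2BSD in-daemon code did.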
2024-10-04
Traditionally, init on Unix was not a service manager as such
Init (the process) has historically had a number of roles but, perhaps surprisingly, being a 'service manager' (or a 'daemon manager') was not one of them in traditional init systems. In V7 Unix and continuing on into traditional 4.x BSD, init (sort of) started various daemons by running /etc/rc, but its only 'supervision' was of getty processes for the console and (other) serial lines. There was no supervision or management of daemons or services, even in the overall init system (stretching beyond PID 1, init itself). To restart a service, you killed its process and then re-ran it somehow; getting even the command line arguments right was up to you.
(It's conventional to say that init started daemons during boot, even though technically there are some intermediate processes involved since /etc/rc is a shell script.)
The System V init had a more general /etc/inittab that could in theory handle more than getty processes, but in practice it wasn't used for managing anything more than them. The System V init system as a whole did have a concept of managing daemons and services, in the form of its multi-file /etc/rc.d structure, but stopping and restarting services was handled outside of the PID 1 init itself. To stop a service you directly ran its init.d script with 'whatever stop', and the script used various approaches to find the processes and get them to stop. Similarly, (re)starting a daemon was done directly by its init.d script, without PID 1 being involved.
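The shape of such an init.d script is worth seeing. This is a hypothetical skeleton ('mydaemon' and its paths are made-up names), with echoes standing in for actually starting and stopping the daemon:

```shell
# Hypothetical skeleton of a System V style /etc/init.d script.
# A real script would start the daemon and track its PID instead
# of echoing, as sketched in the comments.
action() {
    case "$1" in
        start)
            echo "starting mydaemon"
            # real version: /usr/sbin/mydaemon & echo $! > /var/run/mydaemon.pid
            ;;
        stop)
            echo "stopping mydaemon"
            # real version: kill "$(cat /var/run/mydaemon.pid)"
            ;;
        restart)
            # restart is just stop then start, all run from your shell;
            # PID 1 is never involved
            action stop
            action start
            ;;
        *)
            echo "usage: $0 {start|stop|restart}" >&2
            return 1
            ;;
    esac
}

action "${1:-restart}"
```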
As a whole system the overall System V init system was a significant improvement on the more basic BSD approach, but it (still) didn't have init itself doing any service supervision. In fact there was nothing that actively did service supervision even in the System V model. I'm not sure what the first system to do active service supervision was, but it may have been daemontools. Extending the init process itself to do daemon supervision has a somewhat controversial history; there are Unix systems that don't do this through PID 1, although doing a good job of it has clearly become one of the major jobs of the init system as a whole.
That init itself didn't do service or daemon management is, in my view, connected to the history of (process) daemonization. But that's another entry.
(There's also my entry on how init (and the init system as a whole) wound up as Unix's daemon manager.)
2024-10-03
(Unix) daemonization turns out to be quite old
In the Unix context, 'daemonization' means a program totally detaching itself from how it was started. It was once very common and popular, but with modern init systems it's often no longer considered to be all that good an idea. I have some views on the history here, but today I'm going to confine myself to a much smaller subject, which is that in Unix, daemonization goes back much further than I expected. Some form of daemonization dates to Research Unix V5 or earlier, and an almost complete version appears in network daemons in 4.2 BSD.
As far back as Research Unix V5 (from 1974), /etc/rc is starting /etc/update (which does a periodic sync()) without explicitly backgrounding it. This is the giveaway sign that 'update' itself forks and exits in the parent, the initial version of daemonization, and indeed that's what we find in update.s (it wasn't yet a C program). The V6 update is still in assembler, but now the V6 update.s is clearly not just forking but also closing file descriptors 0, 1, and 2.
In the V7 /etc/rc, the new /etc/cron is also started without being explicitly put into the background. The V7 update.c seems to be a straight translation into C, but the V7 cron.d has a more elaborate version of daemonization. V7 cron forks, chdir's to /, does some odd things with standard input, output, and error, ignores some signals, and then starts doing cron things. This is pretty close to what you'd do in modern daemonization.
The first 'network daemons' appeared around the time of 4.2 BSD. The 4.2BSD /etc/rc explicitly backgrounds all of the r* daemons when it starts them, which in theory means they could have skipped having any daemonization code. In practice, rlogind.c, rshd.c, rexecd.c, and rwhod.c all have essentially identical code to do daemonization. The rlogind.c version is:
#ifndef DEBUG
	if (fork())
		exit(0);
	for (f = 0; f < 10; f++)
		(void) close(f);
	(void) open("/", 0);
	(void) dup2(0, 1);
	(void) dup2(0, 2);
	{ int tt = open("/dev/tty", 2);
	  if (tt > 0) {
		ioctl(tt, TIOCNOTTY, 0);
		close(tt);
	  }
	}
#endif
This forks with the parent exiting (detaching the child from the process hierarchy), then the child closes any (low-numbered) file descriptors it may have inherited, sets up non-working standard input, output, and error, and detaches itself from any controlling terminal before starting to do rlogind's real work. This is pretty close to the modern version of daemonization.
(Today, the ioctl() stuff is done by calling setsid() and you'd probably want to close more than the first ten file descriptors, although that's still a non-trivial problem.)
2024-09-22
Old (Unix) workstations and servers tended to boot in the same ways
I somewhat recently read j. b. crawford's ipmi, where crawford talks in part about how old servers of the late 80s and 90s (Unix and otherwise) often had various features for management, like serial consoles. What makes something an old school 80s and 90s Unix server, and why such servers died off, is an interesting topic I have views on, but today I want to cover a much smaller one, which is that this sort of early boot environment and low level management system was generally also found on Unix workstations.
By and large, the various companies making both Unix servers and Unix workstations, such as Sun, SGI, and DEC, all used the same boot time system firmware on both workstation models and server models (presumably partly because that was usually easier and cheaper). Since most workstations also had serial ports, the general consequence of this was that you could set up a 'workstation' with a serial console if you wanted to. Some companies even sold the same core hardware as either a server or workstation depending on what additional options you put in it (and with appropriate additional hardware you could convert an old server into a relatively powerful workstation).
(The line between 'workstation' and 'server' was especially fuzzy for SGI hardware, where high end systems could be physically big enough to be found in definite server-sized boxes. Whether you considered these 'servers with very expensive graphics boards' or 'big workstations' could be a matter of perspective and how they were used.)
As far as the firmware was concerned, generally what distinguished a 'server' that would talk to its serial port to control booting and so on from a 'workstation' that had a graphical console of some sort was the presence of (working) graphics hardware. If the firmware saw a graphics board and no PROM boot variables had been set, it would assume the machine was a workstation; if there was no graphics hardware, you were a server.
As a side note, back in those days 'server' models were not necessarily rack-mountable and weren't always designed with the 'must be in a machine room to not deafen you' level of fans that modern servers tend to be found with. The larger servers were physically large and could require special power (and generate enough noise that you didn't want them around you), but the smaller 'server' models could look just like a desktop workstation (at least until you counted up how many SCSI disks were cabled to them).
Sidebar: An example of repurposing older servers as workstations
At one point, I worked with an environment that used DEC's MIPS-based DECstations. DEC's 5000/2xx series were available either as a server, without any graphics hardware, or as a workstation, with graphics hardware. At one point we replaced some servers with better ones; I think they would have been 5000/200s being replaced with 5000/240s. At the time I was using a DECstation 3100 as my system administrator workstation, so I successfully proposed taking one of the old 5000/200s, adding the basic colour graphics module, and making it my new workstation. It was a very nice upgrade.
2024-09-19
OpenBSD versus FreeBSD pf.conf syntax for address translation rules
I mentioned recently that we're looking at FreeBSD as a potential replacement for OpenBSD for our PF-based firewalls (for the reasons, see that entry). One of the things that will determine how likely we are to try this is how similar the pf.conf configuration syntax and semantics are between OpenBSD pf.conf (which all of our current firewall rulesets are obviously written in) and FreeBSD pf.conf (which we'd have to move them to). I've only done preliminary exploration of this but the news has been relatively good so far.
I've already found one significant syntax (and to some extent semantics) difference between the two PF ruleset dialects, which is that OpenBSD does BINAT, redirection, and other such things by means of rule modifiers; you write a 'pass' or a 'match' rule and add 'binat-to', 'nat-to', 'rdr-to', and so on modifiers to it. In FreeBSD PF, this must be done as standalone translation rules that take effect before your filtering rules. In OpenBSD PF, strategically placed (ie early) 'match' BINAT, NAT, and RDR rules have much the same effect as FreeBSD translation rules, causing your later filtering rules to see the translated addresses; however, 'pass quick' rules with translation modifiers combine filtering and translation into one thing, and there's not quite a FreeBSD equivalent.
That sounds abstract, so let's look at a somewhat hypothetical OpenBSD RDR rule:
pass in quick on $INT_IF proto {udp tcp} \
   from any to <old-DNS-IP> port = 53 \
   rdr-to <new-DNS-IP>
Here we want to redirect traffic to our deprecated old DNS resolver IP to the new DNS IP, but only DNS traffic.
In FreeBSD PF, the straightforward way would be two rules:
rdr on $INT_IF proto {udp tcp} \
   from any to <old-DNS-IP> port = 53 \
   -> <new-DNS-IP> port 53

pass in quick on $INT_IF proto {udp tcp} \
   from any to <new-DNS-IP> port = 53
In practice we would most likely already have the 'pass in' rule, and also you can write 'rdr pass' to immediately pass things and skip the filtering rules. However, 'rdr pass' is potentially dangerous because it skips all filtering. Do you have a single machine that is just hammering your DNS server through this redirection and you want to cut it off? You can't add a useful 'block in quick' rule for it if you have a 'rdr pass', because the 'pass' portion takes effect immediately. There are ways to work around this but they're not quite as straightforward.
(Probably this alone would push us to not using 'rdr pass'; there's also the potential confusion of passing traffic in two different sections of the pf.conf ruleset.)
Fortunately we have very few non-'match' translation rules. Turning OpenBSD 'match ... <whatever>-to <ip>' pf.conf rules into the equivalent FreeBSD '<whatever> ...' rules seems relatively mechanical. We'd have to make sure that the IP addresses our filtering rules saw continued to be the internal ones, but I think this would work out naturally; our firewalls that do NAT and BINAT translation do it on their external interfaces, and we usually filter with 'pass in' rules.
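As a concrete (and hypothetical) example of how mechanical this translation is, an OpenBSD NAT match rule and its FreeBSD equivalent might look like this, where $EXT_IF, $EXT_IP, and $INT_SUBNET are placeholder macros:

```
# OpenBSD pf.conf
match out on $EXT_IF inet from $INT_SUBNET to any nat-to $EXT_IP

# FreeBSD pf.conf
nat on $EXT_IF inet from $INT_SUBNET to any -> $EXT_IP
```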
(There may be more subtle semantic differences between OpenBSD and FreeBSD pf rules. A careful side by side reading of the two pf.conf manual pages might turn these up, but I'm not sure I can read the two manual pages that carefully.)
2024-08-25
How to talk to a local IPMI under FreeBSD 14
Much like Linux and OpenBSD, FreeBSD is able to talk to a local IPMI using the ipmi kernel driver (or device, if you prefer). This is imprecise although widely understood terminology; in more precise terms, FreeBSD can talk to a machine's BMC (Baseboard Management Controller) that implements the IPMI specification in various ways, which you seem to normally not need to care about (for information on 'KCS' and 'SMIC', see the "System Interfaces" section of OpenBSD's ipmi(4)).
Unlike in OpenBSD (covered earlier), the stock FreeBSD 14 kernel appears to report no messages if your machine has an IPMI interface but the driver hasn't been enabled in the kernel. To see if your machine has an IPMI interface that FreeBSD can talk to, you can temporarily load the ipmi module with 'kldload ipmi'. If this succeeds, you will see kernel messages that might look like this:
ipmi0: <IPMI System Interface> port 0xca8,0xcac irq 10 on acpi0
ipmi0: KCS mode found at io 0xca8 on acpi
ipmi0: IPMI device rev. 1, firmware rev. 7.10, version 2.0, device support mask 0xdf
ipmi0: Number of channels 2
ipmi0: Attached watchdog
ipmi0: Establishing power cycle handler
(On the one Dell server I've tried this on so far, the ipmi(4) driver found the IPMI without any special parameters.)
At this point you should have a /dev/ipmi0 device and you can 'pkg install ipmitool' and talk to your IPMI. To make this permanent, you edit /boot/loader.conf to load the driver on boot, by adding:

ipmi_load="YES"
While you're there, you may also want to load the coretemp(4) module or perhaps amdtemp(4).
After updating loader.conf, you need to reboot to make it take full
effect, although since you can kldload everything before then I
don't think there's a rush.
In FreeBSD, IPMI sensor information isn't visible in sysctl (although
information from coretemp or amdtemp is). You'll need ipmitool
or another suitable program to query it. You can also use ipmitool
to configure the basics of the IPMI's networking and set the IPMI
administrator's password to something you know, as opposed to
whatever unique value the machine's vendor set it to, which you may
or may not have convenient access to.
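For illustration, the sort of ipmitool invocations involved might look like the following. These need an actual /dev/ipmi0 to run against, and the LAN channel, user ID, and addresses are made-up examples; check 'ipmitool lan print' and 'ipmitool user list' output for your hardware's real values:

```shell
# read all sensors from the local BMC
ipmitool sensor list

# show, then configure, the BMC's network settings on LAN channel 1
ipmitool lan print 1
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.0.2.10
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.0.2.1

# set the password for user ID 2 (commonly the administrator account)
ipmitool user set password 2 'new-password-here'
```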
(As far as I can tell, ipmitool works the same on FreeBSD as it does on Linux, so if you have existing scripts and so on that use it for collecting data on your Linux hosts (as we do), they will probably be easy to make work on any FreeBSD machines you add.)