A bit on what Unix system pre-boot environments used to look like

December 5, 2021

Unix was first implemented on general-purpose computers that hadn't been specifically designed for it, such as the PDP-11 and the DEC VAX. These machines could have intricate start-up procedures, and in any case their pre-boot environment wasn't designed for Unix specifically. This changed in the early 1980s, when computers both got more complex and began to be designed specifically for Unix, such as the Sun-1. These Unix computers, designed and built by Unix vendors who integrated their hardware and their Unix, soon acquired increasingly sophisticated and Unix-specific pre-boot environments. The most well known and commonly experienced of these Unix machine pre-boot environments is probably Sun Microsystems' OpenBoot (later standardized as Open Firmware).

Broadly speaking, this pre-boot firmware had three jobs. First, it had to configure and bring up the low-level hardware, doing things like initializing the CPU, enabling DRAM refresh, and whatever other basic hardware setup was required. Often the firmware would also run power-on self tests, sometimes very time-consuming ones; some SGI servers that we used to have could take five minutes to complete this phase of boot (and they weren't particularly big servers). Second, the firmware loaded and started the vendor's Unix kernel itself, possibly passing various hardware information to it. At the height of the Unix era, this was a complex job; the kernel could live on any number of devices, had to be read from the (Unix) filesystem when it was on a local disk, might have non-default arguments or a non-default name, and you could be network booting the machine, which required the firmware to configure the Ethernet hardware and talk protocols like BOOTP or DHCP.
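To give a sense of how simple the network boot side could be, here is a sketch of the fixed-format request packet that BOOTP (RFC 951) defined; the helper name and the example transaction ID are my own illustration, not anything from a real firmware implementation:

```python
import struct

def build_bootp_request(mac: bytes, xid: int = 0x12345678) -> bytes:
    """Build a minimal BOOTREQUEST packet per RFC 951 (fixed 300-byte format)."""
    assert len(mac) == 6
    header = struct.pack(
        "!BBBBIHH4s4s4s4s",
        1,            # op: 1 = BOOTREQUEST
        1,            # htype: 1 = 10Mb Ethernet
        6,            # hlen: hardware (MAC) address length
        0,            # hops: used by relay agents, zero from the client
        xid,          # transaction ID, echoed back by the server
        0,            # secs: seconds since the client started booting
        0,            # unused in original BOOTP
        b"\x00" * 4,  # ciaddr: client IP (unknown, so zero)
        b"\x00" * 4,  # yiaddr: "your" IP, filled in by the server
        b"\x00" * 4,  # siaddr: server IP, filled in by the server
        b"\x00" * 4,  # giaddr: gateway IP for cross-network booting
    )
    chaddr = mac.ljust(16, b"\x00")  # client hardware address, padded
    sname = b"\x00" * 64             # optional server host name
    file = b"\x00" * 128             # boot file name (left for the server)
    vend = b"\x00" * 64              # vendor-specific area
    return header + chaddr + sname + file + vend
```

The server's reply reuses the same layout with `yiaddr` and the boot file name filled in, which is part of why a small firmware could implement the whole exchange: no options parsing, just fixed offsets into a 300-byte packet.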

(As part of being able to read the kernel from disk, the firmware naturally understood how the Unix vendor chose to implement disk slices and partitioning. There was no standard for this, although I think there were common approaches inherited from the historic Unix variant the vendor's Unix was derived from.)
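As an illustration of how little the firmware actually had to understand, here is a sketch of parsing a Sun-style disk label, assuming the classic SunOS layout as I remember it (an 8-entry partition table of big-endian cylinder/block pairs at byte offset 444 of sector 0, magic 0xDABE at offset 508, and an XOR checksum at offset 510); these offsets are my assumption, not something from the article:

```python
import struct

SUN_MAGIC = 0xDABE  # classic Sun disk label magic number

def sun_checksum(label: bytes) -> int:
    # XOR of all 256 big-endian 16-bit words; a valid label XORs to zero
    # because the checksum word itself is set to cancel the rest out.
    csum = 0
    for (word,) in struct.iter_unpack(">H", label):
        csum ^= word
    return csum

def parse_sun_label(sector0: bytes):
    """Return the 8 (start_cylinder, num_blocks) slices from sector 0."""
    assert len(sector0) == 512
    (magic,) = struct.unpack_from(">H", sector0, 508)
    if magic != SUN_MAGIC or sun_checksum(sector0) != 0:
        raise ValueError("not a valid Sun disk label")
    return [struct.unpack_from(">II", sector0, 444 + 8 * i) for i in range(8)]
```

Reading one 512-byte sector and doing fixed-offset unpacking like this is well within what a small boot ROM could manage, which is why each vendor's firmware could afford to know its own vendor's partitioning scheme.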

The broad third job of the firmware was debugging the kernel, including forcing a reboot when it was hung. Most Unix machines let you break into a firmware debugger from the console while the system was running, which would let you poke around at machine state and often force a crash dump. My memory is that crash dumps called the kernel to do the actual work, but there may have been firmware that could write out memory to a designated disk area on its own.

(On Unix workstations, the firmware typically could work with the graphical display to write text on top of your windowing session. Unix workstations typically didn't have the separation between text mode and graphics modes that x86 PCs wound up with.)

Although all Unix firmware was capable of booting the system on its own if you let it sit (and had set it up right), it generally gave you a command line environment that you could break into to change things like what would be booted from where. On workstations, the Unix firmware would generally talk to the graphical console, a serial connection, or both; on servers (which in that era were headless, without video output), the firmware talked only to a serial interface. Naturally you could configure the baud rate and so on of the serial interface in the firmware settings. Firmware settings tended to be represented as some form of environment variables.
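As a concrete example, Sun's OpenBoot exposed its settings exactly this way; from the `ok` prompt you could inspect and change NVRAM variables. The session below is reconstructed from memory, and exact variable names and defaults varied between firmware revisions:

```
ok printenv boot-device
boot-device = disk net
ok setenv boot-device disk3
ok setenv ttya-mode 9600,8,n,1,-
ok reset
```

Here `boot-device` is an ordered list of places to try booting from, and `ttya-mode` sets the first serial port's baud rate, data bits, parity, stop bits, and handshaking.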

I believe that some of the modern free BSDs have x86 second stage boot environments that are broadly similar to the old Unix firmware environments (such as OpenBSD's boot(8); see also the FreeBSD boot process).

The x86 PC BIOS does the same job of early hardware initialization that Unix pre-boot firmware did, but its traditional way of booting things is much more primitive (although also much more general). And obviously PC BIOSes haven't tended to offer command lines, instead having some kind of 'graphical' user interface (first using text mode graphics, then later real pixel graphics). Modern UEFI BIOSes have many of the general features of Unix firmware, such as firmware variables, extensive firmware services, and loading the operating system (or a next stage boot environment) from a real filesystem, but they still don't have a command line or the kind of full-bore support for serial consoles that Unix firmware tended to have.
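Those UEFI firmware variables are visible from a running Linux system under `/sys/firmware/efi/efivars`, where each variable appears as a `Name-GUID` file. As a small illustrative sketch (the function name is my own, and this only parses the file name, not the variable's contents):

```python
def split_efivar_name(filename: str) -> tuple[str, str]:
    """Split a Linux efivarfs file name into (variable name, vendor GUID).

    efivarfs names each variable as <Name>-<GUID>. The GUID is always the
    last five dash-separated fields, which matters because variable names
    may themselves contain dashes.
    """
    fields = filename.split("-")
    if len(fields) < 6:
        raise ValueError("not a Name-GUID efivar file name")
    return "-".join(fields[:-5]), "-".join(fields[-5:])
```

For example, the standard `BootOrder` variable shows up with the EFI global variable GUID, as `BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c`. This is recognizably the same idea as the old firmware environment variables, just with vendor GUIDs for namespacing and no command line in front of it.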

(This was inspired by an old question on the fediverse. For the curious, there are videos of old Unix hardware booting up and old firmware manuals online.)

Comments on this page:

By Stephen Kitt at 2021-12-06 02:10:43:

Regarding UEFI systems, you can install an EFI shell and some motherboards ship with one in their firmware. I also have at least one UEFI system which can boot with a serial console (it has a web-based BMC interface too which I find more convenient).
