2008-01-30
Linux's IP forwarding settings summarized
Unlike many Unixes, Linux determines whether or not it does IP
forwarding on an interface-by-interface basis, based on the interface
the packet arrived on. While Linux has a global IP forwarding sysctl,
net.ipv4.ip_forward, pretty much all this really does is (re)set the
value for all of the interfaces and make it the default; you can still
change individual interfaces later.
The fine controls are in /proc/sys/net/ipv4/conf/, where things
go like this:
interface/forwarding: controls whether incoming packets on interface can get forwarded or not. (I believe that the setting for the lo interface does nothing, since locally generated packets are always routed.)
all/forwarding: setting this is the same as setting the global sysctl.
default/forwarding: controls the default state of forwarding; this state gets used by interfaces that have not set a specific value. Setting the global sysctl counts as giving all existing interfaces a specific value.
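For concreteness, here is a quick sketch of what this looks like with sysctl (eth0 and eth1 are just example interface names):

    # enable forwarding globally; this (re)sets every interface and the default
    sysctl -w net.ipv4.ip_forward=1
    # or enable forwarding only for packets that arrive on eth1
    sysctl -w net.ipv4.conf.eth1.forwarding=1
    # inspect the current state for a particular interface
    cat /proc/sys/net/ipv4/conf/eth0/forwarding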
For most purposes I suspect that you want to ignore the 'default/'
stuff and use either the global settings or per-interface settings. If
you want to make a machine a router in general, the easiest way is to
set the global sysctl; if you want people to only be able to route
through some of your interfaces, you need the interface-specific
settings.
(And if you want to entirely turn off IP forwarding on a machine in an emergency, the global sysctl is definitely the way to go.)
Note that a forwarded packet can get routed out through any active interface, regardless of that interface's forwarding setting. If you need to restrict what outgoing interfaces forwarded packets can use, you need some form of policy-based routing.
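As a minimal sketch of one way to do that with iproute2 (eth0, eth1, the gateway address, and table 100 are all made-up values for illustration):

    # send traffic that arrived on eth0 to its own routing table
    ip rule add iif eth0 table 100
    # and give that table only one way out, through eth1
    ip route add default via 192.0.2.1 dev eth1 table 100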
(This is one of those entries I write to make sure that I have this information handy the next time I need it, since it is not really well covered in the documentation I could find.)
2008-01-24
Running a 32-bit Firefox on a 64-bit Fedora or Red Hat Enterprise
It turns out that it is pretty easy to run a 32-bit Firefox on a modern 64-bit Fedora or Red Hat Enterprise system, something that is periodically convenient. (For example, the current Sun ILOM client needs the 32-bit version of Java. Maybe you could run things with nspluginwrapper, but I prefer not to take chances with a machine's console.)
First, you need to make sure you have the 32 bit version of Firefox
installed. I don't know if this is the default, but 'yum install
firefox.i386' will certainly make sure it's there.
If you are using a current version of Fedora, all you need to do is
start Firefox as 'linux32 firefox' instead of just 'firefox'. If you
keep the 32 bit version running, the magic of Firefox's remote control means that all of your browsing will
be in it, no matter what command programs use to invoke Firefox.
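Putting that together as a quick sketch (the URL is just an example):

    yum install firefox.i386       # make sure the 32-bit build is installed
    linux32 firefox &              # start the 32-bit build
    firefox http://example.com/    # later invocations hand off to the running 32-bit instance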
It takes some more work if you're using Red Hat Enterprise 5; you need
to create a modified version of the firefox script, because the
script hard codes running the 64 bit version if it's available. Put
a copy of /usr/bin/firefox somewhere under a different name, say
firefox32, and edit it to either take out the bit that checks
for /usr/lib64/firefox-.... or change the path it's looking for to
something that doesn't exist, so it thinks that the 64 bit version isn't
installed.
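One way to do the edit, as a rough sketch (the exact strings in the RHEL 5 script vary between releases, so check what your copy actually tests for before relying on this):

    cp /usr/bin/firefox /usr/local/bin/firefox32
    # point the 64-bit check at a path that will never exist, so the script
    # falls back to running the 32-bit installation
    sed -i 's,/usr/lib64/firefox-,/nonexistent/firefox-,g' /usr/local/bin/firefox32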
One cautionary note: I believe that the personal plugins directory,
$HOME/.mozilla/plugins, is shared between the 32 bit and the 64
bit version. I don't know what happens if you have anything there.
2008-01-23
Linux's umount -f forces IO errors
Here is something that I had to find the moderately hard way:
When used on an NFS filesystem, Linux's umount -f will force any
outstanding IO operations to fail with an error, whether or not the
unmount will succeed.
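So, as a sketch (with a made-up mount point):

    umount -f /mnt/data
    # any IO still outstanding against /mnt/data now fails with IO errors,
    # even if the unmount itself fails because the filesystem is busy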
This can be both good and bad, but on the whole I think it's mostly bad.
If the NFS server has died completely and the outstanding IO can never
succeed, you do want things to abort now instead of hanging around. In
this case umount -f's behavior is what you want, although it doesn't
really go far enough; ideally it would force the filesystem to unmount
no matter what.
But umount -f is also commonly used to try to unmount unused NFS
filesystems when the NFS server has crashed temporarily and is being recovered.
(Traditionally a basic umount will hang if it cannot talk to the
NFS server; this may have changed in Linux by now, but if it has the
manpages haven't caught up and so I suspect that the sysadmin habit
hasn't either.)
If the filesystem is actually in use, you want the unmount attempt to quietly fail. Instead, what you get are artificial, unnecessary IO errors for IO that would have completed in due time. If you are lucky, programs merely complain loudly to users, alarming them, and manage to recover somehow; otherwise, programs malfunction and your users may lose data.
I believe that umount -f is far more often used in the second case
than in the first case, and thus that this causes more problems than
it cures.
2008-01-19
Why the x86 Linux kernel is part of every process's address space
In an earlier entry I mentioned that the Linux kernel takes the top 1 gigabyte of a process's address space for itself. In fact this is not just address space that the kernel reserves for its own use; the entire kernel itself actually lives in the top gigabyte of every process's memory map.
(You can't read it or run it because it is protected address space, accessible only when the system is in kernel mode, except for the VDSO it exports to processes these days.)
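You can see a faint user-level trace of this from any process: the VDSO (and, on some kernels, a vsyscall page) shows up near the top of the address space in /proc/self/maps, although the exact details vary by kernel version and architecture:

    # a quick check; output details vary by kernel and architecture
    grep -i -E 'vdso|vsyscall' /proc/self/maps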
The kernel does this because it significantly decreases the overhead of system calls and interrupts. With the full kernel always present in virtual memory, most of what you need to do to start running kernel code is to switch from user mode to kernel mode, which is reasonably fast. If the full kernel was not present in virtual memory, switching into the kernel would require mapping the kernel in by changing page tables and flushing the TLB, and this is a relatively slow operation on x86 machines.
(For some discussion of how slow, see here.)
I suspect that historically this explanation is a little white lie, and that the kernel being present in everyone's virtual memory was originally implemented just because it's simpler; you don't need to write any code to manipulate page tables when you enter and leave the kernel.
(This is one of the entries I write to get at least some of the details straight in my head.)
2008-01-18
What seems to use power on an Asus Eee PC
Courtesy of having an Eee around and being curious about how to maximize the battery life, here are some measurements of how much power various bits seem to use. All of these measurements are for a '4G Surf' model; your mileage may vary on other ones.
First, a disclaimer: all of these measurements are made with a power meter while the Eee was on wall power. It's possible that the Eee behaves differently when running on battery power (although I would be somewhat surprised).
| Suspended to RAM | 3 watts |
| Minimal powered up state | 10 watts |
| Screen | +1 watt (dimmest) to +3 watts (brightest) |
| Wifi | +1.5 watts |
| 100% CPU usage | +2 watts |
| Ethernet | no extra power usage |
(My power meter only reads to whole watts, and wifi-off to wifi-on measurements fluctuated between +1 watt and +2 watts, so I am guessing around 1.5 watts for it.)
Or in other words, the Eee draws between 10 watts (if left idle) and 14 watts (screen at full and the wifi active); you can push it to 16 watts if you try hard. This is particularly impressive given my previous numbers; in normal use, the Eee is drawing about half of what one of my Dell LCDs does, never mind the system that they're connected to.
The best non-disruptive thing for power saving is to power down the
screen; you can do this with 'xset dpms force off' in a shell.
Suspending will save a lot more power, but it will also probably wind
up disconnecting you from any ssh sessions and so on, and tapping
a key is faster than resuming from suspend.
(Powering the screen down will also make it less likely to distract you during meetings.)
Also, the Eee draws 2 watts when simply plugged into the wall while powered off. This is not the little AC adapter itself, which the power meter says uses no power when disconnected from the Eee. I now disconnect the Eee from the power cord when it's just powered off; if nothing else, it keeps the AC adapter from warming up.
2008-01-08
Why I feel that a missing Debian package is a bad sign
Because someone quoted me, I might as well explain my earlier quick comment about how I consider the lack of a Debian package for a program to be a fairly serious warning sign, and why I specifically picked Debian for this.
Debian has a huge package selection with a fairly large number of people driving it. If a program is still not packaged for Debian, this means either that it can't attract even one Debian developer's interest or that the program has serious issues of some sort: immaturity, bugs, license problems, it's very hard to build, and so on. Both cases are bad for a busy sysadmin who doesn't want to be out on the bleeding edge, so unless I absolutely need to use the package I am better off passing.
(The exception is small programs that are trivial to compile myself and don't need to be installed; at that point I might as well just try them out, because it's easy.)
My impression is that this signal is pretty much unique to Debian; smaller distributions, Fedora included, don't seem to have the breadth of packages or the depth of packagers. Also, my impression of the Debian culture is that it encourages packaging everything that moves and some things that don't.
(Even if no Debian packager is personally interested in a package, my impression is that Debian users can usually find a developer who will respond to their interest.)