Wandering Thoughts archives

2006-09-29

Gnome daemons you'll want to run in a custom environment

As a followup to an earlier entry, here's what I've worked out about how to use the standard Gnome automount and other stuff in your own custom environment. All of this is based on Fedora Core 5, but I think it's probably generic to any modern Gnome-based system.

The Gnome automount stuff is done by gnome-volume-manager, which runs as the user and communicates with the system HAL daemon to do all the actual work. This just mounts things when they're recognized; to unmount them, you need to use gnome-umount or gnome-eject. To tell them what to work on, use '-d <device>' or the '-p' option, which takes a variety of forms; to quote the --help text:

Mount by one of device's nicknames: mountpoint, label, with or without directory prefix

To remount a device after you have unmounted it and fiddled with it, use 'gnome-mount -d <device>'.
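
For example (the device name and the '-p' nickname here are made up; use whatever HAL actually assigned on your system):

# unmount a USB stick by device name
gnome-umount -d /dev/sdb1
# or unmount and eject a CD by one of its nicknames
gnome-eject -p cdrom
# remount the USB stick after fiddling with it
gnome-mount -d /dev/sdb1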

The behavior of the volume manager is configured with the gnome-volume-properties program, which you can run without being in Gnome. By default, the volume manager will pop up a Nautilus window browsing the newly inserted volume; you probably want to turn this off. (I also turn off auto-playing newly inserted audio CDs and video DVDs.)
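
If you'd rather avoid the GUI entirely, the volume manager's settings are just GConf keys. Assuming they live in the usual /desktop/gnome/volume_manager directory and that I'm remembering the key name correctly, something like this should do the same thing from the command line:

# list the volume manager's current settings
gconftool-2 -R /desktop/gnome/volume_manager
# turn off popping up a Nautilus window for newly inserted volumes
gconftool-2 -s -t bool /desktop/gnome/volume_manager/autobrowse false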

The other Gnome daemon that I found I really wanted to run in my custom environment is esd, the Gnome sound daemon. Otherwise, Flash stuff in my browser often failed to have audio (although some would consider this a feature).
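
In a custom environment, all of this comes down to starting the two daemons from your X session script before the window manager. A minimal sketch, assuming your session already sets up anything else it needs (the window manager here is mine; use your own):

#!/bin/sh
# per-session daemons for a custom (non-Gnome) environment
esd &
gnome-volume-manager &
# and then the window manager, as usual
exec fvwm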

GnomeVolumeManagement written at 10:56:00; Add Comment

2006-09-25

Some reactions to a dual monitor X setup

I've recently gotten an Xinerama-based dual monitor setup working on my new office workstation, and the experience has left me with a bunch of reactions.

  • it was pleasingly easy to get going in Fedora Core 5, and by that I mean that X automatically comes up with both monitors active if they're both connected.
  • the default clone mode (both monitors displaying the same thing) is surprisingly disorienting.
  • starting up dual monitors with an unmodified X setup is an interesting way to see which windows you've positioned absolutely and which you positioned relative to the right edge, as your normal layout suddenly sprays itself across both monitors. (Mine was surprisingly random.)

  • X could really use a geometry specification extension that is Xinerama-aware. (Of course, all the modern kids are probably not using the -geometry switch or equivalent; I have no idea if modern Gnome or KDE apps even support it any more.)

  • similarly, it would be nice if Xinerama had a generic 'clone this window on all displays' feature, because there are some things that I really want on both displays, and there is no guarantee that I can start two copies of a given application without big explosions. (Wanting the windows to be able to respond normally to events makes this hard to do with an outside program, and I suppose would complicate the job even inside the X server, since it brings up issues like 'what mouse position should be reported to the program?'.)

  • I need a command line utility to report the current mouse position, so various of my widgets can at least pop up their windows on the right monitor. (There's a sketch of one possibility after this list.)
  • X needs more little utilities that you can use as the building blocks in shell script based applications and widgets.
  • the pager display now features a lot of skinny windows, since its width to height ratio is really out of whack. (There's nothing much I can do about this unless I'm willing to give it more horizontal room, and I'm not.)

  • the question of what I'll do with all this desktop space has a way of answering itself in short order.
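
As for the mouse position item: I don't know of a stock X utility that reports the pointer position, but if you have a small third party tool such as xdotool around, the idea is simply:

# prints something like 'x:1442 y:598 screen:0 ...'
xdotool getmouselocation

A script can then pull the x coordinate out of that and compare it against the width of the left monitor to decide where to put its window.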

Overall, it's certainly been interesting. (It also got me to upgrade from a vintage 2001 version of fvwm to something more modern, which likely has its own advantages once I bother to look up the new features. (My old version of fvwm had some bugs with Xinerama support.))

DualMonitorNotes written at 19:06:22; Add Comment

2006-09-22

An NFS mount accident on Linux

Something that you can do on a modern Linux system by accident:

mount -t nfs -o hard,intr,rw localhost:/ /

(instead of mounting it on /mnt, as I had intended.)

It is surprisingly hard to recover from this. In fact, I don't think I succeeded, and I wound up having to reboot. At the same time, I don't think anything broke, so I theoretically could have kept on running the machine like that.

(Why you might want such a loopback mount is covered here.)

On Linux specifically, another way of achieving the same goal is to use a bind mount:

mount --bind / /mnt

This doesn't need NFS daemons, but I'm an old dog, and we old dogs sometimes automatically reach for our old tricks, no matter how complicated they are.

Things I tried that don't work, so you can skip them

umount localhost:/
    Did nothing.
umount / and umount -t nfs /
    Either did nothing or complained that / was busy.
umount -l -t nfs /
    Unmounted everything except for the rootfs mount of /, which leaves the system pretty much unrecoverable afterwards since udev's /dev is gone, so you don't have any devices to mount stuff from.

From this it looks like lazy unmounts detach subtrees of the mount that you're unmounting, as well as the mount itself, which I suppose is not too surprising.

Sidebar: how to keep access to /proc/mounts

When /proc becomes inaccessible, how do you find out what is and isn't mounted?

It's a fairly core principle of Linux that while you may get detached from the filesystem tree, your current directory doesn't actually go away. So I did:

; cd /proc
; python
>>> import os, sys
>>> def cat(fn):
...   fp = open(fn, "r")
...   fd = fp.read()
...   sys.stdout.write(fd)
>>> cat("mounts")

(The Python transcript has been slightly simplified.)

Using Python and importing stuff ahead of time meant that I wasn't counting on things like /bin/cat remaining accessible; everything I needed to monitor /proc/mounts was live in a running process. This turned out to be a good thing.

NFSMountAccident written at 17:31:34; Add Comment

2006-09-19

One of the reasons I dislike SELinux

I have a fixed personal opinion that systems should not spew kernel messages in the course of normal system operation, and especially not over the console. As I am busy finding out, yet again, SELinux fails this test in at least some circumstances on Fedora Core 5.

(Possibly the circumstances were odd, since my FC5 install hadn't managed to run the 'firstboot' stuff due to the un-upgraded X server crashing on this machine's hardware. Still, I brought it up in a normal runlevel 3 multiuser boot and started getting the spew when I did a 'yum update xorg-*'.)

Software should be silent in general logs unless either something is wrong or I have specifically asked for the information to be logged. I have not asked SELinux to natter at me, and if the stock SELinux configuration on a stock Fedora Core 5 machine has something wrong, I don't exactly want to be running it.

The idea that SELinux should log this stuff to kernel logs 'just in case' doesn't scale. SELinux is not the only kernel subsystem that might want to log things just in case, and if everyone did it, the kernel log buffer would probably roll over in about sixty seconds flat.

The right way to log this sort of just-in-case information is to put it in some special place, just for it, so that no one has to pay attention if they don't care. And make sure the log gets rolled, too.

SELinuxDislike written at 19:38:10; Add Comment

2006-09-18

Why /var/log/btmp may be using up a lot of space in your /var

When I was looking around the /var on my Fedora Core 5 scratch machine to see where all the disk space was being used as part of the last entry, I was startled to discover that /var/log/btmp was a 100M file (and by far the largest thing in /var/log). This was a surprise to me, because I had never heard of the file before.

It turns out that btmp is used to record bad logins (some of you are already wincing), just like /var/log/wtmp records good ones. My scratch machine is on the Internet, with an unscreened SSH daemon, and thus, just like everyone else, sees a constant flux of brute force SSH login attempts. Nothing seems to age /var/log/btmp, so it has been busily accumulating a pile of entries every day since the machine was first brought up on April 28th.

(If you are curious, the lastb command will read and dump the file. Or you can just use 'last -f /var/log/btmp'. You'll want to pipe it through the pager of your choice.)

Somewhat to my displeasure, btmp records even login attempts to nonexistent usernames. Logging nonexistent usernames is a moderate security exposure, because people do occasionally accidentally enter their password as their username; if you log unknown usernames, you're sooner or later going to have a plaintext log of someone's password.

Removing /var/log/btmp will apparently shut the whole thing down. In this day and age, I suspect that there's no particular point in logging bad logins on any machine on the Internet, unless you are interested in generating some statistics; the noise is likely to overwhelm any possible signal.
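
If you do want to keep the log around but bound its size, a logrotate stanza ought to do it. This is a sketch of what I'd drop into /etc/logrotate.d/btmp, untested; adjust the owner, group, and mode to whatever your distribution actually uses for the file:

/var/log/btmp {
    monthly
    minsize 1M
    create 0600 root utmp
    rotate 1
}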

VarLogBtmp written at 14:49:15; Add Comment

My current view of Linux system filesystem sizes

Here are my current thoughts on how big system filesystems (or partitions, depending on what you like to call them) should be for new systems. This assumes that you have lots of disk space to play around with.

Also, note that I run non-stripped-down Fedora Core systems; in fact, I have a tropism towards installing most everything in sight, just so I have its documentation handy in case I need to poke at it. A stripped-down system would fit in much, much less.

One of my big principles for system partitions is that I want them to be big enough that they won't run out of space during the inevitable operating system upgrades over the next five to ten years. Painful, bitter experience has taught me that distributions only get bigger, sometimes lots bigger; given today's very big disks, a large safety margin is very cheap insurance.

  • /: 5G. The big space eater here is /lib/modules; a current Fedora Core kernel config is about 100M of modules, and that's only going to keep on growing. Add in Xen dom0 kernels, my own kernels, etc etc and it adds up fast. (I ran out of space on a 1G /, to my surprise.)
  • /boot: 512M. This probably only really needs to be 100M or so, but I am nervous about sudden space expansions and future versions of Fedora Core deciding on random (but large) minimum space requirements here.
  • /usr: 20G. Lack of space in /usr has been the most frequent problem during distribution upgrades, so I want to be really, really sure that I don't run into this again.
  • /var: 5G. This is either vast overkill or not enough, depending on what you are doing in /var. At least this way, I have room for a few experiments before I have to find things like mach new homes.
  • swap: 2G. This much is probably overkill for my machines, but insurance is cheap. It also insures against random (but large) minimum swap space requirements in future versions of the Fedora Core installer, which have happened before.

For scale, current disk usage on a more or less stock Fedora Core 5 AMD 64 machine, with a lot of things installed, is about:

/ 798M
/boot 26M
/usr 5.6G
/var 1.5G

The /var includes 642M of /var/lib/mach, which has a relatively complete 32-bit Fedora Core 5 development environment plus some extra bits, and 335M of /var/cache/mach, which is presumably related to this.

(On the other hand, this mach install neatly demonstrates that you can get Fedora Core 5 into much less space than I usually give it.)

SystemFilesystemSizes written at 14:36:30; Add Comment

2006-09-15

The temptation of LVM

Generally I'm not a big fan of LVM. It has a lot of moving parts, adds an extra layer of indirection, and the whole set of commands and tasks has always felt rather complex; there was too much fan dancing for the results I'd get.

But I have a new machine to set up, and I have to say that modern disks have really made LVM tempting. The problem is that they've gotten so big that I no longer have any idea what I want to do with all of the space.

(This is not quite true: I know what system partitions I want and roughly how big I want them. But that's a drop in the bucket on a modern disk; even with generous sizes to fend off distribution bloat, I'm not going to use more than 50G.)

One approach, which we used on another system, is to throw all of the space into one big /data partition, then do bind mounts to create the user-visible 'filesystems'. This has the virtue of not limiting how much space you can use for anything in particular, but the drawback that if your filesystem gets corrupted you lose big.
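
The bind mounts themselves are easy enough to make permanent in /etc/fstab; a sketch, with made-up directory names:

# bind-mounted 'filesystems' carved out of /data
/data/home     /home     none    bind    0 0
/data/local    /local    none    bind    0 0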

The temptation of LVM is that with LVM, I can defer the issue and change my mind as much as I want. I can carve out some starting filesystems with relatively modest sizes and leave the rest of the space unallocated; as I get a better idea of how I'm going to use the system, I can grow filesystems or make new ones. And if my usage patterns change later, I can shrink things or drop surplus filesystems entirely.
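
For illustration, the defer-and-grow workflow is only a handful of commands; the device and volume names here are made up, and depending on your kernel and e2fsprogs version you may need to unmount a filesystem before growing it:

# put the big data area under LVM
pvcreate /dev/sda3
vgcreate maindata /dev/sda3
# carve out a modestly sized starting filesystem
lvcreate -L 50G -n scratch maindata
mke2fs -j /dev/maindata/scratch
# later, when it turns out to need more room
lvextend -L +50G /dev/maindata/scratch
resize2fs /dev/maindata/scratch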

This would give me the benefits of a big common storage pool, without having to have one huge filesystem. All I'd have to do is accept into my life the overhead of dealing with LVM and whatever performance hit it and non-contiguous filesystems have.

It's a very strong temptation. But it makes me feel guilty; I feel that by all rights I should be able to plan my filesystems out ahead of time, and resorting to LVM would just be the lazy, wimpy way out.

LVMTemptation written at 22:31:30; Add Comment

2006-09-12

A quick note about how extended partitions work

Hard disk partitioning on PCs is one of those confusing areas, full of peculiar features and apparently random limitations, especially the whole primary partitions versus extended partitions thing.

Fortunately, it's actually fairly simple:

  • primary partitions are just a fixed-size array of four entries in the first sector (the master boot record, among other things).
  • extended partitions are a chain (a linked list).

Each extended partition starts with a sector (the 'extended master boot record') that describes how big it is and then points to the next extended partition. The whole collection forms a singly-linked list.

One important consequence of this is that changing extended partitions around does destroy some sectors in your current extended partitions, because they get overwritten to set up the new chain. By contrast, changing primary partitions doesn't touch anything besides the MBR.

(Removing extended partitions is not destructive, though, since it doesn't change any sectors that weren't already being used as part of the chain.)

Also, apparently there is no requirement that the chain be ordered by increasing location; you can have a set of extended partitions that zoom around the disk like lost bumblebees. (I admit that I am not quite sure how this works, since the format the Wikipedia page describes seems to forbid that unless you get really creative. But evidently it works, which opens up a pile of other creative tricks.)
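
If you want to see how your own disk is chained together, sfdisk will list all the partitions and their starting sectors (/dev/hda here is whatever your disk really is); for example, you can see that the first logical partition starts a little after the extended partition itself does, because the extended boot record sector sits in front of it:

sfdisk -l -uS /dev/hda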

Like a lot of things about the PC, primary partitions and extended partitions are quick fixes piled on top of quick fixes, always preserving backwards compatibility. When four partitions turned out to not be enough, the hard disk partitioning scheme wasn't replaced entirely, just augmented.

(And in isolation, the extended partition scheme is not bad; it has the virtue of placing no arbitrary limit on how many extended partitions you can have, and it needs no BIOS upgrades.)

ExtendedPartitions written at 23:43:46; Add Comment

