2007-01-15
Configuring VLANs on Fedora Core
Interactive VLAN configuration is done with the vconfig program. The
basic usage is 'vconfig add eth0 6'; this makes a new Ethernet device
called eth0.6 (by default; vconfig can change this, but you probably
don't want to). 'vconfig rem eth0.6' will then remove the VLAN.
A configured VLAN is up enough so that you can receive traffic on it. If
all you're interested in is doing things like bridging virtual machines
onto the VLAN's network, you don't need to do anything more at the host
level; otherwise, you're going to need to give the VLAN interface an
IP address somehow. I don't recommend using DHCP, because as far as I
know there's no way to tell the Fedora DHCP clients to not helpfully
rewrite your /etc/resolv.conf for the new network.
(Really what one wants is a 'shut up and get me an IP address, JUST an IP address, no routes, no nothing' option for some DHCP client. But this is kind of an obscure thing, so I can understand why it's not there.)
For permanent configuration, you can create ifcfg scripts in
/etc/sysconfig/network-scripts. The minimum contents are:
DEVICE=eth0.6
VLAN=yes
ONBOOT=yes
(You can say 'ONBOOT=no' if you really want to; I suppose 'ifup
whatever' is marginally less typing than doing the vconfig by
hand.)
The 'VLAN=yes' bit is the important magic. With this, Fedora cracks
open the device name to conclude that this is VLAN ID 6 on eth0, and
sets it up appropriately (yet another reason not to try to change
vconfig's VLAN name format). Fedora is perfectly willing to bring up
VLANs that have no assigned IP address, and this is how I have mine set
up. I name my VLAN ifcfg files things like 'ifcfg-vlan6', but I believe
this name format is not required.
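For reference, a fuller ifcfg file for a VLAN interface with a static IP address might look something like this (the address and netmask here are made-up examples, and I believe 'BOOTPROTO=none' is the usual way to say 'static'):

```
DEVICE=eth0.6
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.6.10
NETMASK=255.255.255.0
```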
There's an alternate format for the VLAN ID and base device information:
DEVICE=vlan6
PHYSDEV=eth0
VLAN=yes
ONBOOT=yes
(For VLAN ID 6 on eth0 again.)
As far as I can see, you still get a device called 'eth0.6' out of
this, not one called 'vlan6'.
Fedora's tcpdump understands VLANs and so can be used to dump the
traffic on eth0 so you can see what VLANs are actually reaching your
machine. However, just to confuse you, it will not print the VLAN
ID information unless you ask it for link-level headers with -e.
(Although it will happily receive and dump the packets, which can be
really confusing; you need to remember to ask for 'not vlan and ...'
if you want to see just the untagged base traffic on your link.)
Because VLAN devices are regular Ethernet devices, you can use
tcpdump on them to see just traffic for that particular VLAN.
This traffic is naturally already detagged.
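To put the tcpdump bits together, here is roughly what I mean (the interface names are just examples, and these all need root to run):

```
tcpdump -e -i eth0 vlan       # tagged traffic on the link, with VLAN IDs shown
tcpdump -i eth0 'not vlan'    # just the untagged base traffic
tcpdump -i eth0.6             # traffic for VLAN 6 only, already detagged
```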
(This is the kind of entry I write so that I have all of this information in one place the next time I need it.)
2007-01-11
Xen versus VMware for me
I've recently spent some time thinking about virtualization technology, in particular how I feel about Xen versus VMware. In theory the choice should be a slam dunk for me, because VMware requires evil binary kernel modules and I have usually avoided that sort of thing even when it was somewhat painful (e.g., I don't use ATI's binary X drivers). In practice, though, I am only distantly lukewarm on Xen and much more interested in VMware.
The simple summary of why this is so is that Xen is a commitment, but VMware is a flirtation.
I call VMware a flirtation because it will run on an otherwise ordinary system (provided that I have enough kernel source around to compile its modules). Installing and using VMware is thus a reasonably casual thing, something I can do without particularly perturbing the system. I can even normally boot my system with VMware's kernel modules loaded, and only contaminate things if I actually need it this time around.
Xen's hypervisor approach requires a much bigger change in my system; I can't add Xen to a running system, I have to boot the Xen environment from the start. Which requires a special Xen-enabled kernel, which requires special patches that you have to integrate into your kernel, which gives me nervous flashbacks. Being able to run stock kernels is valuable for me, and with Xen I give that up, so setting things up for Xen means a fairly significant commitment.
(If Xen ran on top of a stock kernel.org kernel, I would be much more interested, because it would be much less of a commitment.)
There are also additional reasons for my tilt towards VMware. First, VMware has a more polished environment right now, with a GUI and various convenient bits and all. Second, VMware's approach is much more useful for what I do with virtualization than Xen's is, because VMware is oriented towards running unmodified guest OSes in a complete virtual PC environment.
The Xen hypervisor approach is I think clearly the right way of doing things like running services in isolated virtual machines. But I'm not doing that; instead I'm using virtualization for testing, where I really want to run the exact same kernel and environment that I will wind up running on the real hardware. (And thus it is a definite feature that VMware emulates a complete PC, hokey BIOS booting and all.)
2007-01-06
Fixing up .rpmnew files
For some reason, rpm in Fedora Core 6 (and I believe Fedora Core 5 too)
has become overly twitchy about replacing configuration files; a lot of
the time it will write .rpmnew files instead of just replacing the old
version of the file, even when the old file is unchanged and completely
identical to the new one.
While I think this is multiarch stuff in action, I still have to clean it up every so often. I use a little scriptlet for this, usually typed at the command line:
for i in *.rpmnew; do
    n=$(basename "$i" .rpmnew)
    if cmp -s "$i" "$n"; then
        mv -f "$i" "$n"
    else
        diff -u "$n" "$i"
    fi
done
(Yes, this can be done on one line if you're crazy enough. I admit that
I usually turn the if block into either 'diff -u $n $i' or, once
that's shown me nothing important, 'cmp -s $i $n && mv -f $i $n'.
And if it was an actual script, I would have to start making it more
elaborate, with features like a dryrun option.)
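As an illustration, here is the scriptlet at work in a scratch directory (the file names and contents are invented for the example, and I've tacked a '|| true' onto the diff so that a difference doesn't abort a 'set -e' shell):

```shell
# Exercise the .rpmnew cleanup loop on some made-up files
# in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp"
printf 'alias a b\n' >foo.conf
printf 'alias a b\n' >foo.conf.rpmnew    # identical to foo.conf
printf 'opt=1\n' >bar.conf
printf 'opt=2\n' >bar.conf.rpmnew        # differs from bar.conf
for i in *.rpmnew; do
    n=$(basename "$i" .rpmnew)
    if cmp -s "$i" "$n"; then
        mv -f "$i" "$n"                  # identical: replace quietly
    else
        diff -u "$n" "$i" || true        # different: show the diff
    fi
done
# foo.conf.rpmnew has silently replaced foo.conf;
# bar.conf.rpmnew is left in place for manual attention.
ls
```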
At first I thought I had an idea why this was happening, but the more I
look at the rpm source code and the RPMs involved in my most recent
case, the more confused I get. As far as I can decode from the rpm
source code, this can't happen unless either the actual file on disk
has been changed (which it hasn't), or there's an existing config file
before the package is installed for the first time (not applicable for
package upgrades). It does seem to involve only things that have RPMs
for more than one architecture installed, although I haven't tested
that extensively.
2007-01-04
The scars of my NPTL experience
It's only recently that I've realized how jumpy my NPTL (the 'Native POSIX Thread Library' for Linux) experience still makes me about certain things.
NPTL was the new, improved Linux threading library, supplanting the old and less efficient LinuxThreads library. To get its good performance it needed some kernel support, kernel support that was only added in the Linux 2.5 development kernels (and then in Linux 2.6 when 2.5 turned into 2.6).
However, Red Hat was a big NPTL booster and they wanted to use NPTL well before 2.6 was ready and usable. So, starting with Red Hat 8, they hacked NPTL support into the version of the 2.4 kernel that they shipped. Since the system basically required NPTL, this meant that you couldn't really use anything except Red Hat's kernels.
At the time, we wanted (and sometimes needed) to use stock kernels on our servers. So, no Red Hat 8 for us.
This might have been OK if Red Hat had only done it for one release. However, the 2.6 kernel wasn't ready for use until Fedora Core 2; RH 8, Red Hat 9, and Fedora Core 1 were all NPTL-on-2.4 releases. The upshot was that pretty much all of our servers had to stay all the way back at Red Hat 7.3 for quite a while, which really was no fun.
Ever since then I have been perhaps irrationally twitchy about being forced to depend on a specific vendor's kernel hacks, however useful they may be and however little I may really need to run my own kernels on servers any more.