2011-12-26
Labs versus offices for sysadmins (or at least us)
On the one hand, lab areas are great because they mean that noisy machines aren't in your office. On the other hand, lab areas are bad because they aren't your office: when you have to work in them they're noisy, often uncomfortable, and missing your system setup, your phone, and so on. Really what you want is machines in the lab that you can fully control from your office with at most occasional in-person visits; sadly we rarely get that.
This means that there's a constant tension between putting test machines in the lab area and putting them in your office. At least around here, what tends to happen is that relatively quiet hardware winds up in people's offices for testing rather than being dropped in one of our lab areas; the annoyance of having the hardware in your office is less than the annoyance of having it somewhere else. In turn, this drives our desire for lots of drops (per earlier entries), and when we don't have lots of drops people wind up running network cables between offices because it's still more convenient than trying to rig something up in a lab area.
(Individual tolerances for noise vary; my co-workers are far more tolerant than I am and so they have a lot more stuff in their offices than I do in mine. This may also mean that our lab areas aren't set up well; they have no more network drops than our offices do, since the entire building was wired up a long time ago.)
Now, I've kind of given an incomplete view of what we do with hardware. We don't really have a good lab area that's isolated enough for actively loud hardware, like your garden variety really noisy 1U servers, so they wind up getting shoved into a machine room if they're going to hang around for long. What we mostly wind up using either in offices or in the lab area are things like switches or various desktop machines that we use to build test servers and test networks; for example, if we want to build a duplicate of our Samba and CUPS environment we don't do it on actual 1U servers, we just grab a couple of spare desktops and start installing. They're not as powerful as the real thing and they're not quite identical to it, but we can put the same software on them and they're a lot more convenient (quieter, less demanding of power, easier to find space for, etc).
(Some people use virtualization for this. Locally, I'm the only really active user of this approach; my co-sysadmins mostly prefer using real hardware.)
More wiring for sysadmins: sysadmins and gigabit networking
In reaction to comments on his entry The Other Way, Matt Palmer wrote in part, addressing my concerns (from WiringForSysadminsII) about office switch uplink bandwidth for sysadmin drops:
[...] What I question is the need for constant, sustained gigabit over an extended period to another isolated machine such that you need a dedicated link to them.
I sort of half-agree that sysadmin machines and drops don't need constant, sustained gigabit bandwidth (although I'm not entirely sure about that). But what they do need is occasional periods of real gigabit bandwidth, and real gigabit bandwidth when you can be absolutely sure that the underlying link will deliver gigabit data rates and the only performance limits are those created by the machines, switches, and so on at either end.
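(To illustrate the kind of test I mean, here's a rough Python sketch that just pushes bytes over a TCP connection between two machines and reports the achieved rate; the port, chunk size, and transfer total are arbitrary picks for illustration, and in practice a dedicated tool like iperf does this job better.)

    # Throughput-test sketch: one side runs 'serve', the other runs it with
    # the server's hostname and sees how fast it can push data to it.
    # Port, chunk size, and total transfer are arbitrary illustrative values.
    import socket, sys, time

    PORT = 5001
    CHUNK = 64 * 1024          # 64 KiB per send
    TOTAL = 2 * 1024 ** 3      # push 2 GiB in total

    def serve():
        # Far end of the link under test: just drain whatever arrives.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        while conn.recv(CHUNK):
            pass
        conn.close()
        srv.close()

    def measure(host):
        # Near end: send TOTAL bytes and report the rate in Mbit/s.
        buf = b"\0" * CHUNK
        sent = 0
        start = time.time()
        conn = socket.create_connection((host, PORT))
        while sent < TOTAL:
            conn.sendall(buf)
            sent += len(buf)
        conn.close()
        elapsed = time.time() - start
        print("%.0f Mbit/s over %.1f seconds" % (sent * 8 / elapsed / 1e6, elapsed))

    if __name__ == "__main__":
        if sys.argv[1:] == ["serve"]:
            serve()
        else:
            measure(sys.argv[1])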
If I'm testing how fast hardware and software can go or if I'm trying to investigate network performance problems that have been reported to us, I need an environment that is not artificially contaminated by other networking traffic coming in to my heavily VLAN'd office switch. I know that some amount of contamination is there (I have tcpdumps of our internal networks and some of them are remarkably noisy); it may be enough to be significant, or it may not be. I don't want to have to guess about it and make assumptions. I want a clean gigabit, one that's as close as possible to what machines and users would see in the real environment in our machine rooms or in user offices.
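(A quick way to put a number on that contamination, rather than eyeballing tcpdump output, is to count what shows up on the test interface while it's supposedly idle. The following Python sketch is Linux-only, needs root, and the interface name is just an example.)

    # Count packets and bytes arriving on an interface over a short window,
    # to get a rough idea of how much background traffic a 'quiet' test
    # port actually sees. Linux-only (AF_PACKET raw socket), run as root.
    import socket, time

    IFACE = "eth1"      # the interface on the link being tested (example)
    WINDOW = 10.0       # seconds to watch

    ETH_P_ALL = 0x0003
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((IFACE, 0))
    s.settimeout(0.5)

    pkts = nbytes = 0
    deadline = time.time() + WINDOW
    while time.time() < deadline:
        try:
            frame = s.recv(65535)
        except socket.timeout:
            continue
        pkts += 1
        nbytes += len(frame)

    print("%d packets, %.1f Kbit/s of background traffic on %s in %.0f seconds"
          % (pkts, nbytes * 8 / WINDOW / 1000.0, IFACE, WINDOW))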