You really want to put your switches in server racks

September 9, 2011

Once upon a time, not that long ago, when we were perhaps smaller and switches were certainly more expensive, we put our switches in network racks over on one side of the machine room, put all of our servers in server racks, and ran cables under our machine room's raised floor from the servers to the switches. Please learn from our painful experience and don't do that; put almost all of your switches in server racks.

Yes, really. Even if this requires putting a stack of switches in a rack to get enough ports (or enough subnets, if you sensibly use one switch per subnet). Even if you need to put one switch in the front of the rack and another in the rear, just to get enough in (switches are shallow, you can usually pull this trick off).

Why you want to do this is simple. The more network cables you run under the floor, the more you discover the charms of machine room archaeology and the more time you will spend trying to trace and pull old cables when you remove old machines. (Unless you don't have the time to pull 'harmless' unused cables, or you're going to get around to it on some slow day, or you're leaving the old cable in place for now because you're pretty sure you're going to put a new machine in the rack in a bit and you'll just be re-running a cable so let's save some work. Then it gets worse.)

Putting as many switches as necessary in your racks means that you'll run roughly one network cable per switch back to your core switch interconnect points, instead of one or more cables per server. This is a lot fewer cables under the floor (or overhead if you use overhead cable trays, and they get messy too), and that is a very good thing. It also makes it a lot easier to remove and add cables as you remove and re-add servers, which usually drastically increases the chances that you'll actually do it.

Four years ago when I wrote RackNetworking, we had just begun to think about moving from our old way of doing things to putting a bunch of switches in server racks. Since then we've almost entirely moved to the server rack approach, but we still have a number of machines that were cabled up with the old under-the-floor approach; every time I have to clean up after one of those machines (as I had to today), I'm reminded of how much better the new approach is.

Sidebar: our answer for uplink bandwidth

One of my concerns back in RackNetworking was uplink bandwidth from the server rack switches to the core interconnect. In practice this has not been an issue for us, because most of our machines are not heavy bandwidth consumers. We continue to run direct connections to the core interconnect switches for the few machines where we think it may actually matter; I wrote the details up in my writeup on how our network is implemented.


Comments on this page:

From 174.93.25.110 at 2011-09-09 19:51:57:

Another trick that's often useful is to put the switch (or switches) in the centre of the rack.

If you put it at the top (which is traditional), then you need some six-foot patch cables, some four-foot, some two, etc. You then have a large bundle of cables running from the bottom of the rack to the top, blocking airflow.

If the networking is in the middle of the rack, you need fewer different cable lengths. You also have some of the cables come from the top and some from the bottom, so the bundle is half as thick and blocks less of the airflow.

For multiple switches, it may also be good to examine devices that are stackable, so that multiple units act as one. This way you can take two NICs from a server, plug them into two physical switches that act as one virtual switch, and then set up LACP. If you have to do maintenance on the network (or have a hardware issue), you can do it without bringing down the connectivity to the servers in question.

By cks at 2011-09-14 02:22:08:

I agree that putting switches in the middle of the rack has benefits. Unfortunately we tend to regard the middle of the rack as prime real estate (because it's conveniently accessible) and reserve it for things we consider more important.

For our use of multiple switches in one rack, stackable switches wouldn't help us. In almost all of our racks the reason we have multiple switches is that each switch serves a different network and we don't want to change that, partly because it means we can use generic configurations on all of these switches.

