Wandering Thoughts archives

2007-02-27

Things I did not know about until recently: Ethernet splitters

Real cat-5 wiring, such as what they run through the walls of buildings, has eight wires; 100 Mbps and 10 Mbps Ethernet use four wires. If you have fixed cat-5 runs that you can't afford to add to, and you really need additional ports in places that already have one, and you can't afford to buy switches, you can take advantage of this to double up two Ethernet ports onto a single cat-5 run.
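For what it's worth, here is a small sketch of how the eight wires typically get divided up between the two ports; the exact pin assignment is an assumption on my part (it varies by splitter), so check the wiring of your actual hardware before trusting it.

    # A rough sketch of how an Ethernet splitter doubles up two ports on one
    # cat-5 run: 10BASE-T and 100BASE-TX only use RJ45 pins 1, 2, 3, and 6, so
    # a splitter can hand the otherwise idle pins 4, 5, 7, and 8 to a second
    # port. The exact assignment here is an assumption, not gospel.

    SPLITTER_PINS = {
        "port A": {1: "TX+", 2: "TX-", 3: "RX+", 6: "RX-"},  # the normal 10/100 pins
        "port B": {4: "TX+", 5: "TX-", 7: "RX+", 8: "RX-"},  # the leftover pairs
    }

    for port, pins in SPLITTER_PINS.items():
        print(port, "uses RJ45 pins", ", ".join(str(p) for p in sorted(pins)))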

That's an Ethernet splitter. Ours look like slightly jumped-up phone line splitters in industrial beige and are quite easily overlooked until the still-innocent new person around the office wonders how two network cables seem to be coming out of one jack.

I was rather surprised; it had never occurred to me that you could do such perversities to cat-5 Ethernet.

(The true horror only started dawning when I saw the patch panel at the other end, because of course you need a splitter at each end. We are way, way over the wiring density our patch panels were designed for. (I have pictures, but I will spare my readers.))

Our one salvation may be that gigabit Ethernet needs all 8 wires, so if people want to move to gigabit they will have to be prepared to fund switches (or the running of more cat-5 wiring in the walls). Unfortunately, I suspect that a lot of people around here will decide that gigabit is not really that important (for grad students) when they see the true costs.

(Ethernet splitters are not the same thing as Power over Ethernet splitters; the latter are an honorable and useful thing. Sometimes you can even run PoE over Ethernet splitters, resulting in the two sorts of splitters in sequence.)

HardwareDiscovery written at 22:51:23

2007-02-19

What I currently know about Fibrechannel versus iSCSI versus AoE

We're planning a significant capacity increase for our local SAN storage pool, which means that we've been trying to figure out which SAN technology we want to go with. We don't have high-performance IO needs, so we're going for bulk storage: large SATA disks in RAID 5 in some sort of SAN RAID controller. We plan to use Solaris 10 on x86 for the NFS servers that will use the SAN, with DiskSuite for failover. DiskSuite has to own full disks, not just partitions, if it's going to do failover, so we need our SAN RAID controllers to export logical LUNs.

(The local opinion is that we trust Solaris more than Linux as a NFS server, plus it has DiskSuite.)

There are three possible choices: Fibrechannel, iSCSI, and ATA-over-Ethernet. Of these:

  • iSCSI and AoE can be had for about $5k for a 15-bay 3U RAID controller, in the form of the Promise VTrak M500i or the Coraid SR1521. You buy commodity SATA disks yourself from your cheap source of choice.
    (There are probably other vendors for 15-bay 3U iSCSI controllers; we haven't looked very hard.)
  • FC costs about $5.5k and up for a 12-bay 2U RAID controller. You have to buy the disks through the controller vendor, at a not insignificant markup.
  • as a result, FC costs about twice as much per terabyte as iSCSI or AoE (see the back-of-the-envelope sketch below).
  • people on campus have positive experience with all three options, and none of them have blown up yet.
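To put a rough number on that 'twice as much per terabyte' claim, here is a back-of-the-envelope sketch. The enclosure prices are the figures above; the per-disk prices and the 500 GB disk size are illustrative guesses for commodity versus vendor-supplied SATA disks, not actual quotes, so only the general shape of the answer matters.

    # Back-of-the-envelope cost per raw terabyte for a fully populated
    # enclosure. Enclosure prices are the rough figures quoted above; the
    # per-disk prices and the 500 GB disk size are illustrative guesses.

    def cost_per_tb(enclosure_cost, bays, disk_cost, disk_tb=0.5):
        """Total cost of a fully populated enclosure divided by its raw capacity."""
        return (enclosure_cost + bays * disk_cost) / (bays * disk_tb)

    # iSCSI/AoE: ~$5k for a 15-bay enclosure plus commodity SATA disks.
    iscsi = cost_per_tb(5000, 15, 150)
    # FC: ~$5.5k for a 12-bay enclosure plus vendor-supplied disks at a markup.
    fc = cost_per_tb(5500, 12, 400)

    print("iSCSI/AoE: about $%.0f per raw TB" % iscsi)
    print("FC:        about $%.0f per raw TB" % fc)
    print("FC / iSCSI ratio: about %.1fx" % (fc / iscsi))

The exact ratio moves around with whatever disk prices you plug in, but the FC requirement to buy disks through the controller vendor keeps it well ahead on cost per terabyte.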

  • AoE and iSCSI cost much less than Fibrechannel to add stuff to later, because Fibrechannel switches are really expensive and thus a) you tend not to have many spare switch ports and b) getting more switch ports is expensive.
  • similarly, it is much cheaper to have redundancy and spares for your AoE or iSCSI switching fabric.
  • for at least AoE, you want switches and machines that can do jumbo frames. This probably won't hurt for iSCSI either.

  • there is only one vendor of AoE RAID controllers, Coraid. Coraid's stuff currently does not do logical LUNs within a single disk array.
  • while Promise's stuff does do logical LUNs, it has some limits on how many you get within a single disk array. Fortunately we seem unlikely to run into them.
  • Coraid's management and monitoring software seems to be less advanced than Promise's, which will do things like mail you problem reports.

  • AoE has a much simpler specification than iSCSI, but this is somewhat misleading because the AoE spec doesn't say what ATA commands you must support in order to talk to common AoE implementations, and thus doesn't include a spec for them; in practice the AoE spec has to be considered to include some of the ATA spec itself.
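For a feel of just how small the AoE side of things is, here is a sketch of parsing the fixed AoE header as I read the public spec (a version/flags byte, an error byte, a 16-bit shelf address, an 8-bit slot address, a command byte, and a 32-bit tag); treat the details as my interpretation rather than anything authoritative.

    import struct

    # The fixed AoE header that follows the Ethernet header (ethertype 0x88A2),
    # as I read the public AoE spec. Everything after these ten bytes is
    # command-specific, and for ATA commands you are straight into ATA
    # register values, which is where the 'real' complexity lives.

    AOE_HEADER = struct.Struct("!BBHBBI")  # ver/flags, error, major, minor, cmd, tag

    def parse_aoe_header(payload):
        ver_flags, error, major, minor, command, tag = AOE_HEADER.unpack_from(payload)
        return {
            "version": ver_flags >> 4,
            "flags": ver_flags & 0x0F,
            "error": error,
            "shelf": major,      # shelf.slot is how AoE addresses a target
            "slot": minor,
            "command": command,  # 0 = issue ATA command, 1 = query config
            "tag": tag,          # echoed back so the initiator can match replies
        }

Compare that with iSCSI, where you are parsing PDUs inside a TCP stream, with login phases, digests, and negotiated parameters to get through before you are anywhere near a SCSI command.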

  • Linux has both AoE and iSCSI drivers in the standard kernel.
  • Solaris 10 has standard iSCSI drivers, but no standard AoE ones. Coraid is sponsoring the development of an open source AoE driver, but it's currently only tested on SPARC systems, not x86 systems, and may not yet fully support ZFS (apparently ZFS needs the disk driver to support some new operations). However, it supports Solaris 7, 8, and 9 in addition to Solaris 10.
  • the Linux AoE driver is about 2,000 lines; the iSCSI driver is about 4,000 lines, or around 5,800 once you count the iSCSI library as well. The AoE driver is a straight block driver, while the iSCSI driver is a SCSI driver.

  • there is a non-integrated AoE target driver for Linux called vblade. No one else has AoE target drivers. (Target drivers allow you to use a machine as a SAN RAID controller that exports storage to other machines.)
  • there is an integrated iSCSI target driver for Solaris 10, although it is not yet in official releases.
  • there are a number of Linux iSCSI target drivers; none are integrated (yet).

  • in general, iSCSI is more mature and widely supported than AoE.
  • Linux seems to have better support for AoE than for iSCSI, which is probably because AoE is simpler and has fewer peculiar bits. (There is a certain enterprisey smell about iSCSI.)

Since we are not interested in building our own SAN RAID controllers, we are almost certainly going to wind up with iSCSI; AoE is unsuitable on several grounds, and Fibrechannel costs too much for what we get. If we were building our own SAN RAID controllers out of PCs running in target mode, I would be very tempted by AoE because of the simplicity of all of the bits involved (and building our own would overcome several of the things that make AoE unsuitable).

FCvsiSCSIvsAOE written at 22:33:40

2007-02-12

Why thin clients are doomed (part 2)

In an earlier entry I gave short-term and long-term reasons why I think thin clients are doomed. Now it's time for the high-level third reason (which occurred to me only after I'd written ThinClientDoom, although it may have been lurking in there too).

So far, people have always found productive uses for new computing capabilities and more computing power; time and time again, seemingly silly capabilities have turned out to have important uses, to the point where they've become ubiquitous. Betting on thin clients ultimately amounts to betting either that this has stopped or that the difference in capabilities and productivity is not big enough to be important.

Both bets seem rather dubious. History is strongly against the former and the latter is at least a dangerous assumption, especially given our history of underestimating the usefulness of things (and how bad we seem to be at straightforward cost/benefit analysis when it comes to computers).

I think this also explains why thin clients are so seductive. At any given time, computers usually have some capabilities that we're not using very well; looking just at that moment in time, it's awfully tempting to say 'we'll never need that', and go for the machines without the capability.

(And the problem with going with thin clients, even for a nominal short term, is that while hardware can turn over relatively rapidly, infrastructure design does not. You could probably build an infrastructure where you can flip-flop between thin clients and dataless clients, but I don't think very many people do, especially if they start with just one or the other.)

ThinClientDoomII written at 21:10:04

2007-02-07

Why I think thin clients are doomed

Right now, thin clients are doomed for the same reason that they've been doomed before: users want too much. I believe that watching YouTube videos and plugging in their USB keys are pretty much the minimum level of features that users will expect and accept; if they can't do both, they're operating in a fundamentally crippled computing environment, which is not the way to make them happy. And unhappy users sooner or later push back.

(YouTube is not merely desirable by itself, it's also a convenient proxy for other useful things. If you can't do YouTube, you probably also can't do video conferencing, watch training videos at your computer, or even use VoIP, which wants low-latency audio.)

But that's just the short-term doom, and a lot of it can be overcome with enough work. Unfortunately, that illustrates the long-term doom for thin clients: making everything work in a thin client environment always takes extra engineering work and extra time. Desktop computing has reached a point where thin clients are doomed to a perpetual second-class citizenship, and second-class citizens have never done very well.

It's popular to argue that all many people need to do their job is access to a small number of applications, and that it's much cheaper to provide this through thin clients. I don't think this is going to succeed, because 'it's more cost-effective to provide you with a crippled environment' is not something that resonates with most people, which means that they're going to try to escape as soon as they can.

(Besides, history is against this argument, since it is just a rerun of the mainframe and dumb terminal arguments from the 1980s and we know how those came out in the end.)

Sidebar: thin clients versus dataless clients

The distinction I draw between thin clients and dataless clients is that thin clients do the computing elsewhere while dataless clients do local computing but don't have important local data. While thin clients are doomed, I think that dataless clients have an increasingly bright future.

ThinClientDoom written at 21:36:24

