Disk space in the modern world

December 23, 2011

A while back I wrote about someone looking for long-term archives of 10 to 20 TB of data; my conclusion was that you shouldn't try to build archives, just a live fileserver. As it happens, I have a theory about why the question was asked in the first place: I think that many of our sysadmin instincts about disk space are miscalibrated for the modern world.

To put it simply, I suspect that a lot of sysadmins come from a time when 10 or 20 terabytes was a heart-stoppingly big amount of disk space. If you needed multiple terabytes of space, you needed a big solution: something that would take up a bunch of space, cost a lot of money, and call for a bunch of careful planning to design and spec out. 20 TB of disk space wasn't something you could put together casually; it was big iron, at least by the standards of non-enterprise setups. In short, 10 or 20 terabytes was a big deal.

That's no longer the case. In the modern world, 20 terabytes is no longer big iron (although it's still not trivial). It's perfectly sensible and only a little bit expensive to put together an environment with that much disk space, and it no longer needs extensive planning. Our instincts will adjust to this new reality in time, but in the meantime I sometimes have to remind myself that terabytes of disk space aren't a big deal any more.
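To make this concrete, here's a back-of-the-envelope sizing sketch. The 2 TB drive size is my assumption (a common commodity SATA capacity around late 2011), and the RAID-6 layout is just one plausible choice, not anything the original discussion specified:

```python
# Rough drive-count math for a ~20 TB pool, assuming commodity
# 2 TB SATA drives (an illustrative size, not from the original post).
TARGET_TB = 20
DRIVE_TB = 2

# Raw capacity: how many drives just to hold the data.
raw_drives = TARGET_TB // DRIVE_TB

# With RAID-6, two drives' worth of space goes to parity, so usable
# capacity is (n - 2) * drive size; add two drives to compensate.
raid6_drives = raw_drives + 2

print(raw_drives)    # 10 drives of raw capacity
print(raid6_drives)  # 12 drives for a RAID-6 array with ~20 TB usable
```

A dozen drives fits in a single mid-tower case or a small external enclosure, which is the point: this is a weekend project, not a data-center procurement exercise.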

(This is a fairly recent development, one created by affordable terabyte-plus hard drives and the general adequacy of SATA drives. Of course, hard drive space isn't the only thing this is happening to; we've been seeing a similar effect with RAM for a while.)

