Wandering Thoughts archives

2012-02-29

SSDs and understanding your bottlenecks

In a comment on my entry on five years of PC changes, it was suggested that I should have used a SSD for the system disk. I kind of addressed this in the original entry on my new machine's hardware specifications, but I want to use this to talk about understanding where your bottlenecks are (and aren't).

The simple way of talking about the benefits of SSDs is to say that they accelerate both reads and writes, especially synchronous writes and random IO in general (because SSDs have no seek delays). But phrasing it this way is misleading. What SSDs actually accelerate is real disk IO, which is not the same as either OS-level reads and writes or what you think might produce disk IO. This is fundamentally because every modern system and environment tries to keep as much in memory as possible, because everyone is very aware that disks are really, really slow.

(Even SSDs are slow when compared to RAM.)

Thus when you propose accelerating any disk with a SSD, there are two questions to ask: how much do you use the disk in general and how much actual disk IO is happening. There's also a meta-question, which is how much of this IO is actually causing visible delays; it's quite possible for slow IO to effectively be happening in the background, mostly invisible to you.
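
To make the second question concrete: on Linux you can watch the real IO a disk sees with iostat, or by sampling /proc/diskstats yourself. Here's a minimal sketch of the latter in Python, assuming a Linux system and a hypothetical system disk called sda; it reports how many megabytes were actually read from and written to the disk over an interval:

    import time

    def disk_sectors(device):
        # /proc/diskstats fields: major, minor, device name, then the IO
        # counters; fields[5] is sectors read, fields[9] is sectors written.
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[5]), int(fields[9])
        raise ValueError("no such device: " + device)

    def watch(device, interval=10):
        r1, w1 = disk_sectors(device)
        time.sleep(interval)
        r2, w2 = disk_sectors(device)
        to_mb = 512 / (1024.0 * 1024.0)   # diskstats counts 512-byte sectors
        print("%s: %.1f MB read, %.1f MB written over %d seconds"
              % (device, (r2 - r1) * to_mb, (w2 - w1) * to_mb, interval))

    watch("sda")

If this reports only a trickle of IO during normal use, the system disk is not where your time is going and an SSD has very little to accelerate there.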

Although I haven't measured this, my belief is that system disks on Unix machines are in many ways a worst case for SSDs. I tend to think that my desktop environment is relatively typical: I normally use only a few programs, many of them are started once and then stay running, and I often already have a running instance of many of the programs I re-run (for example, xterms and shells; a Unix system is basically guaranteed to always have several instances of /bin/sh already running). All of these act to limit the amount of OS-level reading and writing being done and to increase the effectiveness of OS memory reuse and caching. Even on a server, this pattern is likely to be typical; you need an unusual environment to really be using lots of programs from /usr/bin and libraries from /usr/lib and so on, and to be doing so more or less at random.

(Also, note that much system disk IO is likely to be sequential IO instead of random IO. Loading programs and reading data files is mostly sequential, for example.)

Given this usage pattern, the operating system almost certainly doesn't need to cache all that much in order to reduce IO on the system disk to almost nothing. How likely it is to be able to do that depends on how much memory the system has and what you're doing with it. Here the details of my hardware matter, specifically that I have 16 GB of RAM and don't run all that much that uses it. Ergo, it is all but certain that my OS will be able to keep enough in memory to reduce system disk IO to almost nothing in normal use. If the system disk is barely being used most of the time, making it an SSD isn't going to do very much most of the time; there just isn't anything there for the SSD to accelerate.

Now, here's something important: saying that an SSD wouldn't make a difference most of the time isn't the same thing as saying an SSD would never make a difference. Clearly an SSD would make a difference some of the time, because my system does sometimes do IO to the system disk. Sometimes it does a fair bit of IO, for example when the system boots and I start up my desktop environment. If you gave me an SSD for free, or if 250 GB SSDs were down in the same $50 price range that basic SATA disks currently are, I would use one. But they aren't, not right now, and so my view is that SSDs for system disks are not currently worth it in at least my environment.

(I also feel that they're probably not that compelling for server system disks for the same reasons, assuming that your server disks are not doing things like hosting SQL database storage. They are potentially attractive for being smaller, more mechanically reliable, and drawing less power. I feel that they'll get really popular when small ones reach a dollar per GB, so a 60 GB SSD costs around $60; 60 GB is generally plenty for a server system disk and that price is down around the 'basic SATA drive' level. It's possible that my attitudes on pricing are strongly influenced by the fact that as a university, we mostly don't have any money.)

Note that user data is another thing entirely. In most environments it's going to see the lion's share of disk IO, both reads and writes, much more of it will be random IO than the system disk sees, and a lot of it will be things that people are actually waiting for.

PS: it's possible that the inevitable future day when I switch to SSDs for my system disk(s) will cause me to eat these words. I'm not convinced it's likely, though.

Sidebar: mirroring and SSDs

Some people will say that it's no problem using a single SSD for your system disk because it's only your system disk and SSDs are much more reliable than HDs (after all, SSDs do not have the mechanical failure issues of HDs). I disagree with them. I do not like system downtime, I have (re)installed systems more than enough times already, and I count on my workstations actually working so that I can get real work done.

(If you gave me an SSD for free I would probably use it as an unmirrored but automatically backed up system disk, paired with a HD that I could immediately boot from if the SSD died. But if I'm paying for it, I want my mirrors. And certainly I want them on servers, especially important servers.)

SSDsAndBottlenecks written at 22:50:31

The two sorts of display resolution improvements

Recently I read Matt Simmons' Retina display for Apple: Awesome for everyone, which is roughly about how Apple's increasing use of high resolution 'retina' displays will be good for everyone who isn't happy with garden variety 1080p displays. While I like the general sentiment, I want to sound a quiet note of contrariness because I think that more 'retina' displays will not necessarily do what Matt Simmons wants.

You see (and as Matt knows), there are two sorts of resolution in action here: physical screen size and DPI. Apple's products with retina displays demonstrate this beautifully; they are physically small but have a very high DPI (at least by computer standards; they are low but acceptable by print standards). What most sysadmins want is more physical size with an acceptable resolution; this is the 'more terminal windows' solution. Based on current practice we're okay with relatively low resolutions, on the order of 75 to 95 DPI or so.

(Today I think of 24" 1920x1200 widescreen displays as relatively commodity LCDs and as the starting point for decent sysadmin gear; 19" LCDs now strike me as kind of small. This is roughly 95 DPI and also large enough that two of them side by side are hard to fit on a desk and expose other issues.)
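
To show where that figure comes from, DPI is just the diagonal pixel count divided by the diagonal size in inches; a quick check in Python:

    import math

    def dpi(width_px, height_px, diagonal_inches):
        # pixels along the diagonal divided by inches along the diagonal
        return math.hypot(width_px, height_px) / diagonal_inches

    print("%.0f DPI" % dpi(1920, 1200, 24))   # about 94, ie the 'roughly 95 DPI' above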

Increasing DPI without increasing size doesn't really let you get any more windows on the screen; instead it gives you better and smoother rendering of text and other things. This can sometimes make it feasible to use smaller text in windows, but there are strong limits on this (unless you like squinting at tiny text, and trust me, you don't really). Higher DPI just plain looks better, though, and Apple's 'retina' displays are up into what used to be the realm of basic laser printing.

There are good reasons for Apple and other makers of small devices to push for high DPI displays. The devices generally can't get physically larger screens (they won't fit the ergonomics), and higher DPI makes small text and non-Latin text much more readable (my impression is that ideogram-based writing systems especially benefit from high DPI). But I'm not at all confident that these high DPI small devices will get makers of conventional displays to do anything to pick up their game, since the environments and the constraints are so different. It probably doesn't help that many people buy regular displays based primarily on price.

(One issue with high DPIs in general is that the sheer pixel counts start getting daunting for decent sized physical displays with high DPIs. My 24" widescreen display at 300 DPI would be around 6000 by 3800 pixels. With 24-bit colour the framebuffer alone is 65 megabytes of RAM, which means 3.8 Gbytes a second of read bandwidth simply to output it at 60 Hz.)
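
To spell that arithmetic out in Python, assuming 3 bytes per pixel for 24-bit colour and ignoring scanout overhead such as blanking intervals:

    width_px, height_px = 6000, 3800     # a 24" 16:10 panel at about 300 DPI
    bytes_per_pixel = 3                  # 24-bit colour

    framebuffer = width_px * height_px * bytes_per_pixel
    print("framebuffer: %.0f MB" % (framebuffer / 2.0 ** 20))           # about 65 MB
    print("60 Hz scanout: %.1f GB/s" % (framebuffer * 60 / 2.0 ** 30))  # about 3.8 GB/s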

TwoSortsOfDisplayResolution written at 01:34:12

2012-02-13

The problem with long-term production support of things

In FreeBSD and release engineering, Nathan Willis summarizes one of John Kozubik's suggestions for FreeBSD release engineering:

[...] Second, the project should focus on just one production release at a time, and commit to a definite support schedule (Kozubik suggests five years as a production release, followed by five years as a "legacy" release). [...]

My reaction is that this is completely infeasible in any open source project that developers will actually want to work on. As I mentioned, this is a Rorschach test for what you consider 'supporting a production release' to mean; I'm basing my opinion here on my own views of what that support involves.

To understand the problem, let's start with the implications of having only a production release. The problem here is that developers want to fix things, and they can only be pacified for so long by developing new features. Sooner or later they want to improve or reform or clean up something that already exists, and then the release manager has to stand in the way saying 'no'.

Well, no problem; you'll branch off a development version from the production release. Except now you have two problems. First, you've got to provide new hardware support for the production release, but the development release is going to increasingly diverge from it; over time you tend towards having to develop two different drivers for new hardware, one for the old environment of the production release and one for CURRENT, the development version.

Second, developers generally don't want to wait years before their code is released (sometimes this is phrased differently, from a more technical angle). It's not very motivational to work on something knowing that the next production release is three years away and most people won't see your effort until then. Developers are going to want more frequent releases, perhaps much more frequent. If you don't provide them actual releases, I think you are going to wind up with the old Debian unstable situation all over again (if you don't lose developers).

(Also, note what this does to people who want to pay to have features developed. Those people are not very interested in waiting years before the features appear in the next production release, but they are probably also not very interested in running CURRENT. What you're likely to get is a fragmentation of your production release into multiple releases that are something like 'production, plus some of CURRENT that's proven stable and important, plus important feature X that we needed enough to pay for'. And you run the risk that people will only fund driver development for production and leave you to forward port their drivers to CURRENT.)

If you accelerate your release schedule (say to one production release every two and a half years) but keep the support periods the same, you multiply your effort; now you've got to backport things to multiple production releases, not just one release. Such backporting is drudge work and not very attractive to most developers. You've basically traded one demotivation for developers (slow appearance of code in production) for another (much more backporting work).

(Many developers will probably be okay with that, because as far as they're concerned backporting stuff is somebody else's problem; they only work on CURRENT.)

By the way, much of this is not theoretical. Linux kernel development used to be split into a stable (2.x for even x) and development (2.x for odd x) series of kernels. It didn't work very well, with all sorts of issues and failures, and has been solidly abandoned in favour of a rolling evolution where the Linux kernel hackers have declared that it is someone else's problem to do long term stable releases.

(See also Open source projects and programs versus products.)

Sidebar: the other problem with ten years of support

Unless you take a very narrow view of what an operating system is, a modern OS is made up of far too many components for one team to support. FreeBSD, your favorite Linux distribution, and even Solaris are actually aggregations of software; not just the base kernel, libraries, and utilities, but also the C compiler, environments like Perl and Python, and higher level systems like Apache and Samba and BIND.

The original developers of all of those pieces that you're aggregating together are extremely unlikely to agree to provide ten years of security fixes and major bug fixes for any release of their software. Probably they're not going to agree to even five years. This means that as time goes on, you will be increasingly called on to do all the work of maintaining that software yourself. Just as with backporting fixes to your own components, this is drudge work with all that that implies.

(It's also difficult, because you didn't write the software in the first place.)

LongtermSupportProblems written at 22:18:10

2012-02-06

The advantage of HDMI for dual displays

One of the interesting things that happened during my five years of hardware hibernation is that when I woke up, even low end (aka passively cooled) graphics cards could suddenly drive two digital outputs. Back in 2006 it was common for cards to have one analog and one digital out (eg, the ATI X300 in my work machine had VGA plus DVI), but getting dual digital out required an expensive card with an often noisy fan.

(I actually went through two such cards at work, each time deciding that I couldn't see enough advantage to driving my second display digitally instead of via analog VGA to be worth putting up with the noise. Possibly I wasn't sensitized enough to VGA artifacts and issues.)

What I have to thank for this is HDMI. Now, I'm aware that there's a lot to dislike about HDMI (see eg HDCP), but from my perspective the great thing about it is that it's given even low end cards a second digital output; it seems to be common for cards to have both DVI and HDMI. Some modern displays can be directly driven over HDMI and for the others, a simple cable will go from HDMI to DVI. And so my 2011 low end, passively cooled graphics card will now drive both my displays at work digitally, one directly with DVI and one with an HDMI to DVI cable, which is something that I never managed nicely before now.

(I believe that this has resolution limits. I don't use really big LCDs, so these haven't affected me.)

One of the interesting questions for me is why this happened. Why did graphics card vendors start putting HDMI on everything, where they only rarely did dual DVI? I think that part of the reason is that HDMI uses a physically small connector. DVI uses a relatively big connector and if you look at the back of a graphics card (especially a dual-DVI graphics card), there just isn't all that much physical space there; it's hard to get two DVI connectors and anything else in. By contrast, HDMI connectors are much smaller (I can't find the exact dimensions, but some sources say a third of the size). This makes it much easier to find the physical room for a HDMI connector on a card edge and on a circuit board.

(For example, my current graphics card just fits in VGA, DVI, and HDMI connectors with basically no spare room.)

PS: I don't think it's a coincidence that DisplayPort, the theoretical next generation replacement for DVI, also has a small connector. I suspect that the graphics card layout designers had a few words with people.

(Of course pretty much everything seems to be going to small connectors, with large ones proving awkward. Consider SATA versus IDE, for example. Someone who knew more about electronics than I do could probably write a fascinating article about all of the developments that made narrow-connector interfaces feasible and preferable to the old wide connector ones.)

HDMIDualDisplays written at 23:24:52

2012-02-05

What five years of PC technology changed for me

This fall I got a new home machine, a bit over five years after I got my previous home machine. It happens that I saved the invoice for my five year old machine, so I dug it out today in order to do a comparison of what five years of progress in PC technology did and didn't change for me.

First off, the progress of five years got me much better prices. My recent home machine cost me only about 60% of what my old home machine did. By itself, this is pretty impressive. Apart from that, running down the major components:

  • CPU: AMD dual core versus much faster Intel quad core. The Intel CPU was cheaper but not by a substantial amount; I think the AMD was probably closer to the high end at the time. I don't know what the benchmark results are, but I got a substantial performance improvement.

  • RAM: This is perhaps the most striking change on a purely numerical level; in 2006 I paid more than twice as much for 2 GB of RAM as 16 GB of RAM cost me in 2011. Even in 2006, 2 GB was clearly economizing (I remember debating with myself over 2 GB versus the extra money for 4 GB and deciding that 2 GB should be good enough). In 2011, 16 GB is as much as the motherboard will take with current DIMM densities.

    In short, desktop RAM has become stupid cheap.

    (One index of the change is that in 2006, the 2 GB of RAM cost more than the CPU and was the most expensive single component. In 2011, the 16 GB cost only a bit over half of the CPU.)

  • motherboard: the modern era features more SATA, less IDE, more USB, and not even one external serial port. Motherboards are unexciting. Even in 2006 the motherboard had onboard sound and gigabit Ethernet. The 2011 motherboard probably has better onboard sound, but in practice this doesn't matter to me; my sound needs are modest.

    (The 2006 motherboard was a bit cheaper than the 2011 motherboard, but neither were particularly expensive or advanced ones.)

  • Hard drives changed only moderately at one level; in 2006 I got 320 GB drives for somewhat over twice what 2011's 500 GB drives cost me. In 2011, 500 GB drives are nowhere near state of the art; in 2006, 320 GB drives were not that far out of it.

    (This was before the floods in Thailand.)

    On another level, they changed a lot. The 320 GB hard drives of 2006 were my only storage. The 500 GB drives of 2011 are only for the operating system; my data lives on a pair of 1.5 TB drives (that I had upgraded to some time ago). 500 GB is way overkill for the OS, but there's no real point in using drives that are any smaller; it's not like I'd have saved any significant amount of money.

  • Video card: ATI X800 GT versus ATI HD 5450 with double the memory for less than a third of the price. Tom's Hardware theoretically puts these two cards in almost the same performance category, although I'm not sure that's really true. In practice, what happened between 2006 and 2011 is that graphics cards shifted to the point where a basic passively cooled card was clearly more than good enough for what I was doing, even for driving dual displays digitally.

    (I don't yet have dual displays at home, but I do at work and my work machine uses the same card. In fact, my work machine is now a clone of my home machine, just as it was in 2006.)

  • optical drives: in 2006 a DVD burner cost about four times what it did in 2011, and I thought I would listen to CDs enough to justify having a separate CD/DVD reader (rather than put wear and tear on an expensive burner).

    (I was wrong; my CD listening had already dropped off a cliff in early 2006 and never recovered. I still kind of miss that sometimes.)

  • Power supply: in 2006 I didn't trust the power supply that came with the case to really be a good solid one that delivered enough power so I bought a separate one as well. In 2011 I couldn't find any reason to worry about it so I didn't; the power supply you get with a decent quiet case these days is going to be quite good, more than you need (for a PC like the kind I build), and efficient.

In 2006, the most expensive components were the RAM, the CPU, the two hard drives together, and then the video card. In 2011, the most expensive components were the CPU, the motherboard, and the case (more or less tying with the RAM). Another way to put it is that in 2011, the video card, the DVD burner, the hard drives, and pretty much the RAM were all what I considered trivial expenses in the overall machine.

FiveYearsPCChanges written at 02:28:22

2012-02-03

Understanding a subtle Twitter feature

One part of getting on Twitter has been following people, which led me to discover that when you follow someone Twitter doesn't show you all of their public tweets. To summarize what I think is the rule, Twitter excludes any conversations they're having that purely involve other people you don't also follow. Their tweets in the conversation will appear in their public timeline, but not in your view of their tweets.

(This may only apply to relatively new Twitter accounts, or even only to some of them. I've seen Twitter give two different interfaces to two new accounts.)
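
To pin the rule down, here is a minimal sketch in Python of the filtering as I understand it; the Tweet structure and its fields are purely illustrative, not Twitter's actual data model:

    from collections import namedtuple

    # A deliberately simplified, hypothetical tweet: who wrote it and who
    # (if anyone) it is a reply to.
    Tweet = namedtuple("Tweet", ["author", "reply_to"])

    def visible_in_timeline(tweet, following):
        if tweet.author not in following:
            return False
        if not tweet.reply_to:
            # ordinary, non-conversation tweets always show up
            return True
        # conversation tweets show up only if you also follow at least one
        # of the other participants
        return any(person in following for person in tweet.reply_to)

    following = set(["alice", "bob"])
    print(visible_in_timeline(Tweet("alice", ()), following))           # True
    print(visible_in_timeline(Tweet("alice", ("carol",)), following))   # False
    print(visible_in_timeline(Tweet("alice", ("bob",)), following))     # True

Following more people flips more of the third case from False to True, which is exactly the gradual ramp-up in conversation volume.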

On the one hand, when I discovered this I was infuriated. If you really did want to see everything (for example, so you could find other people to follow based on who your initial people had interesting conversations with), this made having a Twitter account worse than just perusing the Twitter pages of interesting people.

On the other hand, once I thought about it more I've come to reluctantly admire Twitter's trick with this feature. What it is, from my perspective, is a clever way to reduce the volume impact of following someone and thus make doing so less risky. Without it, following someone would immediately expose you to both their general remarks and to the full flow of whatever conversations they have. With Twitter's way, you are only initially exposed to people's general remarks; you ramp up your exposure to their conversations by following more people, and ramp it down by the reverse.

My feeling is that exposure to an overwhelming firehose of updates is the general problem of social networking. Social networks usually want you to be active and to follow lots of people. But if those people are themselves active, the more people you follow the more volume descends on you, and it's especially bad when you follow very socially active users, the ones having a lot of conversations. This creates a disincentive to follow people and pushes you to scale back. Twitter has this especially badly because it has no separate 'comment' mechanism (comments are important for reducing volume). Twitter's trick here is thus a clever way to reduce the firehose in a natural way that doesn't require user intervention and tuning; you could see it as a way of recreating something like comments in a system that doesn't naturally have them.

Once I realized this, it's certainly been working the way that Twitter probably intended. When I'm considering whether or not to follow someone I don't really look at the volume of their tweets in general; I mostly look just at the volume of their non-conversation tweets, because those are the only ones that I'm going to see. Often this makes me more willing to follow people (and thereby furthers Twitter's overall goal of getting me more engaged with their service).

TwitterVolumeLimit written at 22:48:37
