Wandering Thoughts archives

2010-01-31

Thinking about syndication feeds and spoilers

DWiki has always had the ability to do the common blog thing of 'click here to see the rest of the entry'; when I put it in, I expected to use it for things like the detailed stats at the end of this entry. Because I am crazy that way, I built the feature so that it could apply on the main page (pages, really), in syndication feed entries, or both, depending on what options I turned on in any particular entry.

In practice, it turned out that I really don't like using cuts in syndication feed entries, for at least two reasons. First, syndication feed readers already have good ways to skip parts of entries and even whole entries (or at least they should), which makes cutting for volume mostly unnecessary. Second, partial entries are annoying in general because they effectively force you out of your syndication feed reader and into your browser in order to read the full entry.

(In fact it turns out that I don't like cuts very much in general, so I barely use them even on the main pages.)

However, this does leave one case unhandled: spoilers. Places like the anime blogging community have come up with decent Javascript-based solutions for people who are reading your main site, but this is a complete non-starter in syndication feeds. In fact you can't even count on the old 'set the colour of the text to the background colour' trick, as modern syndication feed readers can strip styling as well.

My reluctant conclusion is that handling spoilers may well call for using a cut even in syndication feeds, with the annoyance of having to click off to read the entry being the lesser of two evils. The other approach is just to note that there will be spoilers at the start of an entry and count on people to use their feed reader's 'skip to next entry' feature.
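
To make the first approach concrete, here is a minimal Python sketch of cutting a feed entry at a spoiler marker. The marker syntax and the function are invented for illustration; this is not DWiki's actual rendering code:

    # Hypothetical illustration: truncate an entry at a spoiler cut
    # marker when rendering it for a syndication feed. The marker and
    # the names here are made up; this is not DWiki's real code.
    SPOILER_CUT = "{{CutSpoilers}}"  # hypothetical in-entry marker

    def render_for_feed(entry_text, entry_url):
        """Return the feed version of an entry: everything before the
        cut, plus a link to the full entry if the entry was cut."""
        before, sep, _rest = entry_text.partition(SPOILER_CUT)
        if not sep:
            return entry_text  # no cut marker; ship the whole entry
        # The reader has to click through to see the spoilers.
        return before + ('<p><a href="%s">Read the rest (spoilers)</a></p>'
                         % entry_url)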

(Spoilers are not generally relevant to WanderingThoughts, but they sometimes come up for me elsewhere.)

CutsInSyndicationFeeds written at 01:39:52

2010-01-29

A theory about Apple's new iPad

Like a lot of other people, I'm not very interested in the iPad myself for all of the obvious reasons for an open source person who likes doing random things to his computers (see Tim Bray for a representative example). But I have a theory about what Apple is up to here, and it goes like this:

The iPad is a computer for people who do not like computers, not a computer for people who like computers.

The problem with making computers for people who like computers is that it is more and more a limited market with little potential for real growth. Most people who like computers already have one (or several), and these computers are generally pretty adequate ones. Without major technology improvements to obsolete existing hardware on a regular basis, you are down to getting your sales from a moderate stream of new people, people replacing worn-out machines, and whatever market share you can steal from your competitors (who are all trying to steal your market share in turn).

(In short, selling computers to people who like computers has become a mature market. Mature markets are boring and unspectacular, and companies in mature markets don't grow much.)

Selling computers to people who do not like computers is much easier; either they don't have a computer yet or they don't much like the one that they have. This is a growth market, potentially a very large one, provided that you actually have a computer that these people will like. Which is where the iPad's restrictions come into the picture.

People who do not actively like computers do not care about a lot of the computer stuff; they just want the computer to do things for them. All of the fiddling around that is necessary (even on a Mac) to get the computer to do things and to keep it doing those things is an annoyance to these people, and if you want to sell to them you need to make as much of it vanish as possible. Apple doesn't intrinsically need a closed and controlled box to do this, but it does need something that just works, all the time, and getting that is less effort with a closed box than with an open one (and it's in Apple's inclinations anyways). And Apple is very good about making the magic work.

Netbooks made vague attempts at this market, but they failed to be sufficiently appealing, i.e. sufficiently different from the same old computer that these people don't like. Apple is not making that mistake; it has targeted this market as the iPad's primary market, so the iPad's limitations are entirely deliberate and consciously thought out. And Apple is doing this in order to tap into another major market and that market's explosive growth potential, just as it did with the iPod and the iPhone.

(I'm not the only person thinking along these lines; see here, here, John Gruber, and here, for a random sampling pulled from Hacker News's front pages. Also, my thinking about this owes a debt to Thom Hogan's writing on dpreview about what camera companies will need to do in order to keep growing despite the DSLR market maturing and flattening.)

Sidebar: on some iPad restrictions

Lack of a physical keyboard probably isn't going to be much of a drawback for the 'don't like computers' market, because I suspect that they don't spend all that much time typing away on a personal computer, or even interacting with it (which will help avoid the well-studied fatigue effects of long-term touchscreen usage). Similarly, not being able to run multiple applications at once sounds awfully like a feature, not a flaw, since it avoids all sorts of confusions and annoyances and likely mimics how these people already prefer to use computers (especially ones with relatively small screens).

The appeal of a single, well-designed, and simple place and process to get applications that just work should hopefully be obvious. The applications may not be worth your money, but people waste money all the time; what they won't be is dangerous to the overall experience.

(Think of it like iTunes. What you get may turn out to be bad music, but the experience of doing it is pretty decent and the music will always actually play. This is a lot different from the experience of getting either digital music or software in a more open environment.)

IPadTheory written at 02:58:36

2010-01-26

Why the modern age is great

In some quarters, it is popular to grumble about how much better things were back in the good old days of computers, Unix, or whatever. I don't hold with this view at all; I think that the modern age is great, and now I'll tell you one reason why.

I started working with Unix systems what is now quite a long time ago. Back in those days, you couldn't get SCCS without paying extra money to AT&T (and sometimes you couldn't get it at all, because your Unix vendor hadn't paid extra for it themselves), and you couldn't get RCS without a Unix source license, because RCS needed a customized version of diff (and there was no 'GNU diff'). Thus, if you had a source license, you generally used RCS and counted yourself lucky.

(Certainly universities pretty much didn't have the budget to buy extras from AT&T. Source control? That was a luxury, we weren't commercial developers, we hardly needed that. (AT&T's mad unbundling of the useful pieces of Unix is another rant entirely.))

Contrast that to today, where I find myself vaguely agonizing over which highly sophisticated distributed version control system to use. This shows both how far we've come and how plain nice our computing environments have become; when the big issue is just how awesome my version control system is going to be, we're doing pretty well.

(System administration has been going through this for decades. Huge swaths of boring routine work that people did even fifteen years ago are now completely gone, at least if you're not using Solaris 10.)

ModernAgeGreatness written at 01:30:08

2010-01-20

One of the things that killed network computers (aka thin clients)

Here is a thesis about network computing's lack of success:

The compute power to deliver your applications has to live somewhere, whether that is in the machine in front of you or on a server that sits in a machine room somewhere. It turns out that the cost of delivering compute power in one box does not scale linearly; at various points, it turns up sharply. For various reasons, there is also a minimum amount of computing power that gets delivered in boxes; it is generally impossible to obtain a box for a cost below $X (for a moderately variable $X over time), and at that price you can get a certain amount of computing.

The result of these two trends is that it is easier and more predictable to supply that necessary application compute power in the form of a computer on your desk than as a terminal (a 'network computer') on your desk plus 1/Nth of a big computer in the server room. The minimum compute unit that you can buy today is quite capable (we are rapidly approaching the point where the most costly component of a decent computer is the display, and you have to buy that either way), and buying N times that minimum compute power in the form of a compute server or three is uneconomical by comparison. This leaves you relying on over-subscribing your servers relative to theoretical peak usage, except that sooner or later you will actually hit peak usage (or at least enough usage) and then things stop working. You wind up not delivering predictable compute power to people, power that they can always count on having.

(This issue hits much harder in environments where there is predictable peak usage, such as undergraduate computing. We know that sooner or later all undergraduate stations will be in use by people desperately trying to finish their assignments at the last moment.)
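
To make the economics concrete, here is a back-of-the-envelope Python sketch. Every price in it is deliberately invented; the point is the shape of the argument, not the figures:

    # Illustrative comparison with invented numbers: per-box server cost
    # rises sharply past a point, while the cheapest adequate desktop
    # keeps getting more capable.
    USERS = 100
    DESKTOP_COST = 500       # hypothetical: cheapest adequate desktop
    DISPLAY_COST = 200       # hypothetical: you buy a display either way
    THIN_CLIENT_COST = 300   # hypothetical: terminal hardware
    # Server compute does not scale linearly: one box that can serve
    # 100 people at peak costs much more than 100x a desktop's CPU.
    BIG_SERVER_COST = 40000  # hypothetical: sized for genuine peak load

    desktops = USERS * (DESKTOP_COST + DISPLAY_COST)
    thin_clients = USERS * (THIN_CLIENT_COST + DISPLAY_COST) + BIG_SERVER_COST

    print("desktops:     $%d" % desktops)      # $70000
    print("thin clients: $%d" % thin_clients)  # $90000

The exact numbers don't matter; what matters is that the server's non-linear cost swamps whatever you save on the terminals, while the desktops deliver guaranteed per-person compute power.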

I don't think that cloud computing is going to fundamentally change this, because cloud computing still does a significant amount of work on the clients and probably always will. (In fact I think that there are strong economic effects pushing cloud computing applications to put as much of the work on the client side as they can; the more work you can have the client browser do in various ways, the less server computing power you need.)

(This was somewhat sparked by reading this.)

NetworkComputingLocation written at 02:23:15

2010-01-04

Some thoughts on battery backup for RAID controller cards

In a comment on my entry on software RAID's advantages I was asked what I thought about software RAID's lack of battery backup units, as you can get on better RAID controller cards. To answer that, I'm going to start by asking my traditional question: how does having a BBU RAID card improve your system performance?

A RAID card with a battery backup unit effectively turns synchronous disk writes into asynchronous ones, by buffering such writes in its battery-backed RAM and immediately telling the host OS that the write has completed successfully. In order for this to improve performance, you have to be doing enough synchronous writes to stall your system significantly. It helps if they're relatively slow synchronous writes; the classical case is small synchronous writes to a RAID-5 array, where the small OS-level write actually turns into a couple of reads and a couple of writes.
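
As a rough illustration of what 'synchronous' means at the OS level, here is a small Python sketch. With a BBU card, the fsync() below returns as soon as the data is in the card's battery-backed RAM, instead of waiting for the actual disks:

    import os

    def write_async(path, data):
        # Ordinary buffered write: the OS is free to defer the real
        # disk IO until later.
        with open(path, "wb") as f:
            f.write(data)

    def write_sync(path, data):
        # Synchronous write: fsync() blocks until the storage stack
        # says the data is stable. On bare disks that means waiting
        # for the platters; with a BBU cache, 'stable' is the card's
        # battery-backed RAM, so this returns almost immediately.
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())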

(There is also a limit to how much a BBU can improve your performance, especially your sustained performance; at a certain write load you hit your disk's performance limits, either in write bandwidth for streaming writes or in IO operations a second for random writes. If you need to do a sustained 500 random writes per second to a single physical disk, no BBU can help you.)
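
Here's a toy Python model of that limit, again with invented numbers; once writes arrive faster than the disk can retire them, the battery-backed cache fills up and you are back to raw disk speed:

    # Toy model of why a BBU cache can't fix a sustained overload.
    CACHE_WRITES = 4096  # hypothetical: cache capacity in queued writes
    DISK_IOPS = 150      # hypothetical: random writes/sec for one disk
    INCOMING = 500       # offered load in writes/sec

    backlog = 0
    for second in range(60):
        backlog += INCOMING - DISK_IOPS  # a net 350 writes/sec pile up
        if backlog >= CACHE_WRITES:
            print("cache full after ~%d seconds; now limited to %d IOPS"
                  % (second + 1, DISK_IOPS))
            break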

In general, most write workloads are mostly asynchronous; however, there are certainly some that are highly synchronous (database operations, mail servers, anything that does a lot of file write operations over NFS, etc). These days, operating systems are very good at not forcing synchronous writes unless they feel that they really have to, because filesystem authors fully understand that synchronous writes are death to performance. (Sometimes they go overboard in this.)

In exchange for this synchronous write acceleration, you accept a number of potential drawbacks. Obviously BBU RAID cards cost more, you have to use hardware RAID to some degree, the battery backup only lasts so long (although I believe it commonly lasts for days), and the RAID controller has to lie to the host OS about the write being successful. The latter may especially be an issue if you want to use the hardware RAID controller purely for JBOD-with-BBU, and do your actual RAID in software (such as with ZFS); there you would really like the OS level to find out about write errors.

These days, there are often higher-level options than BBU hardware RAID even if you have a lot of synchronous writes. For example, it's increasingly common for filesystems to let you put their logs on very fast disk storage (either SSDs or small fast conventional disks), and this can drastically accelerate synchronous filesystem writes.

(I was going to say that you could always put a UPS on the system as a whole, but that doesn't really solve the problem of synchronous writes unless you tell the operating system to lie about them.)

(Disclaimer: this is partly me thinking through this out loud. As I don't have actual experience with BBU hardware RAID, I could be completely off base on some of this.)

Sidebar: synchronous writes and us

On our workload of aggregate general fileservice, I don't think we have any particular density of system-stalling synchronous writes. While synchronous NFS activities can stall individual filesystems (and possibly individual ZFS pools, since I'm not entirely sure how the ZFS synchronous write process works), we have enough filesystems and ZFS pools and disks that the overall fileservers will continue going along without most people noticing anything.

As such, we don't worry about BBU issues, and in fact we deliberately configure our iSCSI backends to not have write caching, despite them being on UPSes and being considered reliable black boxes.

BatteryBackedRaidThoughts written at 23:18:13

