2009-03-28
Make your web application's interface elements obvious
This is a true story:
I periodically browse a few people's Twitter pages to look in on their tweets. Until a few days ago I did not know that the 'in reply to <whoever>' piece of the status information on replies was itself clickable and went directly to the original tweet; I only discovered this when Twitter choked on one of its periodic load problems and failed to serve me its usual CSS styling. This discovery was just short of electrifying, because suddenly conversations on Twitter became several orders of magnitude easier to follow.
(Previously I'd been clicking on the username of the person the reply was addressed to and browsing back through their tweets, trying to match up the original tweet by timestamp, which didn't always work and was in general a pain.)
There's no clue in the visual appearance of the status information that it is an active part of Twitter's user interface. To the extent that the design offers any clues, the fact that the entire status information is in small text and subdued colours signals that it is not very important. Yet something vital for practical usability is lurking there.
Presumably your web application's interface elements are there to be used (if not, one wonders why they're there at all). If you want people to use them, people need to know that they're there to start with, so you need to do something to draw attention to them, not just when the user gets their mouse pointer near enough but all of the time.
(This is related to my old entry on disappearing links, but there what I was talking about was links in things like article text, not interface elements.)
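To make the contrast concrete, here is a rough CSS sketch of the two approaches; the .tweet-meta class name is made up for illustration and is not anything Twitter actually uses:

    /* Hover-only affordance: the link is invisible until the mouse
       happens to land on it. */
    .tweet-meta a { color: inherit; text-decoration: none; }
    .tweet-meta a:hover { text-decoration: underline; }

    /* The alternative, an always-visible affordance: quietly but
       permanently marked as clickable. */
    .tweet-meta a { color: #336699; text-decoration: underline; }

The second version still keeps the status information small and subdued; it just no longer hides the fact that part of it is a link.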
Sidebar: my theory on why Twitter is doing this
In a spirit of charity, I can see a reason for Twitter to do this: they don't want to draw your attention away from the actual tweet text, because that text is the most important thing on the page. The tweet text is short and (almost) plain, so anything they do to make other things nearby more flashy will probably pull people's attention away from it.
(I still think that they could and should find some way to mark those interface elements; there must be some way to draw attention to them, just not too much attention.)
2009-03-18
An obvious thing about dealing with web spider misbehavior
Here's an important hint for operators of web spiders:
If I have to touch robots.txt or my web server configuration to deal with your spider's behavior, the easiest and safest change is to block you entirely.
It is therefore very much in the interests of web spider operators to keep me from ever having to touch my robots.txt file in the first place, because you are not Google.
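For concreteness, blocking a spider entirely is a two-line stanza in robots.txt (the spider name here is made up for the example):

    User-agent: ExampleBot
    Disallow: /

That it takes only two lines is exactly why it's the easiest and safest change.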
You should consider things like crawl-delay to be desperate last-ditch workarounds, not things that you routinely recommend to the people you are crawling.
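For reference, the directive looks like the following sketch; Crawl-delay is a nonstandard extension that only some spiders honour, its value conventionally read as the number of seconds to wait between requests, and the spider name is again invented:

    User-agent: ExampleBot
    Crawl-delay: 10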
(Yes, this means that web spiders should notice issues like this for themselves. The best use for crawl-delay that I've seen suggested is as a way to tell spiders to speed up, not slow down, their usual crawling rate.)
A corollary to this is that your spider information page should do its best to concisely tell people what they get out of letting you crawl their web pages; if people do have to change their robots.txt to deal with your spider, you want them to have as much reason as possible not to block you outright.
(You had better have a spider information page and mention it in your spider's User-agent. Ideally it will also rank high in searches for your spider's official name and for its User-agent string.)
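A common convention, and the one I would assume here, is to embed the information page's URL directly in the User-agent string so that anyone reading their server logs can find it immediately; the bot name and URL below are hypothetical:

    ExampleBot/1.0 (+http://www.example.com/bot.html)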
2009-03-15
A realization: planet aggregators have a natural size limit
For reasons that are too complicated to fit within the margins of this entry, I've recently been dipping into reading some planet blog aggregators. The experience has sparked a realization: planet aggregators have a natural, intrinsic size limit.
The example that crystallized this for me was considering the possible growth of Planet DCS. Between professors and graduate students, there are at least three hundred people here who could have blogs aggregated onto Planet DCS. If we assume that 200 of them take seriously the exhortations to blog and that each of them writes just one post every four days (and that the postings are evenly distributed in time), the planet will get 50 new entries a day. Not only is that a lot of entries to read a day, but it means that the planet's web page rolls over awfully fast; if you don't keep up, you miss things.
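To make the rollover arithmetic explicit, here is a back-of-the-envelope sketch in Python; the 25-entry front page size is my assumption for illustration, not a real Planet DCS figure:

    # Rough arithmetic on how fast a planet's front page rolls over.
    bloggers = 200                  # people actually posting
    posts_per_day = bloggers / 4.0  # one post every four days each
    page_size = 25                  # assumed entries on the front page

    hours_per_rollover = 24 * page_size / posts_per_day
    print(posts_per_day)            # 50.0 new entries a day
    print(hours_per_rollover)       # 12.0; the page turns over twice a day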
This is what I mean by planets having an intrinsic size limit. Because of their lack of history, you can only add so many people who are posting so frequently to a planet before it starts being unreadable. If you add enough people and they blog frequently enough, what you wind up with is not an aggregator but a random patchwork view of some recent activity, as the planet rolls over entirely several times a day.
This is kind of unfortunate; it means that if you have a fairly general planet that a lot of people could be aggregated to, its very success could kill it as a useful resource.
(Or the planet operator would be in the uncomfortable position of having to tell people that they couldn't be aggregated on a popular place merely because they hadn't got there early enough.)