Wandering Thoughts archives


Web pages versus APIs, or my views on handling 'bad' requests

In a comment on my entry on giving up on being cautious in the modern web world, Alan wrote, in part:

I would claim POST to a GET-only URL deserves an error, since the server is potentially throwing away a whole message. [...]

This is absolutely true. In fact, we can be more general: if you accept a POST request that you don't know how to handle, you are throwing away the requester's information (in the POST body, the message headers, or both). If you return some sort of error, at least the requester knows that something went wrong and their data may be lost. So clearly we should always return errors for POST requests that we don't handle or don't know how to handle, including POSTs to GET-only URLs.

My answer is 'not always', because today on the modern web this is where I draw a personal, philosophical dividing line between (web) APIs and web pages.

When what you have is an API, well, part of a good API is detecting and properly responding to errors, because you can't assume that everyone invoking your API will always get it right. If people send you the wrong Content-Type, or POST to the wrong URL by mistake, or any number of other errors, you should tell them about it. If you fail to do this, what happens is at least partly your fault. Sure, the requester made a mistake in doing an improper operation, but you let your side of things down too.
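As a minimal sketch of this strict, API-style attitude (the endpoint shape, the JSON body, and the function name are my illustrative assumptions, not anything from the entry), every request that isn't exactly what the endpoint expects gets an explicit error status, so the caller learns their data was not accepted:

```python
import json

def handle_api_request(method: str, content_type: str, body: bytes):
    """Handle a hypothetical POST-only JSON API endpoint.

    Returns an (HTTP status, response body) pair. Anything that
    isn't a well-formed POST of JSON is rejected with an explicit
    error, rather than silently discarding the caller's data.
    """
    if method != "POST":
        # Wrong method: tell the caller instead of guessing.
        return 405, "Method Not Allowed (this endpoint only accepts POST)"
    if content_type != "application/json":
        # Wrong Content-Type: an explicit 415 tells them what went wrong.
        return 415, "Unsupported Media Type (send application/json)"
    try:
        json.loads(body)
    except (ValueError, UnicodeDecodeError):
        # Malformed body: reject it so the caller knows it was lost.
        return 400, "Bad Request (body is not valid JSON)"
    return 200, json.dumps({"status": "ok"})
```

The point of all the early returns is exactly the 'quality of implementation' obligation above: each kind of caller mistake gets a distinct, accurate error instead of being absorbed.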

When you have web pages, none of that applies. If people send you incorrect requests, what happens is entirely on their shoulders. You have no 'quality of implementation' obligation to do anything except what is most convenient in general, and as I grumbled about, on the modern web the most convenient thing is generally to just give people the ordinary web page. That's probably the closest and most useful thing to what they wanted, and if it's not it's not your problem.
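The web-page attitude can be sketched the same way (the page table, the paths, and the fallback behaviour here are my illustrative assumptions): whatever the request looks like, just serve the ordinary page, because that is the closest thing to what the requester probably wanted:

```python
# Hypothetical in-memory 'site' for illustration.
PAGES = {
    "/": "<html>front page</html>",
    "/entry": "<html>the entry</html>",
}

def handle_page_request(method: str, path: str) -> tuple[int, str]:
    """Serve the ordinary page for any request method.

    POST bodies, odd methods, and stray query strings are all
    ignored; the requester gets the page (or a 404) either way.
    """
    # Drop any query string or fragment a random client tacked on.
    path = path.split("?", 1)[0].split("#", 1)[0]
    page = PAGES.get(path)
    if page is None:
        return 404, "<html>no such page</html>"
    # GET, POST, whatever: the most convenient answer is the page itself.
    return 200, page
```

Note that a POST to '/entry' here gets the same 200 and the same HTML as a GET would; the method is simply not consulted.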

How do you know if you have an API or some web pages? In my view, it's whether or not people are expected to ever create URLs and requests by hand. If they are going to hand-craft requests more than once in a blue moon, you have an API. If they're just going to follow links in your HTML and copy links into other programs, you have a bunch of web pages. When you have web pages, on the one hand normal programs and people shouldn't be screwing up requests; on the other hand, we've seen that many systems decide they're free to randomly manipulate your URLs and expect the result to work (in some sense).

(You can't possibly predict all of the random things that such systems will throw at your web pages, so you will be dealing with requests you don't understand. Such requests are errors in one sense, but they are generally not accidents or mistakes; they have been done deliberately with the expectation that they'll get the same result from your web server that they get from other people's web servers, and that's usually 'the web page'.)

web/WebPagesVersusAPIs written at 00:53:02
