SCGI versus FastCGI
May 7, 2006
SCGI and FastCGI are both CGI 'replacements', in that they are protocols for forwarding HTTP requests to persistent daemons instead of starting a possibly big, heavyweight program for each request. Ideally your web server will have a built-in gateway for them; less ideally you can run a tiny, fast-to-start CGI program to talk the protocol to your persistent daemon. There are discussions around the Internet about which one is better; you can find people on both sides, and to some extent it depends on what your web server supports best (lighttpd seems to prefer FastCGI, for example).
(This is a good overview of the whole subject and its history.)
From my perspective SCGI is the clear winner, for a basic reason: SCGI is simple enough to actually implement.
FastCGI is a very complicated protocol (see here), with all sorts of features and a bunch of worrying about efficiency. SCGI is dirt simple; the specification is only 100 lines long, and you can implement either end of it in the language of your choice in an hour or so. Since I'm not just plugging existing components together, this difference is important.
(Some people have even reported that SCGI's simplicity means that it runs faster than FastCGI in practice.)
For all the extra complexity of FastCGI, all I seem to get is the ability to send standard error back to the web server in addition to standard output. I think I can live without that. (Of course it's hard to tell if that's all, since the FastCGI specification is so large.)
I like to think that there's a general idea lurking here: simple protocols and things are often better because they are simpler to use and to integrate into environments. It's rare that existing pieces (that support some complicated protocol) are all perfect fits for what people want to do; when you have to adapt things, the simpler they are, the easier it is.
Sidebar: what about plain old HTTP?
There are some opinions that the real answer is for the persistent daemon that does the real work to just speak HTTP directly and have requests proxied to it. I am personally dubious about this; I would much rather delegate the job of dealing with all of the tricky, complex bits of HTTP to a dedicated program, i.e. the web server. Ian Bicking has another set of reasons for coming to the same conclusion.
(Another person arguing the non-HTTP side, for reasons pretty similar to mine, is here.)