The risks of using CentOS are split
Recently (for my value of recently) I wound up reading Matt Simmons' entry, CentOS 6 - Great, but for how long?, in which he worries about the delay in CentOS releases compared to RHEL, especially the CentOS 6 release, and what happens if CentOS updates stop being timely. My reaction is that there are two very different risks being conflated here, because major releases like CentOS 6 are not at all like updates to existing releases.
CentOS is strongly based on RHEL, to the point where it tries for binary compatibility, and Red Hat publishes source RPMs for RHEL and RHEL updates (see here). This makes updates for existing CentOS releases a relatively low risk thing; if CentOS flakes out, in theory you can simply take the RHEL source RPMs for the updates, rebuild them, and install them (you may need to update some explicitly declared dependencies if there are slightly different package release numbers). As far as I know, RHEL point releases are simply a rolled up set of current package updates, so building your own CentOS point release and updating machines to it is equally easy.
(We certainly have 'RHEL 5.x' machines that were installed as RHEL 5.0 machines and have been updated ever since then. It is possible that they are slightly different than what we'd get if we reinstalled them from a RHEL 5.x ISO image, but if this is the case I actually consider it a bug in the RHEL 5.x ISO image and I would be just as happy to avoid it.)
The CentOS people themselves have to do somewhat more work than this, but that's because they need to publicly distribute the result of their work and so they need to strip out Red Hat's trademarks and other branding and add their own. If you just use your rebuilt RHEL RPMs internally, you're unlikely to care about this issue and so you don't need to do this time consuming work.
(You will have to keep track of Red Hat security updates yourself and so on, though.)
Major version updates (such as creating CentOS 6) are the slow, risky thing. Because they are major version updates you can't just rebuild the RPMs in an existing environment to get the same results if CentOS is too slow, and bootstrapping an entire Linux distribution is somewhat tricky (especially if you're trying to be binary identical to another one). However, these major version 'updates' are exactly the point where it's easiest to migrate to a different base distribution, because as far as I know Red Hat still doesn't support major version to major version updates (and thus neither does CentOS); the official way to move from RHEL 5 to RHEL 6 (or CentOS 5 to CentOS 6) is to reinstall your machines, and if you're reinstalling anyways you can (re)install anything that you want to.
(By the way, this bootstrapping difficulty is likely to be one major reason that CentOS is slow with major version updates.)
PS: from my perspective, the important delay to track for a RHEL derived distribution is the delay between Red Hat's release of a security update and when the derived distribution releases its own updated binary RPM. This is the delay that you really care about if you're running the derived distribution. I am not sufficiently energetic and interested to try to generate these numbers for, eg, CentOS or Scientific Linux.
(I care mostly about security updates because while plain bugs matter, they are generally far less of a problem if your distribution is slow. And the delay in releasing the RPMs that brand your machine as being a new point release (eg, 5.x for some new x) is the least important delay of all.)
(To answer a potential question: per WhatLinuxDistributions, we use RHEL here because we have a campus-wide site license for it. As a result I have not looked in detail at any RHEL derived distribution.)
A little thing that irritates me about common WSGI implementations
One of the issues that WSGI has to deal with is the question of how to connect a WSGI application and a WSGI server together, or to put it the other way, how to tell a WSGI server what to do to actually invoke your application. The WSGI specification specifically does not cover this issue, considering it a server specific matter. However, something of a standard seems to have grown up in WSGI implementations; you supply a module name (or sometimes a file), and the WSGI server expects to import this and find a callable application object that is your WSGI application.
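Concretely, the convention looks something like this (a minimal sketch; the module and object names are illustrative, though `application` is the name most servers look for by default):

```python
# myapp.py -- the conventional WSGI entry point: the server imports this
# module and grabs the top-level object named 'application'.
def application(environ, start_response):
    # A minimal WSGI callable: takes the request environment and a
    # start_response callback, returns an iterable of byte strings.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]
```

The server then does the moral equivalent of `import myapp; app = myapp.application` and calls `app` for each request.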
This makes me twitch.
One of the basic rules of writing sensible Python code is that modules should do nothing when simply imported. To the extent that they absolutely must run code (instead of simply defining things), it should be minimal and focused on things like getting their imports set up right. Among other things, the more active code you run at import time, the harder it is to diagnose failures and bugs by importing your module and selectively running code from it; the moment you look at your module, it explodes.
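To illustrate the rule (with a made-up module of my own, not from any real codebase):

```python
# Active version: merely importing a module like this does real work.
import sqlite3

conn = sqlite3.connect(":memory:")   # runs the moment the module is imported;
                                     # a failure here breaks the import itself

# Passive version: importing only defines things, and nothing happens
# until you actually call the function.
def get_connection(path=":memory:"):
    return sqlite3.connect(path)
```

With the passive version you can import the module in an interactive session and poke at it; with the active version, importing it already committed you to (and depends on) the setup work.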
Creating a properly configured WSGI application generally requires running code, sometimes quite a lot of code. Yet you must have an application object ready at the end of the import. There is a conflict here, and none of the resolutions to it make me very happy.
(I can think of at least three; you can run all of that code at import time, you can write a more complex application front end that defers running it all until the first request, or you can pretend that you can configure your application through entirely passive Python declarations. The last is the route that Django takes, and I think that it has a number of bad effects.)
What would be a great deal better is if WSGI implementations had instead standardized a function that you called in order to get a callable application object. This would allow you to have a pure and easily importable set of modules while still doing all of your configuration work before the first request came in (instead of stalling the first request while you frantically do all of the work to configure things).
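Such a factory-style entry point might look like this (the function name and config argument are hypothetical; no WSGI server standardizes them):

```python
# Instead of grabbing a module-level 'application' object, the server
# would call make_application() once, after importing the module.
def make_application(config=None):
    # All configuration work happens here, after import but before
    # the first request is served.
    settings = dict(config or {})

    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [settings.get("greeting", b"hello\n")]

    return application
```

The module itself stays passive and importable, yet the setup cost is still paid up front rather than on the first request.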