The continuity of broad systems or environments in system administration
What I'm going to call system continuity, for lack of a better phrase, is the idea that some of the time in some places, you can trace bits of your current environment back through your organization's history. You're likely not still using the same physical hardware, you're probably not using the same OS version and possibly not even the same OS, the software may have changed as may have how you administer it, but you can point at elements of how things work today and say 'they're this way because N years ago ...'. To put it one way, system continuity means that things have a lineage.
As an example, you have some system continuity in your email system if you're still supporting and using people's email addresses from the very first mail system you set up N years ago, even though you moved from a basic Unix mailer to Exchange and now to a cloud-hosted setup. You don't have system continuity here if at some point people said 'we're changing everyone's email address and the old ones will stop working a year from now'.
You can have system continuity in all sorts of things, and you can lack it in all sorts of things. One hallmark of system continuity is automatic or even transparent migrations as far as users are concerned; one marker for a lack of it is manual migrations. If you say 'we've built a new CIFS server environment, here are your new credentials on it, copy data from our old one yourself if you want to keep it', you probably don't have much system continuity there. System continuity can be partial (or perhaps 'fragmented'); you might have continuity in login credentials but not in the actual files, which you have to copy to the new storage environment yourself.
(It's tempting to say that some system continuities are stacked on top of each other, but this is not necessarily the case. You can have a complete change of email system, including new login credentials, but still migrate everyone's stored mail from the old system to the new one so that people just reconfigure their IMAP client and go on.)
Not everyone has system continuity in anything (or at least anything much). Some places just aren't old enough to have turned over systems very often; they're still on their first real system for many things and may or may not get system continuity later. Some places don't try to keep anything much from old systems for various reasons, including that they're undergoing a ferocious churn in what they need from their systems as they grow or change directions (or both at once). Some places explicitly decide to discard some old systems because they feel they're better off re-doing things from scratch (sometimes this was because the old systems were terrible quick hacks). And of course some organizations die, either failing outright or being absorbed by other organizations that have their own existing systems that you get to move to. Especially in today's world, it probably takes an unusually stable and long-lived organization to build up much system continuity. Unsurprisingly, universities can be such a place.
(Within a large organization like a big company or a university, system continuity is probably generally associated with the continuity of a (sub) group. If your group has its own fileservers or mail system or whatever, and your group gets dissolved or absorbed by someone else, you're likely going to lose continuity in those systems because your new group probably already has its own versions of those. Of course, even if groups stay intact there can be politics over where services should be provided and who provides them that result in group systems being discarded.)
Why I care about Apache's mod_wsgi so much
I made a strong claim yesterday in an aside: I said that Apache with mod_wsgi is the easiest and most seamless way of running a Python WSGI app, and thus it was a pity that it doesn't support using PyPy for this. As I have restated it here, this claim is a bit too strong, so I have to start by watering it down. Apache with mod_wsgi is definitely the easiest and most seamless way to run a Python WSGI app in a shared (web) environment, where you have a general purpose web server that handles a variety of URLs and services. It may also be your best option if the only thing the web server is doing is running your WSGI application, but I don't have any experience with such environments.
(I focus on shared web environments because none of my WSGI apps are likely to ever be so big and so heavily used that I need to devote an entire web server to them.)
Apache is a good choice as a general purpose web server in the first place, and once you have Apache, mod_wsgi makes deploying a WSGI application pretty straightforward. Generally all you need is a couple of lines of Apache configuration, and you can even arrange to have your WSGI application run under another Unix UID if you want (speaking as a sysadmin, that's a great thing; I would like as few things as possible to run as the web server UID). There's no need to run, configure, and manage another daemon, or to coordinate configuration changes between your WSGI daemon and your web server. Do you want to reload your app's code? Touch a file and it happens, you're done. And all of this lives seamlessly alongside everything else in the web server's configuration, including other WSGI apps also being handled through mod_wsgi.
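To make this concrete, here is a sketch of what that couple of lines of configuration can look like, using mod_wsgi's daemon mode (all of the names, paths, and UIDs here are made up for illustration):

```apache
# Run the app in its own daemon processes under a separate Unix UID,
# not as the web server's UID. 'myapp', 'appuser', and the paths are
# hypothetical; adjust to taste.
WSGIDaemonProcess myapp user=appuser group=appgroup processes=2 threads=5
WSGIScriptAlias /myapp /var/www/myapp/app.wsgi

<Directory /var/www/myapp>
    WSGIProcessGroup myapp
    WSGIApplicationGroup %{GLOBAL}
    Require all granted
</Directory>
```

In daemon mode, touching the .wsgi script file (here /var/www/myapp/app.wsgi) is what triggers the code reload; no Apache restart is needed.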
As far as I know, every other option for getting a WSGI app up and running is more complicated, sometimes fearsomely so. I would like an even simpler option, but until such a thing arrives, mod_wsgi is as close as I can get (and it works well even in unusual situations).
I care about WSGI in general because it's the broadly right way to deploy a Python web app. The easier and simpler it is to deploy a WSGI app, the less likely I am to just write my initial simple version of something as a CGI and then get sucked into very peculiar lashups.
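For anyone who hasn't written one, the WSGI side of this is pleasantly small; an app is just a callable taking an environ dict and a start_response function. A minimal sketch (mod_wsgi looks for a callable named 'application' by default):

```python
# A minimal WSGI application. The callable receives the request environ
# and a start_response function, calls start_response with the status
# and headers, and returns an iterable of byte strings for the body.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    status = "200 OK"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    return [body]
```

Dropped into a .wsgi file behind a WSGIScriptAlias, this is a complete deployable app, which is part of why the low deployment overhead matters so much.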