The other peculiar effects of grant funding at universities
A long time ago, I wrote about the power that grant funding gives people at universities. But there's a flip side to grant funding, and it is that people with grant funding often don't really have money as the business world thinks of it.
From the outside, it looks like people with grant funding are rolling in cash; they get hundreds of thousands of dollars, or even million-dollar grants. From the inside, though, that money is almost entirely tied down for very specific things. Professors do not get to go to grant agencies, tell them 'I would like to do this promising research; it will take about $200K', and walk away with $200K in their research account that they can spend on anything that's necessary to do the research. Instead, both the grant requests and the grant approvals allocate all of that money to quite specific things; so much for buying servers, so much for storage, so much for network switches, so much to pay two people for a year, and so on.
So far, this may sound just like the budgeting process for a department in a company. But here's the kicker for grant funding: you are legally required to only spend the money on what it was approved for. Does it turn out that two people for a year isn't what you actually needed, or you need more servers and less storage than you thought? Do you have a sudden emergency need for money in some other area of the project? Tough. You're pretty much stuck. There is no spending the money on what you need now and justifying it later, or even going to your boss and saying that you'd like to shift the specific allocations around and here's why.
(Naturally there is an entire cottage industry of figuring out how to slide what you really need into the grant's funding categories in a way that will pass auditing, if you ever get audited. For example, just how much disk space does a server have to have before you can say with a straight face that you bought it for storage, not as a compute server?)
One thing that combines somewhat unhappily with this is that grant agencies generally have restrictions on what sort of things they will fund. There is of course an art to describing what you really need in a way that the grant agency will approve funding for and that you can spend the resulting money on with a straight face.
(Sometimes they also effectively have restrictions on who you can buy from, where in theory you can buy from any vendor that is willing to go to the effort but in practice only a few vendors are interested enough to brave the bureaucracy.)
There are sources of relatively unconstrained grant funding, but they are generally not very large when compared to the constrained sort. Generally all of the big-ticket grants that sound so impressive are going to come with lots of restrictions on what that money can actually be used for.
(Ie, it is not so much money as somewhat fuzzy things that haven't shown up on the loading dock yet.)
It's the indirect failure modes that will get you
The University of Toronto's Internet link went down recently (well, became really slow and lossy, so we may just be being DDoS'd or something). I'm at home, so when I noticed the link problems I shrugged and carried on; it's not as if my home machine depends on stuff from work, so I didn't expect anything beyond the annoyance of not being able to get to work networks.
(Although the network being unreachable was going to be somewhat inconvenient, since I had a WanderingThoughts entry to write.)
Except that all of my web browsing was achingly slow. Epically, totally slow. Pages would only come up very slowly, or come up but the browser would say they were still loading. This was quite puzzling; my network link wasn't busy and it's not as if I proxy my web traffic through work. A check of my DNS setup confirmed that I was using my local caching DNS server and that server wasn't bouncing everything through work.
And then I looked at my DNS server's query logs:
[...] query [...] www.flickr.com.cs.toronto.edu.
[...] query [...] www.flickr.com.toronto.edu.
[...] query [...] www.flickr.com.
An uncomfortable light dawned. I had work's domains configured as my search domain list in /etc/resolv.conf and I had the ndots option set very high (for bad reasons), so every hostname resolution attempt was trying several university domains first. Normally I don't notice these because I promptly get negative answers from work's nameservers, but with the university's Internet link down those queries instead had to time out before the lookup could move on to trying the real name.
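As an illustration, a resolv.conf along these lines would reproduce the behaviour (this is a hypothetical reconstruction with the domains from the query log above, not my actual file):

```
# /etc/resolv.conf -- illustrative sketch, not the real configuration
nameserver 127.0.0.1
search cs.toronto.edu toronto.edu
# With a large ndots, even names that already contain dots get the
# search domains appended and tried before the name is tried as-is.
options ndots:5
```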
It turns out that modern web pages use a lot of different things from a lot of different domains. When each of these domains takes plural seconds to resolve, loading pages gets really slow. Slow on the initial load (as the browser resolves the actual website IP address) and then slow to finish, as the browser tries to fetch additional resource after additional resource.
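To put rough numbers on this (all of them my illustrative assumptions: a 5-second resolver timeout per query, two search domains tried before the real name, and a page pulling resources from 20 distinct domains), a back-of-the-envelope calculation shows how quickly the waiting adds up:

```python
# Rough estimate of extra page-load delay caused by search-domain
# queries timing out. All numbers are assumptions, not measurements.
TIMEOUT_SECS = 5        # assumed resolver timeout per dead query
SEARCH_DOMAINS = 2      # e.g. cs.toronto.edu and toronto.edu
DOMAINS_PER_PAGE = 20   # distinct domains a modern page might touch

def extra_delay(timeout, search_domains, domains):
    # Each domain's first lookup must wait for every search-domain
    # query to time out before the real name is even attempted.
    return timeout * search_domains * domains

print(extra_delay(TIMEOUT_SECS, SEARCH_DOMAINS, DOMAINS_PER_PAGE))  # 200
```

Even with DNS caching meaning each domain only pays this cost once, that is minutes of accumulated stalling on a single page under these assumptions.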
This isn't a direct failure mode, where I was routing traffic through work; instead it was an indirect failure mode, where a couple of configuration options had an inobvious effect that was itself relatively invisible in normal operation. Direct failure modes are easy to see and relatively easy to remember; you can, for example, see that all of your traffic goes over your VPN to work, a VPN that is not working. Indirect failures are much less obvious and so are much more interesting (in the sense of causing excitement) and harder to notice in advance.
Many years ago when I first ran into the ndots option in resolv.conf, either it behaved differently than it does today or I just wound up with a mistaken impression about how it works. Back then, I believed that queries for names with at least ndots dots in them entirely ignored the resolv.conf search path and only ever looked up the absolute hostname. Since we love using abbreviated hostnames around here and local subdomains can have any number of dots in them, this implied that essentially no small value of ndots was safe. Thus I set a very large one and grumbled, and carried all of this forward when I configured my own machines.
This is not how ndots works today; today, ndots just sets the point at which the resolver will try an absolute hostname before trying your search path, instead of trying the absolute hostname only after running all the way through it. This is safe, and implies that an ndots of 2 is generally what I want (since I make frequent use of '<host>.<subdomain>' to refer to various machines at work).
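A sketch of this modern behaviour, as I understand glibc-style resolvers to order their candidate queries (the function and domain names here are illustrative, and the trailing-dot convention is just for display):

```python
def candidate_names(name, search_list, ndots=2):
    """Return the order in which a glibc-style resolver tries names.

    If the name contains at least `ndots` dots, the absolute name is
    tried first and the search list afterwards; otherwise the search
    list comes first. A sketch of the behaviour, not a reimplementation.
    """
    if name.endswith("."):          # already fully qualified; no search
        return [name]
    absolute = name + "."
    searched = [name + "." + dom + "." for dom in search_list]
    if name.count(".") >= ndots:
        return [absolute] + searched
    return searched + [absolute]

search = ["cs.toronto.edu", "toronto.edu"]
# Two dots, ndots=2: tried as an absolute name first.
print(candidate_names("www.flickr.com", search))
# One dot: the search list is walked before the absolute name.
print(candidate_names("host.subdomain", search))
```

With ndots set to 2, 'www.flickr.com' goes straight out as a real query, while a short local name like 'host.subdomain' still gets the convenient search-path expansion.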