2015-08-29
The mailing list thread to bug tracking system problem
I will start with the thesis: open source projects would benefit from a canonical and easy way to take a mailing list message or thread and turn it into an issue or bug report in your bug tracking system.
It's entirely natural and rather normal for a bug report to start with someone uncertainly asking 'is this supposed to happen?' or 'am I understanding this right?' questions on one of your mailing lists. They're not necessarily doing this because they don't know where to report bugs; often they're not sure that what they're seeing is a bug (or at least a new bug), or they don't know how to file what your project considers a good bug report and don't want to take the hit of a bad one. It's usually easier to ask questions on a mailing list, where some degree of ignorance is expected and accepted, than to venture into what can be a sharp-edged swamp of a bug tracker.
If the requester is energetic, they'll jump through a lot of extra hoops to actually file a bug report once they've built up their confidence (or just been pointed to the right place). But in general, the more difficult the re-filing process is, the fewer bug reports you're going to get; the easier it is, the more you'll get.
This leads me to my view that most open source projects make this too hard today, usually by having no explicit way to do it because their mailing list systems and bug tracking systems are completely separate things. Maybe this separation can be overcome through cultural changes, so that brief pointers to mailing list messages or cut-and-paste copies of mailing list threads become acceptable as bug reports.
(My impression, perhaps erroneous, is that most open source projects want you to rewrite your bug reports from more or less scratch when you make them. The mailing list is the informal version of your report, the bug tracker gets the 'formal' one. Of course the danger here is that people just don't bother to write the formal version for various reasons.)
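To make 'easy' concrete, the glue involved doesn't have to be big. As a minimal sketch of what a 'promote this message to an issue' helper could look like (using GitHub's issue-creation API purely as an example tracker; the repository name, token, and archive URL here are all hypothetical placeholders):

    # Hypothetical glue: turn a mailing list archive URL into an issue
    # via GitHub's 'create an issue' API (POST /repos/{owner}/{repo}/issues).
    # The repository name, token, and message URL are made-up placeholders.
    import requests

    def issue_from_list_message(repo, token, subject, archive_url):
        resp = requests.post(
            'https://api.github.com/repos/%s/issues' % repo,
            headers={'Authorization': 'token %s' % token},
            json={'title': subject,
                  'body': 'Originally reported on the mailing list: %s'
                          % archive_url},
        )
        resp.raise_for_status()
        # Return the web URL of the newly created issue.
        return resp.json()['html_url']

    print(issue_from_list_message(
        'example/project', 'YOUR-API-TOKEN',
        'Is this supposed to happen?',
        'https://lists.example.org/pipermail/dev/2015-August/012345.html'))

The specific tracker doesn't matter; the point is that the step from 'message in the archive' to 'issue that links back to it' can be one command instead of a from-scratch rewrite.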
PS: I admit that one reason I've wound up feeling this way is that I'm currently sitting on a number of issues that I first raised on mailing lists and still haven't gotten around to filing bug reports for. And by now some of them are old enough that I'd have to reread a bunch of stuff just to recover the context I had at the time and understand the whole problem once again.
2015-08-24
PS/2 to USB converters are complex things with interesting faults
My favorite keyboards and mice are PS/2 ones, and of course fewer and fewer PCs come with PS/2 ports (especially two of them). The obvious solution is a PS/2 to USB converter, so I recently got one at work, half as an experiment and half as stockpiling against future needs. Unfortunately it turned out to have a flaw, but it's an interesting flaw.
The flaw was that if I held down CapsLock (which I remap to Control) and then hit some letter keys, the converter injected a nonexistent CapsLock key-up event into the event stream. The effect was that I got a sequence like '^Cccc' instead of the '^C^C^C^C' I intended. This didn't happen with the real Control keys on my keyboard, and it doesn't happen with CapsLock when the keyboard is directly connected to my machine as a PS/2 keyboard. Unfortunately, holding a modifier down while typing is behavior that I reflexively count on working, so this PS/2 to USB converter is unsuitable for me.
(Someone else tested the same brand of converter on another PS/2 keyboard and saw the same thing, so this is not specific to my particular make of keyboards. For the curious, this converter was a ByteCC BT-2000.)
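If you want to see exactly what key events a converter hands to the kernel, a minimal Linux sketch along these lines with the python-evdev library will log them; the device path is an assumption you'd adjust for your machine. With a converter like this, holding CapsLock and typing letters should show a KEY_CAPSLOCK key-up event that never happened on the physical keyboard.

    # Minimal Linux sketch using the python-evdev library to log raw
    # key events. The device path is an assumption; find yours under
    # /dev/input/by-id/.
    from evdev import InputDevice, categorize, ecodes

    dev = InputDevice('/dev/input/event3')
    for event in dev.read_loop():
        if event.type == ecodes.EV_KEY:
            key = categorize(event)
            # keystate: 0 is key-up, 1 is key-down, 2 is autorepeat.
            # A KEY_CAPSLOCK event with keystate 0 while the key is
            # still physically held down is the converter's injection.
            print(key.keycode, key.keystate)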
What this really says to me is two things. The first is that PS/2 to USB converters are actually complex items, no matter how small and innocuous they seem. Going from PS/2 to USB requires protocol conversion, and where you do protocol conversion you can have bugs and issues. Clearly PS/2 to USB converters are not generic, interchangeable items; I'm probably going to have to search for one that doesn't just 'work' according to most reports but actually behaves correctly, and such a thing may not be easy to find.
(I suspect that such converters are actually little CPUs with firmware, rather than completely fixed ASICs. Little CPUs are everywhere these days.)
The second is the depressing idea that there are probably PS/2 keyboards out there that actively require this handling of CapsLock. Since it doesn't happen with the Control keys, it's not a generic bug in handling held modifier keys; instead it's behavior specific to CapsLock. People generally don't put in special oddball behavior unless they think they need to, and usually they have reasons for that belief.
(For obvious reasons, if you have a PS/2 to USB converter that works and doesn't do this, I'd love to hear about it. I suspect that the ByteCC will not be the only one that behaves this way.)
2015-08-16
My irritation with Intel's CPU segmentation (and why it probably exists)
I'd like the CPU in my next machine to have ECC RAM, for all sorts of good reasons. I'm also more or less set on using Intel CPUs, because as far as I know they're still on top in terms of performance and power efficiency. As I've written about before, this leaves me with the problem that only some Intel CPUs and chipsets actually support ECC.
(It appears that Intel will now give you a straightforward list of which CPUs support ECC, which is progress from the bad old days; there are similar lists of desktop chipsets with ECC support, and there's always the Wikipedia page.)
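Relatedly, a CPU that theoretically supports ECC is not the same as ECC actually being active; the chipset, motherboard, and DIMMs all have to cooperate. On Linux, one rough check is whether the kernel's EDAC subsystem has registered any memory controllers, as in this minimal sketch; EDAC driver coverage varies by hardware, so treat a negative result as suggestive rather than definitive.

    # Rough Linux-only check: the kernel's EDAC subsystem creates a
    # 'mcN' directory per memory controller it can report errors for,
    # which generally only happens when ECC is in use. Driver coverage
    # varies, so the absence of entries is only suggestive.
    import glob
    import os

    controllers = glob.glob('/sys/devices/system/edac/mc/mc*')
    if controllers:
        print('EDAC memory controllers found (ECC likely active):')
        for mc in sorted(controllers):
            print(' ', os.path.basename(mc))
    else:
        print('No EDAC memory controllers found; ECC is probably not active.')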
One way to describe what Intel is doing here is market segmentation. Want ECC? You'll pay more. Except it's not that simple, because the CPUs that are missing ECC support are the middle models, especially the attractive and relatively inexpensive ones in the i5 and, to a lesser extent, the i7 line (there are some high-end i7s with ECC support); at the low end there are a number of inexpensive i3s with ECC support, including recent ones. This is market segmentation with a twist.
What I assume is going on is that Intel is zealously protecting the server CPU and chipset market by keeping server makers from building servers that use attractive midrange desktop CPUs and chipsets. These CPUs provide quite a decent amount of performance, CPU cores, and so on, but because they're aimed at the midrange market they sell for not all that much compared to 'server' CPUs (and the bleeding edge of desktop CPUs), which means that Intel makes a lot less from your server. So Intel deliberately excludes ECC support from these models to make them less attractive on servers, where customers are more likely to insist on it and be willing to pay more. Similarly Intel keeps ECC support out of many 'desktop' chipsets so that they don't turn into de facto server chipsets.
(Intel could try to keep CPUs and chipsets out of servers by limiting how much memory they support, and to a certain extent Intel does. The problem for Intel is that desktop users long ago started demanding enough memory for many servers.)
At the same time, Intel supports ECC in lower-end CPUs and chipsets because there's also a market for low-cost, relatively low-performance servers; sometimes you just want a 1U server with some CPU, RAM, and disk for an undemanding purpose. This market would be just as happy to use AMD CPUs, and AMD certainly has relatively low-performance CPUs to offer (I believe with ECC; if not, AMD certainly could add it if it saw a market opening). So if you're happy with a two-core i3 in your server, or even an Atom CPU, Intel will sell you one with ECC support (and for cheap).
However much I understand this market segmentation, it obviously irritates me because I fall exactly into that midrange CPU segment. I don't want the expensive (and generally hot) high end CPUs, but I also want more than just the 2-core i3 level of performance. Since Intel is not about to give up free money, this is where I wish that they had more competition in the form of AMD doing better at making attractive midrange CPUs (with ECC).
(I think that Intel having more widespread ECC support in CPUs and chipsets would lead to motherboard companies supporting it on their motherboards, but I could be wrong.)
2015-08-09
One potential problem with cloud computing for us is the payment model
I spend a certain amount of my spare time trying to think about how we might use cloud computing, and also about reasons we might not be able to. Because of the special nature of universities, one of the potential problems for us is how it changes the payment model. As many people have observed, cloud computing replaces a large one-time, up-front cost (to buy and deploy hardware on premises) with a steady ongoing cost (the monthly cloud bill).
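To put toy numbers on the shape of that change (these figures are invented for illustration, not real quotes for real capacity):

    # Invented illustrative numbers: a one-time hardware purchase
    # amortized over its service life versus a steady monthly cloud
    # bill for notionally equivalent capacity.
    hw_cost = 6000.0        # one-time, up-front purchase
    hw_life_months = 60     # five years of service
    cloud_monthly = 150.0   # ongoing monthly bill

    print('hardware, amortized: $%.2f/month' % (hw_cost / hw_life_months))
    print('cloud:               $%.2f/month' % cloud_monthly)
    print('cloud over the same five years: $%.0f'
          % (cloud_monthly * hw_life_months))

The totals can come out either way depending on the real numbers; the structural difference is that the $6,000 is paid once out of one-time money, while the $150 has to survive every future budget.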
In many environments, this shift is an attractive change all by itself; it lowers your initial costs, it lets you scale things down if you turn out not to need as much as you expected, it leads to smoother budgets, and so on. Unfortunately, universities are not anything like normal environments. In particular, in universities it is much easier to get 'one time only' money than it is to get an ongoing budget, and even once you've got an ongoing budget the extra challenge is holding on to it for years to come.
The flip side of cloud computing having a steady ongoing cost is that once you buy into cloud computing, you are committed to that cost. Your (cloud-based) infrastructure requires you to keep paying for it every month. Fail to pay at all and you get turned off; be unable to pay for all of it and you have to reduce your infrastructure, shrinking or losing services. By contrast, physical hardware bought with one-time money is yours until it falls apart beyond your ability to fix, no matter what happens to budgets in the future. And if it does start to fall apart (and it's important), the odds are pretty good that you can scrounge up some more one-time money to keep things going.
Perhaps I have just existed in an unusual university environment, but my experience is that ongoing budgets are far from secure no matter what you might have been promised. Sooner or later a big enough budget cut will come along and, well, there you are. This is of course not an issue that's unique to universities, but the lack of an ROI does make it harder to mount certain 'this is worth spending the money' arguments in defense of your ongoing budget.
(As was pointed out to me recently, it's also not enough to just hold on to a fixed-dollars ongoing budget. Your ongoing budget really needs to be adjusted to account for 'inflation', in this case any increases in cloud computing prices or changes in your provider's charging models that mean you pay more.)
On the other side, having a monthly cloud computing bill might make it easier to defend the corresponding ongoing budget item, precisely because any cuts directly require immediate reductions in services. A budget reduction wouldn't be an abstract thing or a matter of living with older hardware for longer; it'd be 'we will have to stop doing X and Y'.