2015-11-29
My feelings about my mechanical mini keyboard
I wrote a while back about my temptation to find a good USB mini keyboard to replace my old PS/2 mini keyboard. Well, I recently gave in to that temptation and got a mechanical mini keyboard, specifically the Matias Mini Quiet Pro, which seemed close to the best candidate I could find. Although I've only been using it for a few weeks, I want to write down my initial feelings now, before my memories of what it was like to use my old keyboard fade too much.
In general, well, the keyboard works (including in the BIOS of my machine when it boots up). It's slightly bigger than my old keyboard (which set off some other changes that are another entry) and it's significantly heavier. It is indeed quiet, just as claimed; it's probably slightly louder than my old keyboard, but not by very much. Certainly it doesn't sound like a clacky mechanical keyboard.
This is effectively my first mechanical keyboard, so I can't say anything about how it feels as compared to other mechanical keyboards. Compared to my old rubber-dome keyboard it feels subtly nicer. On my old rubber-dome keyboard, typing away for long enough could leave me with a low-level feeling that I was slamming my fingers against something hard, probably from bottoming out the keys against their hard stop (I felt it most acutely in the outer fingers of my left hand, which could spend a bunch of time banging on the control and shift keys). This was never painful or anything but it was something that I was definitely aware of. I don't really feel that with my new keyboard, and it's definitely something that I've looked for.
Because the new keyboard's a bit different in size, the actual physical keycaps are in slightly different positions from my old keyboard. This has given me a great demonstration of just how acclimatized my reflexes were (and are) to the exact position of the old keyboard's keys; although I've been getting better over time, I still keep missing keys every so often, usually when I'm just starting to type something. I've also apparently been extremely attuned to just how much pressure I needed to lift from things like the shift and control keys in order to un-shift and un-control things. This is different on the new keyboard, and as a result, well, I've been fixing a certain amount of surprise capital letters and so on. This too has been improving with time. In a few months I'll probably be just as finely tuned to the new keyboard.
(I gave that a helping hand by persuading work to get me one of them too, so I don't have to go back and forth between different keyboards at home and at work.)
The different relative locations of the various function keys have revealed that a number of my window manager key assignments are quite sensitive to the physical locations of the function keys I was using. The Matias's different key positions have moved several function keys further away from my regular hand position than they were before, making it comparatively less convenient to invoke those window manager bindings. I'm still considering how (and whether) I want to shuffle key bindings around, but I'll probably wind up making changes sooner or later. Right now I'm being conservative, because I may find that I get used to the new positions in the long run and they're not really any worse than before.
(I've already added one adaptation to my environment, but that's another entry.)
2015-11-23
PC laptop and desktop vendors are now clearly hostile parties
You may have heard of Lenovo's SuperFish incident, where Lenovo destroyed HTTPS security on a number of their laptops by pre-installing root certificates with known private keys. (Anyone holding such a private key can mint certificates that the affected machines will trust for any website, making invisible man-in-the-middle interception of HTTPS traffic trivial.) Well, now Dell's done it too, and not just on consumer laptops, and with not just one bad certificate but several. One could rant about Dell here, but there's a broader issue that's now clear:
PC vendors have become hostile parties that you cannot trust.
Dell has a real brand. It sells to businesses, not just consumers. Yet Dell was either perfectly willing to destroy the security of business-oriented desktops or sufficiently incompetent not to understand what they were doing, even after SuperFish. And this was not just a little compromise where a certificate was accidentally included in the trust store; we know that because a Dell program that runs on startup puts the certificate back in even when it's removed. This was deliberate. Dell decided that they were going to shove this certificate down the throat of everyone using their machines. The exact reasons are not relevant to people who have now had their security compromised.
If Dell can do this, anyone can, and they probably will if they haven't already done so. The direct consequence is that all preinstalled vendor Windows setups are now not trustworthy; they must be presumed to come from a hostile party, one that has actively compromised your security. If you can legally reinstall from known good Microsoft install media, you should do that. If you can't, well, you're screwed. And by that I mean that we're all screwed, because without trust in our hardware vendors we have nothing.
Given that Dell was willing to do this to business desktops, I expect that sooner or later someone will find similar vendor malware on preinstalled Windows images on server hardware (if they haven't already). Of course, IPMIs on server hardware are already an area of serious concern (and often security issues all on their own), even before vendors decide to start equipping them with features to 'manage' the host OS for you in the same way that the Dell startup program puts Dell's terrible certificate back even if you remove it.
(Don't assume that you're immune on servers just because you're running Linux instead of Windows. I look forward to the grim meathook future (tm jwz) where server vendors decide to auto-insert their binary kernel modules on boot to be helpful.)
Perhaps my gloomy cloud world future without generic stock servers is not so gloomy after all; if we can't trust generic stock servers anyways, their loss is clearly less significant. Smaller OEMs are probably much less likely to do things like this (for multiple reasons).
2015-11-20
What modern version control systems are
If you read about new version control systems these days, it's very common to see them put forward as essentially the expression or manifestation of mathematics. Maybe it's graph theory, maybe it's patch theory, but the basic idea is that you build up some formal model of patching or version control and then build a VCS system that implements it. This is not restricted to recent VCSes, either; version control as a whole has long had a focus on formally correct operations (and on avoiding operations that were not formally correct).
It is my new belief that this is a terrible misunderstanding of the true role of a VCS, or at least a usable VCS that is intended for general use. Put simply, in practice a VCS is the user interface to the formal mathematics of version control, not the actual embodiment of those mathematics. The job of a good VCS is to sit between the fallible, normal user (who does not operate in the domain of formal math) and the underlying formal math, working away to convert what the user does to the math and what the math says to what the user can understand and use.
As a user interface, a VCS must live in the squishy world of human factors, not the pure world of mathematics. That's its job; it's there to make the mathematics widely usable. This is going to frequently mean 'compromising' that mathematical purity, by which we really mean 'translating what the user wants to do into good mathematics'. I put 'compromise' in quotes here because this is only a compromise if you really think that the user should always directly express correct mathematics.
(We know for sure that users will not always do so, so the only way to pretend otherwise is to spit out error messages any time what the user attempts to do is incorrect mathematics (including the error message of 'no such operation').)
Does this mean that the mathematics is unimportant? Not at all, any more than your skeleton is unimportant in determining your shape. The underlying mathematics can and should shape the user experience that the VCS puts forward (and so different formal models of version control will produce VCSes with different feels). After all, one job of a UI is to steer users into doing the right thing by making it the easy default, and the 'right thing' here is partly determined by the specific math.
PS: The exception to this view of VCSes is a VCS written as an academic exercise to prove that a particular set of version control mathematics can actually be implemented and work. This software is no more intended (or suitable) for general use than any other software from academic research.
2015-11-18
VCS bisection steps should always be reversible
So this happened:
@thatcks: I think I just ruined my bisect run with one errant 'hg bisect --bad', because I can't see a way to recover from it in the Mercurial docs.
This is my extremely angry face. Why the hell won't Mercurial give me a list of the bisect operations I did? Then I could fix things.
Instead I appear to have just lost hours of grinding recompilation to a UI mistake. And Mercurial is supposed to be the friendly VCS.
VCS bisection is in general a great thing, but it's also a quite mechanical, repetitive process. Any time you have a repetitive process that's done by people, you introduce the very real possibility of error; when you do the same thing five times in a row, it's very easy to accidentally do it the sixth time. Or to just know that you want the same command as the time before and simply recall it out of your shell's command history, except that nope, your reflexes were a bit too fast off the mark there.
(It's great when bisection can be fully automated but there are plenty of times when it can't because one or more of the steps requires human intervention to run a test, decide if the result is correct, or the like. Then you have a human performing a series of steps over and over again but they're supposed to do different things at the end step. We should all know how that one goes by now.)
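(For what it's worth, the fully automated case is pretty simple in both Mercurial and git, because both can drive the whole bisection from the exit status of a test command. A minimal sketch for Mercurial, where ./check.sh is a stand-in for whatever test you have and is assumed to exit 0 on a good revision and non-zero on a bad one:

    hg bisect --reset              # clear any previous bisection state
    hg bisect --bad                # the current revision is known bad
    hg bisect --good 1000          # revision 1000 is known good
    hg bisect --command ./check.sh

Mercurial interprets an exit status of 0 as good, 125 as 'skip this revision', and anything else as bad, so the loop runs to completion without you touching anything.)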
So inevitably, sooner or later people are going to make a mistake during the bisection process. They're going to reflexively mark the point under testing as good when it's actually bad, or mark it as bad when they just intended to skip it, or all of the other variants. It follows directly that a good bisection system that's designed for real people should provide ways to recover from this, to say 'whoops, no, I was wrong, undo that and go back a step' (ideally many steps, all the way back to the start). Bisection systems should also provide a log, so that you can see both what you did and the specific versions you marked in various ways. And they should document this clearly, of course, because stressed out people who have just flubbed a multi-hour bisection are not very good at carefully reading through three or four different sections of your manual and reasoning out what bits they need to combine, if it's even possible.
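(Git's bisection happens to be an existence proof that all of this is quite doable; it keeps exactly such a log and can replay an edited version of it. A sketch of recovering from one errant mark, assuming you notice the mistake before ending the bisection:

    git bisect log > bisect.log    # save the record of every mark made so far
    # edit bisect.log to delete the one erroneous 'git bisect bad <rev>' line
    git bisect replay bisect.log   # reset and re-apply the remaining marks

'git bisect replay' resets the bisection and then re-applies the marks left in the log, which is effectively the 'whoops, undo that and go back a step' operation that I want.)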
Of course, this sort of thing is not strictly speaking necessary. Bisection works just fine without it, provided that people don't make mistakes, and if people make mistakes they can just redo their bisection run again from the start. A bisection system with no log and no undo has a pleasantly mathematical sort of minimalism. It's just not humane, as in 'something that is intended to be used by actual humans and thus to cope with their foibles and mistakes'.
Overall, I suppose I shouldn't be surprised. Most version control systems are heavily into mathematical perfection and 'people should just do it right' in general.
(This is a terrible misunderstanding but that's another entry.)
2015-11-16
On public areas on the Net and conversations therein
There have been essentially public areas on the 'net for a long time (from the time before the 'net was the Internet). In all of that time, a pattern that repeats over and over is that they get used for what I'll call closed discussions among an in-crowd. These discussions happen in public (in a Usenet newsgroup, on a public mailing list or website, on IRC, on Twitter, etc) so they're not private, but they're not public in the usual sense because they're not open to outside participants to butt in on. When this closed nature is not supported by the technology of the medium (which it usually isn't), it will instead be supported by social mores and practices, including ignoring people and the equivalent of mail filters and Usenet killfiles. There may be flaming or mockery of transgressors involved, too.
(If you want an analogy, what is going on is much like a group of people having a discussion at a restaurant table with you one table over. You can hear them fine and they are in 'public', but of course very few people think it's correct to turn around and join in their discussion and doing so rarely gets a good reaction from the group.)
What this means is that a conversation taking place in nominal public is not necessarily an open invitation for outside people to comment, and if they do they may be summarily ignored or find that there are bad reactions to their words. Equally, it's wrong to assert something like 'all conversations in public must include anyone who wants to participate' or the equivalent, because this is not how things work in practice in the real world (either on the 'net or off it).
As I mentioned, people on the 'net have been doing this with public spaces for a very long time now; this behavior is not at all novel or unusual. People who are shocked, shocked to see this happening in any particular instance (especially when they are shoving themselves into other people's discussions) are at best disingenuous. Wherever they are, people make groups and then talk among themselves.
There are also genuinely open public discussions in those public areas of the 'net, which creates obvious possibilities for confusion and misunderstandings. The cues for closed discussions are not always clear, and some number of closed discussions are in practice only semi-closed; if you fit in, you can join in the conversation (indeed, this is how many such discussion groups expand). One way to assess the line between good-faith misunderstandings of a situation and something else is the degree of stubborn persistence exhibited by the outsider.