Wandering Thoughts archives

2015-10-24

How my PS/2 to USB conversion issues have shaken out

In this entry, I covered my problems with trying to find a good PS/2 to USB converter so that I could keep using my favorite keyboard and mice. Since then I have more or less come up with a solution, although it's not a complete one.

Based on Chris Wage's recommendation, we got a PS/2 to USB converter that's particularly beloved by IBM Model M fans. Specifically, the ZioTek 'PS2 Keyboard & Mouse to USB Adapter'; here is its Amazon listing (which is where we got it). This appears to do an excellent job of converting PS/2 keyboards to USB; I used it for several days on my office machine without noticing anything different or experiencing any problems. Unfortunately it's not as good with my PS/2 mouse. Oh, the mouse works, but it's kind of jerky and stuttery compared to the smooth pointer movement I get through the PS/2 port. The result is usable but not pleasant. Fortunately the ZioTek converter can be used with the keyboard alone.

(As always, one never knows if these things are going to stay in production and remain unchanged over the long term. Since this model's cheap, I intend to buy several so I have spares.)

This is not an ideal situation, of course, but I have at least three options. First, now I only really need a single PS/2 port on any new machines, for the mouse (although I'd still like two). Second, I can try to find a PS/2 to USB converter that handles the mouse smoothly even if it mangles keyboard input and just use it for my mouse. And third, I can explore USB mice, either the Contour Mouse or (as suggested by a commentator here) a mouse with a side thumb button that I can repurpose as the middle mouse button. And who knows; by the time I exhaust the first two options, perhaps someone will be making a genuine three button USB mouse.

(The tempting crazy option is to sail off into the uncertain waters of mechanical USB keyboards, or even that and a Contour Mouse. In a way, it feels like I have computer chair reluctance here; it's quite possible that the results would be clearly better than my current setup.)

PS: The one thing I haven't tested is how well the ZioTek converter works in the BIOS during early boot (apparently some USB keyboards and converters have issues here). On the other hand, Amazon reviewers seem to have had this work.

PS2ToUSBMyPlans written at 02:03:32; Add Comment

2015-10-17

Do generic stock servers have a future in a cloud world?

One of the things I've been reading lately is a certain amount of PR about 'the cloud' and about how most everyone with a good-sized private datacenter will wind up moving to the cloud instead because they just can't compete with the economics. I don't know enough to have a firm opinion, but the people saying this seem to make a plausible case (both about how much more efficiently Amazon can operate their servers than you can operate yours and about how hard it'd be to match all of their management tools with your own software). Given that this future might come to pass, I got to wondering: what happens to stock servers?

The people running Amazon and Google and Facebook and so on are not buying off the rack Dell/HP/Lenovo/etc servers; one of the reasons they can be more efficient than you is that they use custom designs that are adapted to their exact environment. Instead, the people buying those servers in bulk are exactly the big datacenters that are supposed to move to the cloud. Currently, all of the rest of us smaller people buying servers have very likely been benefiting from the volume of big datacenters, since high product volume drives down prices and pays off custom engineering and so on; selling tons of generic 1U servers has to be part of why they've become relatively inexpensive. But what happens to the generic stock server market if that big datacenter volume goes away? As the number of people buying their own servers shrinks, will we still have inexpensive stock servers to buy?

One possible answer is that there's enough volume in the small business sector to sustain at least some of the major players and keep stock servers inexpensive and available. I don't know if I believe this, although I also have no idea how large this market segment actually is. Another potential answer is that while big datacenters in the well connected West may shrink a lot, there are plenty of places where issues like bandwidth and latency will mean that local companies (both big and small) have local servers and this will sustain the server market in general.

Or we may lose those cheap, convenient, readily available servers from companies like Dell, which would probably leave us buying more servers from smaller OEMs like SuperMicro. They would cost more, which would be a bummer, but they might not cost lots more, especially if we got 2U or 3U units instead of 1U ones. We're lucky enough to not really be rack space constrained; for everyone else, well, in the cloud-heavy future the colocation operators may drop their prices for rack space due to reduced demand.

(Before 1U servers became generic popcorn, my impression was that you paid extra for squeezing all of the necessary components into such a small space. I suspect that this is not the case today, due to the high volume.)

StockServerCloudFuture written at 00:56:47; Add Comment

2015-09-30

Why I can't see IPv6 as a smooth or fast transition

Today I got native IPv6 up at home. My home ISP had previously been doing tunneled IPv6 (over IPv4), except that I'd turned my tunnel off back in June for some reason (I think something broke and I just shrugged and punted). I enjoyed the feeling of doing IPv6 right for a few hours, and then, well:

@thatcks: The glorious IPv6 future: with IPv6 up, Google searches sometimes just cut off below the initial banner and search box.
For bonus points, the searches aren't even going over IPv6. Tcpdump says Google appears to RSET my HTTPS TCPv4 connections sometimes.

(Further staring at packet traces makes me less certain of what's going on, although there are definitely surprise RST packets in there. Also, when I said 'IPv6 up', I was being imprecise; what makes a difference is only whether or not I have an active IPv6 default route so that my IPv6 traffic can get anywhere. Add the default route (out my PPPoE DSL link) and the problems start to happen; delete it and everything is happy.)
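
(As a concrete illustration of the sort of check involved, here's a rough Python sketch that counts TCP resets from remote HTTPS servers in a saved packet capture. It's only a sketch: it assumes the scapy library is installed, and the capture file name is made up.)

    # Count TCP RST packets from remote HTTPS servers in a saved capture.
    # 'google-v4.pcap' is a hypothetical tcpdump capture of the problem.
    from scapy.all import rdpcap, IP, TCP

    RST_FLAG = 0x04

    packets = rdpcap("google-v4.pcap")
    resets = [p for p in packets
              if p.haslayer(IP) and p.haslayer(TCP)
              and p[TCP].sport == 443
              and int(p[TCP].flags) & RST_FLAG]

    for p in resets:
        print(p[IP].src, "->", p[IP].dst, "RST, seq", p[TCP].seq)
    print(len(resets), "resets out of", len(packets), "packets")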

Every so often someone says that the networking world should get cracking on the relatively simple job of adopting and adding IPv6 everywhere. Setting aside anything else involved, what happened to me today is why I laugh flatly at anyone who thinks this. IPv6 is simple only if everything works right, but we have plenty of existence proofs that it does not. Enabling IPv6 in a networking environment is a great way to have all sorts of odd problems come crawling out of the woodwork, some of which don't seem like they have anything to do with IPv6 at all.

It would be nice if these problems and stumbling points didn't happen, and certainly in the nice shiny IPv6 story they're not supposed to. But they do, and combined with the fact that IPv6 is often merely nice rather than necessary, I think many networks won't be moving very fast on IPv6. This makes a part of me sad, but it's the same part of me that thinks that problems like mine just shouldn't happen.

(I don't think I'm uniquely gifted in stumbling over IPv6 related problems, although I certainly do seem to have bad luck with it.)

IPv6ComplicationsAgain written at 03:09:58; Add Comment

2015-09-29

Maybe I should try to find another good mini keyboard

As I've mentioned a few times before, I've been using one particular mini keyboard for a very long time now and I've become very attached to it. It's thoroughly out of production (although I have spares) and, worse, it uses a PS/2 interface, which presents problems in the modern world. One solution is certainly to go to a lot of work to keep on using it anyway, but I've been considering whether I should instead try to find a modern replacement.

Some people are very attached to very specific keyboards for hard-to-replicate reasons; just ask any strong fan of the IBM Model M. But I'm not really one of them. I'm attached to having a mini keyboard that's not too minimal (the Happy Hacking keyboard goes too far) and has a reasonably sensible key layout, and I'd like to not have space eaten up by a Windows key that I have no use for, but I'm not attached to the BTC-5100C itself. It just happened to be the best mini keyboard we found fifteen or more years ago when we looked around for them, or at least the best one that was reasonably widely available and written about.

The keyboard world has come a long way in the past fifteen years or so. The Internet has made it much easier for enthusiasts to connect with each other and for specialist manufacturers to serve them and spread awareness of their products, making niche products much more viable and thus more available. And while I like the BTC-5100C, I suspect that it is not the ultimate keyboard in terms of key feel and niceness for typing; even when it was new, it was not really a premium keyboard. In particular, plenty of people feel that mechanical keyboards are the best way to go, and there are certainly any number of mechanical mini keyboards (as I've seen on the periodic occasions when I do Internet searches about this).

So I've been considering trying a USB mechanical mini keyboard, just as I've sometimes toyed with getting a three-button mouse with a scroll wheel. So far what's been stopping me has been the same thing in both cases, namely how much these things cost. I think I'm willing to pay $100 for a good keyboard I like that'll probably last the near side of forever, but it's hard to nerve myself up to spending that much money without being certain first.

(Of course, some or many places offer N-day money back guarantees. While shipping things back is likely to be kind of a pain, perhaps I should bite the bullet and just do it. Especially since I have a definite history of hesitating on hardware upgrades that turn out to be significant. One of the possible keyboards is even Canadian.)

(Of course there's a Reddit board for mechanical keyboards. I'll have to read through their pages.)

Sidebar: What I want in a mini keyboard layout

Based on my experiences with trying out a Happy Hacking keyboard once (and a few other mini keyboards), my basic requirements are:

  • a separate row of function keys for F1 through F10. I simply use function keys too much to be satisfied with a very-mini layout that has only four rows (the numbers row and then the Q/A/Z letter rows) and gets at function keys via an 'Fn' modifier key.

  • actual cursor keys; again, I use them too much to be happy having to shift with something to get them.

  • Backspace and Delete as separate keys. I can live with shifted Insert.
  • Esc as a real (unshifted) key. Vi people know why.

  • A SysRq key being available somehow, as I want to keep on being able to use Linux's magic SysRq key combos. This implies that I actually have to be able to use Alt + SysRq + letters and numbers (there's a small check sketch below the list).

    (I may have to give this up.)

(I think this is called a '75%' layout on Reddit.)
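
(For the SysRq requirement, whether those key combos do anything also depends on the kernel's sysrq setting, so here's a tiny check sketch. The bitmask values are the standard Linux ones; the /proc/sysrq-trigger note is an aside rather than a real substitute for a key.)

    # Check whether Linux magic SysRq is enabled at all.
    # /proc/sys/kernel/sysrq holds a bitmask: 0 means disabled, 1 means
    # all functions enabled, other values enable specific subsets.
    with open("/proc/sys/kernel/sysrq") as f:
        mask = int(f.read().strip())

    if mask == 0:
        print("magic SysRq is disabled; Alt+SysRq combos won't do anything")
    elif mask == 1:
        print("all magic SysRq functions are enabled")
    else:
        print("magic SysRq is partially enabled, bitmask", hex(mask))

    # A keyboard without a usable SysRq key isn't a total loss: root can
    # still trigger the same functions by writing the command letter to
    # /proc/sysrq-trigger, eg 'echo s > /proc/sysrq-trigger' to sync.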

A sensible location for Esc would be nice but frankly I've given up on that; people have been moving Esc off to the outer edges of the keyboard for decades. The last keyboard I saw with a good layout there was the NCD Unix keyboard (which I now consider too big).

The good thing about having these basic requirements is that I can actually rule out a lot of keyboards based purely on looking at pictures of them, without having to hunt down reviews or commentary or the like.

MiniKeyboardContemplation written at 02:14:35; Add Comment

2015-09-23

One thing I'm hoping for in our third generation fileservers

If all goes according to my vague schedule, we should be at least starting to plan our third generation of fileservers in 2018, when our second generation fileservers are four years old. 2018 is not all that far off now, so every so often I think a bit about what interesting things might come up from the evolution of technology over the next few years.

Some things are obvious. I certainly hope our entire core network is reliable 10G (copper) Ethernet by 2018, for example, and I optimistically hope for at least doubling and ideally quadrupling the memory in fileservers (from 64 GB to 128 GB or 256 GB). And it's possible that we'll be completely blindsided by some technology shift that's currently invisible (eg a large scale switch from x86 to ARM).

(I call a substantial increase in RAM optimistic because RAM prices have been remarkably sticky for several years now.)

One potential change I'm really looking forward to is moving to all-SSD storage. Running entirely on SSDs would likely make a clear difference to how responsive our fileservers are (especially if we go to 10G Ethernet too), and with the current rate of SSD evolution it doesn't seem out of the bounds of reality. Certainly one part of this is that the SSD price per GB of storage keeps falling, but even by 2018 I'll be surprised if it's as cheap as relatively large HDs. Instead one reason I think it might be feasible for us is that the local demand for our storage just hasn't been growing all that fast (or at least people's willingness to pay for more storage seems moderate).

So let me put some relatively concrete numbers on that. Right now we're using 2 TB HDs and we have only one fileserver that's got more than half its space allocated. If space growth stays modest through 2018, we could likely replace the 2 TB HDs with, say, 3 TB SSDs and still have growth margin left over for the next four or five years. And in 2018, continued SSD price drops could easily make such SSDs cost about as much as what we've been paying for good 2TB 7200 RPM HDs. Even if they cost somewhat more, the responsiveness benefits of an all-SSD setup are very attractive.

(At a casual check, decent 2TB SSDs are currently somewhere around 4x to 5x more expensive than what we paid for our 2 TB HDs. Today to the start of 2018 gives them two years and a bit to cover that price ground, which may be a bit aggressive.)
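
(As a rough back-of-the-envelope check of how aggressive, here is the yearly price decline SSDs would need in order to close that gap, taking the 4x to 5x premium above and calling 'two years and a bit' roughly 2.3 years; the numbers are obviously only illustrative.)

    # What yearly price decline would SSDs need to close a 4x-5x price
    # gap with HDs in roughly 2.3 years?  Figures are illustrative.
    years = 2.3

    for premium in (4.0, 5.0):
        # We need (1 - decline) ** years == 1 / premium.
        decline = 1 - (1 / premium) ** (1 / years)
        print("%gx premium needs a ~%.0f%% price drop per year"
              % (premium, decline * 100))

    # This works out to roughly 45% a year for 4x and 50% a year for 5x,
    # which is why closing the gap completely looks aggressive.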

SSDFileserverHope written at 02:04:19; Add Comment

2015-09-16

There are two different scenarios for replacing disks in a RAID

One possible reply to my annoyance at btrfs being limited to two-way mirrors is to note that btrfs, like many RAID systems, allows you to explicitly replace disks. While btrfs is in the process of doing this, it maintains three-way redundancy (as do some but not all RAID systems); only at the end, with the new disk fully up and running, does it drop the old disk. If something goes wrong with the new disk during this process, you are presumably no worse off than before. This is certainly better than the alternative, but it's not great because it misses one use case. You see, there are two different scenarios for replacing disks here.

In the first scenario you are replacing a dying disk (or at least what you think is one). You don't trust it and the sooner you get data off it the better. As a result, a new unproven disk is strictly better than the old disk because at least the new disk (probably) isn't about to die. Discarding the old disk the moment all the data is fully copied to the new disk is perfectly fine; you trust the new disk at least as much as the old disk.

In the second scenario you are replacing a (currently) good disk with what you think is a better one; it has more space, it is more modern, it is a SSD instead of a HD, whatever. However, this new disk is unproven. It could have infant mortality, bad performance, or just some sort of compatibility problem in your environment for various reasons. You trust it less than the old proven disk (which is known to work fine) and so you really don't want to just discard the old disk once the data is fully copied to the new disk. You want the new disk to prove itself for a while before you fully trust it and you want to preserve your redundancy while that trust is being built (or lost, if there are problems).

It is generally the second disk replacement scenario where people want persistent N-way redundancy. Certainly it's where I want it. N-way redundancy during the data copy from the old drive to the new drive is not good enough, because the new drive doesn't really get proven enough during just that.

Unfortunately the second scenario probably works best with mirroring. It's my view that good RAID-[56] systems should have some way to have a component device that's actually two devices paired together, but they're unlikely to want to have this in routine operation for long.

(A RAID-[56] system that supports a true 'replace' operation needs the ability to run two disks in parallel for a while as data copies over. Ideally it would be doing reads from the new disk as well as from the old disk just in case the new disk writes fine but has problems on reads.)
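
(To make this concrete, here's a rough sketch of what the two approaches look like with btrfs specifically; the device names and mount point are made up. The point is that 'replace' keeps the old disk around, and readable, only until the copy finishes.)

    # Sketch of swapping a disk under btrfs; device names and the mount
    # point are made up for illustration.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # The explicit replace: the old disk stays in the filesystem (and can
    # still serve reads) while data copies to the new one, so redundancy
    # is maintained during the copy -- but only during it.
    run("btrfs", "replace", "start", "/dev/sdb", "/dev/sdd", "/data")
    run("btrfs", "replace", "status", "/data")

    # The older alternative: add the new disk, then delete the old one,
    # which migrates data off it.  The end state is the same, and again
    # the old disk is gone the moment the copy is done, not after the
    # new disk has had a chance to prove itself.
    #run("btrfs", "device", "add", "/dev/sdd", "/data")
    #run("btrfs", "device", "delete", "/dev/sdb", "/data")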

RaidReplaceDiskScenarios written at 01:28:59; Add Comment

2015-09-04

Consistency and durability in the context of filesystems

Here's something that I've seen trip people up more than once when they talk about filesystems. When we talk about what guarantees a filesystem provides to programs that write data to it, we can talk about two things and the difference between them can be important.

Durability is when you write something or change the filesystem and it's still there after the system crashes or loses power unexpectedly. Durability is what you need at a high level to say 'your email has been received' or 'your file has been saved'. As everyone hopefully knows, almost no filesystem provides durability by default for data that you write to files and many don't provide it for things like removing or renaming files.

What I'll call consistency is basically that the filesystem preserves the ordering of changes after a crash. If you wrote one thing then wrote a second thing and then had the system crash, you have consistency if the system will never wind up in a state where it still has the second thing but not the first. As everyone also hopefully knows, most filesystems do not provide data consistency by default; if you write data, they normally write bits of it to disk whenever they find it convenient without preserving your order. Some but not all filesystems provide metadata consistency by default.

(Note that metadata consistency without data consistency can give you odd results that make you unhappy. Consider 'create new file A, write data to A, remove old file B'; with metadata consistency and no data consistency or forced durability, you can wind up with an empty new file A and no file B.)
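
(If an application cares, it has to impose the ordering itself with explicit flushes. Here's a minimal sketch of the 'write new file A, then remove old file B' case; the file names are made up and error handling is omitted.)

    # Force 'A is durable before B goes away' by hand with fsync().
    import os

    def write_then_remove(new_path, old_path, data):
        # Write A and flush its data to disk.
        fd = os.open(new_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)        # A's contents are now durable
        finally:
            os.close(fd)

        # Also flush the directory so A's name is durable.
        dirfd = os.open(os.path.dirname(new_path) or ".", os.O_RDONLY)
        try:
            os.fsync(dirfd)
        finally:
            os.close(dirfd)

        # Only now is it safe to remove B; a crash after this point can
        # no longer leave you with an empty A and no B.
        os.remove(old_path)

    write_then_remove("data.new", "data.old", b"replacement contents")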

Durability and consistency are connected but one does not necessarily require the other except in the extreme case of total durability (which necessarily implies total consistency). In particular, it's entirely possible to have a filesystem that has total consistency but no durability at all. Such a filesystem may rewind time underneath applications after a crash, but it will never present you with an impossible situation that didn't exist at some pre-crash point; in the 'write A, write B, crash' case, you may wind up with nothing, A only, or A and B, but you will never wind up with just B and no A.

(Because of its performance impact, most filesystems do not make selective durability of portions of the filesystem impose any sort of consistency outside of those portions. In other words, if you force-flush some files in some order, you're guaranteed that your changes to those files will have consistency but there's no consistency between them and other things going on.)

Applications not infrequently use forced flushes to create either or both of durability (the DB really has committed the data it told you it did) and consistency (the DB's write-ahead log already contains any changes that are in the DB data files, because the log was flushed first). In some environments, turning off durability but retaining or creating consistency is an acceptable tradeoff for speed.

(And some environments don't care about either, because the fix procedure in the face of an extremely rare system crash is 'delete everything and restart from scratch'.)

Note that journaled filesystems always maintain consistent internal data structures but do not necessarily extend that consistency to what you see, even for metadata operations. A journaled filesystem will not explode because of a crash, but it may still partially apply your file creations, renames, deletions and so on out of order (or at least out of what you consider order). However, it's reasonably common for journaled filesystems to have fully consistent metadata operations, partly because that's usually the easiest approach.

(This has some consequences for developers, along the same lines as the SSD problem but more so since it's generally hard to test against system crashes or spot oversights.)

FSConsistencyAndDurability written at 01:10:32; Add Comment

2015-08-29

The mailing list thread to bug tracking system problem

I will start with the thesis: open source projects would benefit from a canonical and easy way to take a mailing list message or thread and turn it into an issue or bug report in their bug tracking system.

It's entirely natural and rather normal for a bug report to start with someone uncertainly asking 'is this supposed to happen?' or 'am I understanding this right?' questions on one of your mailing lists. They're not necessarily doing this because they don't know where to report bugs; often they may be doing it because they're not sure that what they're seeing is a bug (or at least a new bug), or they don't know how to file what your project considers a good bug report, and they don't want to take the hit of a bad bug report. It's usually easier to ask questions on a mailing list, where some degree of ignorance is expected and accepted, than to venture into what can be a sharp-edged swamp of a bug tracker.

If the requester is energetic, they'll jump through a lot of extra hoops to actually file a bug report once they've built up their confidence (or just been pointed to the right place). But in general, the more difficult the re-filing process is the fewer bug reports you're going to get, while the easier it is the more you'll get.

This leads me to my view that most open source projects make this too hard today, usually by having no explicit way to do it because their mailing list systems and bug tracking systems are completely separate things. Maybe this separation can be overcome through cultural changes so that brief pointers to mailing list messages or cut and paste copies from mailing list threads become acceptable as bug reports.

(My impression, perhaps erroneous, is that most open source projects want you to rewrite your bug reports from more or less scratch when you make them. The mailing list is the informal version of your report, the bug tracker gets the 'formal' one. Of course the danger here is that people just don't bother to write the formal version for various reasons.)

PS: I admit that one reason I've wound up feeling this way is that I'm currently sitting on a number of issues that I first raised on mailing lists and still haven't gotten around to filing bug reports for. And by now some of them are old enough that I'd have to reread a bunch of stuff just to recover the context I had at the time and understand the whole problem once again.

MailingListToBugReport written at 02:27:12; Add Comment

2015-08-24

PS/2 to USB converters are complex things with interesting faults

My favorite keyboard and mice are PS/2 ones, and of course fewer and fewer PCs come with PS/2 ports (especially two of them). The obvious solution is PS/2 to USB converters, so I recently got one at work; half as an experiment, half as stockpiling against future needs. Unfortunately it turned out to have a flaw, but it's an interesting flaw.

The flaw was that if I held down CapsLock (which I remap to Control) and then hit some letter keys, the converter injected a nonexistent CapsLock key-up event into the event stream. The effect was that I got a sequence like '^Cccc'. This didn't happen with the real Control keys on my keyboard, only with CapsLock, and it doesn't happen with CapsLock when the keyboard is directly connected to my machine as a PS/2 keyboard. Unfortunately this is behavior that I reflexively count on working, so this PS/2 to USB converter is unsuitable for me.

(Someone else tested the same brand of converter on another PS/2 keyboard and saw the same thing, so this is not specific to my particular make of keyboards. For the curious, this converter was a ByteCC BT-2000.)
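
(For what it's worth, one way to see exactly what a converter is doing is to watch the raw key events it produces. Here's a rough sketch using the python-evdev module; the input device path is made up and you'll generally need root.)

    # Watch raw CapsLock events from the keyboard's input device to catch
    # a converter injecting a spurious key-up.  Needs python-evdev.
    from evdev import InputDevice, ecodes

    dev = InputDevice("/dev/input/event3")    # hypothetical device node
    states = {0: "up", 1: "down", 2: "autorepeat"}

    for event in dev.read_loop():
        if event.type == ecodes.EV_KEY and event.code == ecodes.KEY_CAPSLOCK:
            print("CapsLock", states.get(event.value, event.value))

    # While physically holding CapsLock down and typing letters, any 'up'
    # printed here is an event that the converter invented.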

What this really says to me is two things. The first is that PS/2 to USB converters are actually complex items, no matter how small and innocuous they seem. Going from PS/2 to USB requires protocol conversion, and when you do protocol conversion you can have bugs and issues. Clearly PS/2 to USB converters are not generic items; I'm probably going to have to search for one that doesn't just 'work' according to most reports but that actually behaves correctly, and such a thing may not be easy to find.

(I suspect that such converters are actually little CPUs with firmware, rather than completely fixed ASICs. Little CPUs are everywhere these days.)

The second is the depressing idea that there are probably PS/2 keyboards out there that actively require this handling of CapsLock. Since it doesn't happen with the Control keys, it's not a generic bug with handling held modifier keys; instead it's specific behavior for CapsLock. People generally don't put in special oddball behavior for something unless they think they need to, and usually they've got reasons to believe this.

(For obvious reasons, if you have a PS/2 to USB converter that works and doesn't do this, I'd love to hear about it. I suspect that the ByteCC will not be the only one that behaves this way.)

PS2ToUSBInterestingIssue written at 01:22:59; Add Comment

2015-08-16

My irritation with Intel's CPU segmentation (and why it probably exists)

I'd like the CPU in my next machine to support ECC RAM, for all sorts of good reasons. I'm also more or less set on using Intel CPUs, because as far as I know they're still on top in terms of performance and power efficiency. As I've written about before, this leaves me with the problem that only some Intel CPUs and chipsets actually support ECC.

(It appears that Intel will actually give you a straightforward list of CPUs here, which is progress from the bad old days. Desktop chipsets with ECC support are listed here, and there's always the Wikipedia page.)

One way to describe what Intel is doing here is market segmentation. Want ECC? You'll pay more. Except it's not that simple, because the CPUs missing ECC support are the middle models, especially the attractive and relatively inexpensive ones in the i5 and to a lesser extent the i7 line (there are some high-end i7s with ECC support); at the low end there are a number of inexpensive i3s with ECC support, including recent ones. This is market segmentation with a twist.

What I assume is going on is that Intel is zealously protecting the server CPU and chipset market by keeping server makers from building servers that use attractive midrange desktop CPUs and chipsets. These CPUs provide quite a decent amount of performance, CPU cores, and so on, but because they're aimed at the midrange market they sell for not all that much compared to 'server' CPUs (and the bleeding edge of desktop CPUs), which means that Intel makes a lot less from your server. So Intel deliberately excludes ECC support from these models to make them less attractive on servers, where customers are more likely to insist on it and be willing to pay more. Similarly Intel keeps ECC support out of many 'desktop' chipsets so that they don't turn into de facto server chipsets.

(Intel could try to keep CPUs and chipsets out of servers by limiting how much memory they support, and to a certain extent Intel does. The problem for Intel is that desktop users long ago started demanding enough memory for many servers.)

At the same time, Intel supports ECC in lower-end CPUs and chipsets because there's also a market for low-cost and relatively low performance servers; sometimes you just want a 1U server with some CPU and RAM and disk for some undemanding purpose. This market would be just as happy to use AMD CPUs and AMD certainly has relatively low performance CPUs to offer (and I believe they have ECC; if not, they certainly could if AMD saw a market opening). So if you're happy with a two-core i3 in your server or even an Atom CPU, well, Intel will sell you one with ECC support (and for cheap).

However much I understand this market segmentation, it obviously irritates me because I fall exactly into that midrange CPU segment. I don't want the expensive (and generally hot) high end CPUs, but I also want more than just the 2-core i3 level of performance. Since Intel is not about to give up free money, this is where I wish that they had more competition in the form of AMD doing better at making attractive midrange CPUs (with ECC).

(I think that Intel having more widespread ECC support in CPUs and chipsets would lead to motherboard companies supporting it on their motherboards, but I could be wrong.)

IntelCPUSegmentationIrritation written at 02:29:23; Add Comment

