Wandering Thoughts

2015-07-29

A cynical view on needing SSDs in all your machines in the future

Let's start with my tweets:

@thatcks: Dear Firefox Nightly: doing ten+ minutes of high disk IO on startup before you even start showing me my restored session is absurd.
@thatcks: Clearly the day is coming when using a SSD is going be not merely useful but essential to get modern programs to perform decently.

I didn't say this just because programs are going to want to do more and more disk IO over time. Instead, I said it because of a traditional developer behavior, namely that developers mostly assess how fast their work is based on how it runs on their machines and developer machines are generally very beefy ones. At this point it's extremely likely that most developer machines have decently fast SSDs (and for good reason), which means that it's actually going to be hard for developers to notice they've written code that basically assumes a SSD and only runs acceptably on it (either in general or when some moderate corner case triggers).

SSDs exacerbate this problem by being not just fast in general but especially hugely faster at random IO than traditional hard drives. If you accidentally write something that is random IO heavy (or becomes so under some circumstances, perhaps as you scale the size of the database up) but only run it on a SSD based system, you might not really notice. Run that same thing on a HD based one (with a large database) and it will grind to a halt for ten minutes.
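If you want a concrete feel for the size of this gap on a particular machine, a disk benchmarking tool like fio will show it directly. Here's a minimal sketch (the test file name and sizes are arbitrary); run both commands against a HD and against a SSD and compare the reported rates:

  # sequential reads in big blocks, using direct IO to dodge the page cache
  # (assuming your filesystem supports it)
  fio --name=seq --filename=testfile --size=4g --bs=128k --rw=read --direct=1
  # random reads in small blocks over the same file
  fio --name=rand --filename=testfile --size=4g --bs=4k --rw=randread --direct=1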

(Today I don't think we have profiling tools for disk IO the way we do for CPU usage by code, so even if a developer wanted to check for this their only option is to find a machine with a HD and try things out. Perhaps part of the solution will be an 'act like a HD' emulation layer for software testing that does things like slowing down random IO. Of course it's much more likely that people will just say 'buy SSDs and stop bugging us', especially in a few years.)
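One crude stand-in for such an emulation layer on Linux is the device mapper 'delay' target, which adds a fixed latency to every request against a test device. It slows all IO rather than specifically penalizing random IO the way a real HD does, so it's only a rough approximation, and /dev/sdX here is a scratch device you can afford to experiment with:

  SECTORS=$(blockdev --getsz /dev/sdX)
  # add roughly 8 ms to every IO request against the device
  dmsetup create slowdisk --table "0 $SECTORS delay /dev/sdX 0 8"
  # ... run your IO tests against /dev/mapper/slowdisk (they hit /dev/sdX underneath) ...
  dmsetup remove slowdisk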

tech/CynicalSSDInevitability written at 01:20:14

2015-07-28

Why I still report bugs

I've written plenty of entries here over the years about why people don't report bugs (myself included). Yet I still report bugs against some projects.

The quiet reality of bug reports that I haven't mentioned so far is that when one of my bug reports goes well, it's an amazingly good feeling. When I find a bug, isolate it, maybe peer into the code to identify the cause, file a report, and have the project authors say 'that's a nice find and a good analysis', that's a real rush. It's not even so much that I may get a fix for my issue; it's very much also that I have reached into the depths of a mystery and come out with the right answer. It's even better when it helps other people (both in the future and sometimes right away). This is bug reports as the culmination of debugging, and successful debugging itself is a rush of puzzle solving and a victory over a hard problem.

(It's equally a good feeling, although a somewhat different one, when I file a carefully reasoned bug report in favour of something that the software doesn't currently do and I wind up bringing the project people around to my views.)

More than anything else, this is the feeling that keeps me filing bug reports with hospitable projects. It is the feeling that makes bug reports into something other than grinding work and that makes me proud to have written a good report.

I'm often down on bug reporting because I don't have this experience with bug reporting very often. But neither I nor anyone else should forget that bug reporting can feel good too. It's probably not a small part of why we're all willing to keep making those reports, and I don't want to lose sight of it.

(It's easier to remember the negative bug reporting experiences than the powerfully positive ones, partly because humans have very good memories for the negative.)

(As you might guess, this entry was sparked by the experience of recently filing a couple of good bug reports.)

tech/WhyIStillReportBugs written at 00:18:36

2015-07-27

Spammers mine everything, Github edition

It's not news that spammers will trawl everything they can easily get their hands on for anything that looks like email addresses. But every so often I get another illustration of this effect and it strikes me as interesting. This time around it's with the email address I use for Github.

This email address is of course an expendable address, since it's exposed in git commits that I push to Github. It's also exposed to Github itself, but I don't think Github leaks it (at least not trivially). Certainly the address remained untouched by spam for years. Then back in late May the address appeared in the plain text of a commit message. Last week, the spam started showing up.

(The actual spam was one offer from an email spam service provider, one student loan repayment scam, and one relatively incomprehensible one. All came from Chinese IPs; the second and the third came from the same /24 subnet, and the first one came from an SBL CSS-listed IP.)

I find the couple of months time delay interesting but probably not too surprising. It's also probably not surprising that spammers mine Github in some way; there's a lot of email addresses exposed there. I'd like to say that spammers probably only mine web pages on Github instead of looking at Git repositories themselves, but that may not be the case; although I'm on Github, my repos are nowhere near as visible as the project where this address appeared.
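Mining the repositories themselves certainly doesn't take much effort. As a sketch, a clone plus one command dumps every author and committer address in a repo's entire history:

  git log --all --format='%ae%n%ce' | sort -u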

Still, I found the whole thing kind of interesting (and kind of irritating, too, because now I will probably have to enact increasingly strong defenses on this address until I abandon it).

spam/SpammersMineEverything written at 01:54:27

2015-07-26

Why I increasingly think we're unlikely to ever use Docker

Apart from my general qualms about containers in our environment, I have increasingly wound up thinking that Docker itself is not a good fit for our environment even if we want to use some form of service containerization for various reasons. The problem is access to data.

Our current solution for services gaining access to service data is NFS filesystems from our central fileservers. Master DNS zone files, web pages and web apps for our administrative web server, core data files used by our mail gateway, you name it and it lives in a NFS filesystem. As far as I can tell from reading about Docker, this is rather against the Docker way. Instead, Docker seems to want you to wrap your persistent data up inside Docker data volumes.

It's my opinion that Docker data volumes would be a terrible option for us. They'd add an extra level of indirection for our data in one way or another and it's not clear how they allow access from different Docker hosts (if they do so at all). Making changes and so on would get more difficult, and we make changes (sometimes automated ones) on a frequent basis. In theory maybe we could use (or abuse) Docker features to either import 'local' filesystems (that are actually NFS mounts) into the containers or have the containers do NFS mounts inside themselves. In practice this clearly seems to be swimming upstream against the Docker current.
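For what it's worth, the 'import a local filesystem that is actually a NFS mount' option looks something like the following sketch. The paths and image name are made up, and it assumes the Docker host has already NFS-mounted the filesystem in the usual way:

  # on the Docker host
  mount -t nfs fileserver:/export/dnszones /srv/dnszones
  # then bind the host directory into the container
  docker run -v /srv/dnszones:/srv/dnszones:ro our-dns-image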

It's my strong view that much software has ways that it expects to be used and ways that it doesn't expect to be used. Even if you can make software work in a situation it doesn't expect, it's generally not a good idea to do so; you're almost always buying yourself a whole bunch of future pain and heartburn if you go against what the software wants. The reality is that the software is not a good fit for your situation.

So: Docker does not appear to be a good fit for how we operate and as a result I don't think it's a good choice for us.

(In general the containerization stories I read seem to use some sort of core object store or key store as their source of truth and storage. Pushing things to Amazon's S3 is popular, for example. I'm not sure I've seen a 'containerization in a world of NFS' story.)

sysadmin/DockerVersusUs written at 02:02:16

2015-07-25

Everything that does TLS should log the SSL parameters used

I'll start with my tweets:

Every server and client that makes SSL connections should have an option to log the protocols and ciphers that actually get used.
Having logs of SSL protocols/ciphers in use by your actual users is vital to answering the question of 'can we safely disable <X> now?'

As we've seen repeatedly, every so often there are problems uncovered with TLS ciphers, key exchange protocols, and related things. That's certainly been the pattern in the past and a realistic sysadmin has to conclude that it's going to happen again in the future too. When the next one of these appears, one of the things you often want to do is disable what is now a weak part of TLS; for instance, these days you really want to get away from using RC4 based ciphers. But unless you have a very homogeneous environment, there's always an important question mark about whether any of your users is unlucky enough to be using something that (only) supports the weak part of TLS that you're about to turn off.

That's a large part of why logging TLS key exchange and cipher choices is important. If you have such logs, you can say more or less right away 'no one seems to actually need RC4' or 'no one needs SSLv3' or the like, and you can turn it off with confidence. You can also proactively assess your usage of TLS elements that are considered deprecated or not the best ideas but aren't actually outright vulnerable (yet). If usage of problematic elements is low or nonexistent, you're in a position to preemptively disable them.
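What this looks like in practice depends on the server. As one sketch, an Apache using the stock mod_ssl ssl_request_log format ('%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x ...') already records exactly this, and summarizing it is a one-liner:

  # count the protocol and cipher combinations actually negotiated
  awk '{print $4, $5}' ssl_request_log | sort | uniq -c | sort -rn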

The other part of logging TLS connection information is that it lets you assess what level of security your users are actually negotiating and what the popular options are. For example, could you tell right now how many of your users are protected by TLS forward secrecy? How widespread is support for and use of elliptic curve cryptography as opposed to older key exchange protocols? And so on and so forth.

(This can also let you assess something about the age of client software and its TLS code, since only new software is likely to be using the latest ciphers and so on. And ancient cipher choices are a good sign of old client software.)

Client logging for things like outgoing SMTP mail delivery with TLS is also important because it tells you something about how picky you can be. If you drop usage of RC4, for example, are you going to be unable to negotiate TLS with some mail servers you deliver mail to regularly, or will you be basically unaffected? How many MTAs do you try to deliver to that have too-small Diffie-Hellman parameters? There are tradeoffs here, but again having information about actual usage is important for making sensible decisions.
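As a sketch of the client side, a Postfix machine with 'smtp_tls_loglevel = 1' set will log the negotiated protocol and cipher for every outgoing TLS delivery, which you can then summarize from the mail log (the exact log wording and location vary between MTAs and versions):

  grep 'TLS connection established to' /var/log/maillog |
      sed 's/.*established to [^ ]*: //' | sort | uniq -c | sort -rn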

sysadmin/SSLLogConnectionInfo written at 02:20:53

2015-07-24

Fedora 22's problem with my scroll wheel

Shortly after I upgraded to Fedora 22, I noticed that my scroll wheel was, for lack of a better description, 'stuttering' in some applications. I'd roll it in one direction and instead of scrolling smoothly, what the application was displaying would jerk around all over, both up and down. It didn't happen all of the time and fortunately it didn't happen in any of my main applications, but it happened often enough to be frustrating. As far as I can tell, this mostly happened in native Fedora GTK3 based applications. I saw it clearly in Evince and the stock Fedora Firefox that I sometimes use, but I think I saw it in a few other applications as well.

I don't know exactly what causes this, but I have managed to find a workaround. Running affected programs with the magic environment variable GDK_CORE_DEVICE_EVENTS set to '1' has made the problem go away (for me, so far). There are some Fedora and other bugs that are suggestive of this, such as Fedora bug #1226465, and that bug leads to an excellent KDE explanation of that specific GTK3 behavior. Since this Fedora bug is about scroll events going missing instead of scrolling things back and forth, it may not be exactly my issue.

(My issue is also definitely not fixed in the GTK3 update that supposedly fixes it for other people. On the other hand, updates for KDE and lightdm now appear to be setting GDK_CORE_DEVICE_EVENTS, so who knows what's going on here.)

Since this environment variable suppresses the bad behavior with no visible side effects I've seen, my current solution is to set it for my entire session. I haven't bothered reporting a Fedora bug for this so far because I use a very variant window manager and that seems likely to be a recipe for more argument than anything else. Perhaps I am too cynical.
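Concretely, my workaround amounts to something like this; where you set session-wide environment variables depends on how you start X, and I do it in my own session startup script:

  # session-wide, before the window manager and programs start:
  export GDK_CORE_DEVICE_EVENTS=1

  # or just for a single affected program:
  GDK_CORE_DEVICE_EVENTS=1 evince some.pdf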

(The issue is very reproducible for me; all I have to do is start Evince with that environment variable scrubbed out and my scroll wheel makes things jump around nicely again.)

Sidebar: Additional links

There is this Firefox bug, especially comment 9, and this X server patch from 2013. You'd think a patch from 2013 would be incorporated by now, but who knows.

linux/Fedora22ScrollWheelProblem written at 00:53:59

2015-07-22

A modest little change I'd like to see in bug reporting systems

It is my opinion that sometimes little elements of wording and culture matter. One of those little elements of culture that has been nagging at me lately is the specifics of how Bugzilla and probably other bug reporting systems deal with duplicate bug reports; they are set to 'closed as a duplicate of <other bug>'.

On the one hand, this is perfectly accurate. On the other hand, almost all of the time one of my bug reports is closed out this way I wind up feeling like I shouldn't have filed it at all, because I should have been sufficiently industrious to find the original bug report. I suspect that I am not alone in feeling this way in this situation. I further suspect that feeling this way serves as a quiet little disincentive to file bug reports; after all, it might be yet another duplicate.

Now, some projects certainly seem to not want bug reports in the first place. And probably some projects get enough duplicate bug reports that they want to apply pressure against them, especially against people who do it frequently (although I suspect that this isn't entirely going to work). But I suspect that this is not a globally desirable thing.

As a result, what I'd like to see bug reporting systems try out is simply renaming this status to the more neutral 'merged with <other bug>'.

Would it make any real difference? I honestly don't know; little cultural hacks are hard to predict. But I don't think it would hurt and who knows, something interesting could happen.

(In my view, 'closed as duplicate' is the kind of thing that makes perfect sense when your bug reporting system is an internal one fed by QA people who are paid to do this sort of stuff efficiently and accurately. In that situation, duplicate bugs often mean that someone has kind of fallen down on the job. But this is not the state of affairs with public bug reporting systems, where you are lucky if people even bother to jump through your hoops to file at all.)

tech/BugReportsDuplicateStatus written at 23:48:16

Some thoughts on log rolling with date extensions

For a long time everyone renamed old logs in the same way; the most recent log got a .0 on the end, the next most recent got a .1 on the end, and so on. About the only confusion between systems was whether they started from .0 or from .1, and whether or not your logs got gzip'd. These days, the Red Hat and Fedora derived Linuxes have switched to logrotate's dateext setting, where the extension that old logs get is date based, generally in the format -YYYYMMDD. I'm not entirely sure how I feel about this so far and not just because it changes what I'm used to.
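For concreteness, the switch is only a directive or two in logrotate terms. A minimal sketch of a dateext configuration, using the allmessages log that comes up below as the example:

  /var/log/allmessages {
      daily
      rotate 30
      dateext
      compress
      missingok
  }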

On the good side, this means that a rolled log has the same file name for as long as it exists. If I look at allmessages-20150718 today, I know that I can come back tomorrow or next week and find it with the same name; I don't have to remember that what was allmessages.3 today is allmessages.4 tomorrow (or next week). It also means that logs sort lexically in time order, which is not the case with numbered logs; .10 is lexically between .1 and .2, but is nowhere near them in time.

(The lexical order is also forward time order instead of reverse time order, which means that if you grep everything you get it in forward time order instead of things jumping around.)

On the down side, rolled logs having a date extension means that I can no longer look at the most recently rolled log just by using <name>.0 (or .1); instead I need to look at what log files there are (this is especially the case with logs that are rolled weekly). It also means that I lose the idiom of grep'ing or whatever through <name>.[0-6] to look through the last week's worth of logs; again, I need to look at the actual filenames or at least resort to something like 'grep ... $(/bin/ls -1t <name>.* | sed 7q)' (and I can do that with any log naming scheme).
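With GNU date you can reconstruct the 'last week' idiom from the dates themselves rather than leaning on ls -t, although it's wordier and assumes daily rolling (and that the recent logs aren't compressed yet):

  grep ... $(for d in $(seq 0 6); do date -d "$d days ago" +allmessages-%Y%m%d; done)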

I'm sure that Red Hat had its reasons to change the naming scheme around. It certainly makes a number of things consistent and obvious. But on the whole I'm not sure whether I actually like it or whether I'd rather have things the old-fashioned way that Ubuntu and others still follow.

(I don't care enough about this to change my Fedora machines or our CentOS 6 and 7 servers.)

linux/LogrollingDateExtThoughts written at 01:21:45

2015-07-20

My brush with the increasing pervasiveness of smartphone GPS mapping

One of the things I do with my time is go bicycling with a local bike club. When you go on group bike rides, one of the things you generally want to have is directions for where the ride is going (if only to reassure yourself if you get separated from the group). When I started with the club back in 2006, these 'cue sheets' for rides were entirely a paper thing and entirely offline; you turned up at the start of the ride and the ride leader handed out a bunch of copies to anyone who wanted or needed one.

(By 2006 I believe that people were mostly creating new cue sheets in word processors and other tools, but some old ones existed only in scanned form that had been passed down through the years.)

Time rolled on and smartphones with GPS appeared. Various early adopters around the club started using smartphone apps to record their rides. People put these ride recordings online and other people started learning from them, spotting interesting new ways to get places and so on. Other people started taking these GPS traces and loading them on their own smartphones (and sometimes GPS devices) as informal guides to the route to supplement the official cue sheets. As time went on, some people started augmenting the normal online ride descriptions for upcoming rides with somewhat informal links to online GPS-based maps of the ride route.

Last year the club started a big push to put copies of the cue sheets online, and alongside the cue sheets it started digitizing many of the routes into GPS route files. For some of the rides, the GPS route files started being the primary authority for the ride's route; the printed cue sheet that the ride leader handed out at the start was generated from them. Finally, this year the club is really pushing people to print their own cue sheets instead of having the ride leader give them out at the start. It's not really hard to see why; even last year fewer and fewer people were asking for copies of the cue sheet at the start of rides and more and more people were saying 'I'm good, I've got the GPS information loaded into my smartphone'.

(This year, on the group rides I've led, I could hardly give out more than a handful of cue sheets. And usually not because people had already printed their own.)

It doesn't take much extrapolation to see where this is going. The club is still officially using cue sheets for now, but it's definitely alongside the GPS route files and more and more cue sheets are automatically generated from the GPS route files. It wouldn't surprise me if, five years from now, having a smartphone with good GPS and a route-following app was basically necessary to go on our rides. There are various advantages to going to only GPS route files, and smartphones are clearly becoming increasingly pervasive. Just like the club assumes that you have a bike and a helmet and a few other things, we'll assume you have a reasonably capable smartphone too.

(By then it's unlikely to cost more than, say, your helmet.)

In one way there's nothing particularly surprising about this shift; smartphones with GPS have been taking over from manual maps in many areas. But this is a shift that I've seen happen in front of me and that makes it personally novel. Future shock is made real by being a personal experience.

(It also affects me in that I don't currently have a smartphone, so I'm looking at a future where I probably need to get one in order to really keep up with the club.)

tech/SmartphoneGPSSpreadForMe written at 23:04:54

The OmniOS kernel can hold major amounts of unused memory for a long time

The Illumos kernel (which means the kernels of OmniOS, SmartOS, and so on) has an oversight which can cause it to hold down a potentially large amount of unused memory in unproductive ways. We discovered this on our most heavily used NFS fileserver; on a server with 128 GB of RAM, over 70 GB of RAM was being held down by the kernel and left idle for an extended time. As you can imagine, this didn't help the ZFS ARC size, which got choked down to 20 GB or so.

The problem is in kmem, the kernel's general memory allocator. Kmem is what is called a slab allocator, which means that it divides kernel memory up into a bunch of arenas for different-sized objects. Like basically all sophisticated allocators, kmem works hard to optimize allocation and deallocation; for instance, it keeps a per-CPU cache of recently freed objects so that in the likely case that you need an object again you can just grab it in a basically lock free way. As part of these optimizations, kmem keeps a cache of fully empty slabs (ones that have no objects allocated out of them) that have been freed up; this means that it can avoid an expensive trip to the kernel page allocator when you next want some more objects from a particular arena.

The problem is that kmem does not bound the size of this cache of fully empty slabs and does not age slabs out of it. As a result, a temporary usage surge can leave a particular arena with a lot of unused objects and slab memory, especially if the objects in question are large. In our case, this happened to the arena for 'generic 128 KB allocations'; we spent a long time with around six in use but 613,033 allocated. Presumably at one time we needed that ~74 GB of 128 KB buffers (probably because of a NFS overload situation), but we certainly didn't any more.

Kmem can be made to free up these unused slabs, but in order to do so you must put the system under strong memory pressure by abruptly allocating enough memory to run the system basically out of what it thinks of as 'free memory'. In our experiments it was important to do this in one fast action; otherwise the system frees up memory through less abrupt methods and doesn't resort to what it considers extreme measures. The simplest way to do this is with Python; look at what 'top' reports as 'free mem' and then use up a bit more than that in one go.
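A minimal sketch of the Python version: if top reports about 80 GB of 'free mem', grab a bit more than that in a single allocation (the size here is just an example; adjust it to a bit more than your own free memory figure):

  # Python 2; allocate and touch ~85 GB in one go, then pause so you can
  # look at the system's state before the memory is given back
  python -c 'x = "x" * (85 * 1024 * 1024 * 1024); raw_input("allocated; hit return to exit")'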

(You can verify that the full freeing has triggered by using dtrace to look for calls to kmem_reap.)
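In dtrace terms that check can be as simple as watching for the function to fire while the allocation runs:

  dtrace -n 'fbt::kmem_reap:entry { printf("kmem_reap at %Y", walltimestamp); }'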

Unfortunately triggering this panic freeing of memory will likely cause your system to stall significantly. When we did it on our production fileserver we saw NFS stall for a significant amount of time, ssh sessions stop for somewhat less time, and for a while the system wasn't even responding to pings. If you have this problem and can't tolerate your system going away for five or ten minutes until things fully recover, well, you're going to need a downtime (and at that point you might as well reboot the machine).

The simple sign that your system may need this is a persistently high 'Kernel' memory use in mdb -k's ::memstat but a low ZFS ARC size. We saw 95% or so Kernel but ARC sizes on the order of 20 GB and of course the Kernel amount never shrunk. The more complex sign is to look for caches in mdb's ::kmastat that have outsized space usage and a drastic mismatch between buffers in use and buffers allocated.
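Both checks are safe to run on a live system (though ::kmastat can take a little while on a big machine):

  echo ::memstat | mdb -k
  # then eyeball the 'buf in use' versus 'buf total' columns of the big caches
  echo ::kmastat | mdb -k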

(Note that arenas for small buffers may be suffering from fragmentation instead of or in addition to this.)

I think that this isn't likely to happen on systems where you have user level programs with fluctuating overall memory usages because sooner or later just the natural fluctuation of user level programs is likely to push the system to do this panic freeing of memory. And if you use a lot of memory at the user level, well, that limits how much memory the kernel can ever use, so you're probably less likely to get into this situation. Our NFS fileservers are kind of a worst case for this because they have almost nothing running at the user level and certainly nothing that abruptly wants several gigabytes of memory at once.

People who want more technical detail on this can see the illumos developer mailing list thread. Now that it's been raised to the developers, this issue is likely to be fixed at some point but I don't know when. Changes to kernel memory allocators rarely happen very fast.

solaris/KernelMemoryHolding written at 01:55:20
