Wandering Thoughts archives


How to deprecate bits of your program

Since this appears necessary, here is how to deprecate some bit of your program in a way that makes sysadmins hate you as little as possible.

  • first, add a warning to your documentation and a 'compatibility switch' that causes your program to use the old behavior that you are deprecating. Of course, the compatibility switch currently does nothing since the old behavior is the current behavior, but now you've let people start explicitly specifying that they need the old behavior.

    If you are changing the behavior of your program (instead of just stopping supporting something), you should also add a switch that enables the new behavior.

(If you are not planning on having a compatibility switch at all, you lose. Flag days make sysadmins hate you with a burning hate, because there is nothing we love quite as much as having to update all sorts of other programs the moment we upgrade an OS.)

  • wait until this new version of your program makes it into many or all of the popular Linux distributions and any other OS that it's commonly used on. This is not just things like Ubuntu and Fedora and Debian; you really need to wait for the long term supported, slow updating distributions like Ubuntu LTS, Red Hat Enterprise (or CentOS, if you prefer to think of it that way), and so on.

    (You need to consider Ubuntu LTS a different distribution than Ubuntu for this, because users of Ubuntu LTS may well not update their systems until the next LTS release comes out.)

    I tend to think that you should wait a minimum of a year no matter what, although given the update schedule of some of these distributions you're probably going to have to anyway.

  • now you can release a version that prints warnings about the old behavior and suchlike. This version must have a way of specifically enabling the new behavior (if there is one).

  • wait a distribution update cycle again.

  • finally you can release a version that drops the old behavior, although you have to keep the now vestigial switch that enables the 'new' behavior (even though it now does nothing).

    (If you want to remove it, wait another update cycle. You saw that one coming.)

In fewer words: don't have any flag days. Always make sure that sysadmins and developers can prepare for a change ahead of time. Let them suppress warnings before warnings are printed and start using new behavior before it becomes mandatory (and then don't break the mechanism for this).
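As a minimal sketch of this compatibility-switch pattern, assuming an argparse-style command line program (the flag names --old-sort and --new-sort are invented for illustration; your program's options will differ):

```python
# Phase 1 of the deprecation dance sketched above: both switches exist.
# The old behavior is still the default, so --old-sort is currently a
# no-op, but people can start explicitly asking for what they need now.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--old-sort", action="store_true",
                    help="use the old (deprecated) sort order")
parser.add_argument("--new-sort", action="store_true",
                    help="opt in to the new sort order early")

def sort_order(args):
    if args.new_sort and not args.old_sort:
        return "new"
    # In phase 2 (a release cycle or more later), this branch would
    # print a deprecation warning before falling through to 'old';
    # in the final phase, 'old' goes away but both flags must remain.
    return "old"

args = parser.parse_args(["--new-sort"])
print(sort_order(args))  # -> new
```

The point of keeping both flags through every phase is that a command line written today (say, in a cron job or a script) keeps working unchanged across all of the releases above.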

sysadmin/HowToDeprecate written at 22:26:55

The IO scheduler improvements I saw

In the spirit of sharing actual numbers and details for things that I left a bit unclear in an earlier entry, here is more:

First, we switched to the deadline IO scheduler (from the default cfq). I did brief tests with the noop scheduler and found it basically no different from deadline for my test setup, and deadline may have some advantages for us with more realistic IO loads.
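For reference, the per-device IO scheduler lives in sysfs under /sys/block/<dev>/queue/scheduler, where the kernel marks the active one in brackets; this small sketch shows the mechanism (the device name 'sda' is an assumption, and actually changing the scheduler requires root):

```python
# Inspect and switch a block device's IO scheduler through sysfs.
from pathlib import Path

def active_scheduler(text):
    # The kernel marks the active scheduler with brackets,
    # e.g. "noop [deadline] cfq\n".
    return text[text.index("[") + 1:text.index("]")]

def current_scheduler(dev="sda"):
    path = Path(f"/sys/block/{dev}/queue/scheduler")
    return active_scheduler(path.read_text())

def set_scheduler(dev, name):
    # Writing a scheduler's name selects it for that device (needs root).
    Path(f"/sys/block/{dev}/queue/scheduler").write_text(name)

print(active_scheduler("noop [deadline] cfq"))  # -> deadline
```

The equivalent shell one-liner is 'echo deadline > /sys/block/sda/queue/scheduler', which is how we actually did it.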

My IO tests were sequential read and write IO, performed directly on a test fileserver, which uses a single iSCSI backend. On a ZFS pool that effectively is a stripe of two mirror pairs, switching the backend to deadline increased a single sequential read from about 175 MBytes/sec to about 200 MBytes/sec. Two sequential reads of separate files were more dramatic; aggregate performance jumped by somewhere around 50 MBytes/sec. In both cases, this was close to saturating both gigabit connections between the iSCSI backend and the fileserver.

(Since all of these data rates are well over the 115 MBytes/sec or so that NFS clients can get out of our Solaris fileservers, this may not make a significant difference in client performance.)

I measured no speed increase for a single sequential writer, but it was already more or less going at what I believe is the raw disk write speed. (According to the IET mailing list, other people have seen much more dramatic increases in write speeds.)

I didn't try to do systematic tests; for our purposes, it was enough that deadline IO scheduling had a visible performance effect and didn't seem to have any downsides. I didn't need to know the specific contours of all of the improvements we might possibly get before I could recommend deployment on the production machines.

linux/IOSchedulerImprovements written at 01:27:53
