Wandering Thoughts archives

2012-08-09

Learning something from not testing jQuery 1.8's beta releases

I was reading the jQuery 1.8 release announcement when I ran across this gem:

We don't expect to get any bug reports on this release, since there have been several betas and a release candidate that everyone has had plenty of opportunities to thoroughly test. Ha ha, that joke never gets old. We know that far too many of you wait for a final release before even trying it with your code. [...]

Since we use jQuery a bit and I'm one of those 'didn't test even the release candidate' people, I was immediately seized by an urge to justify my inaction. Then I had a realization.

First, the justification. We're not using jQuery in any demanding way, or in a situation where we'll notice the improvements in 1.8. Thus we'd be testing a beta or release candidate purely to validate that it's compatible with our code. Unfortunately, testing beta code doesn't save us from having to re-test with the actual release; we can't assume that the changes between a beta and the release are harmless. Nor does reporting bugs against the beta really help us, since we're not trying to upgrade to 1.8 as fast as possible. This makes testing betas and even release candidates basically a time sink as far as we're concerned.

(If you actively want or need to use the new release then reporting bugs early (against the betas or even the development version) increases the chances that the bugs will be gone in the released version and you can deploy it the moment it passes your tests.)

All of this sounds nice and some of you may be nodding your heads along with me. But as I was planning this entry out I had the realization that what this really reveals is that we have a testing problem. In an environment with good automated tests it should take almost no time and effort to drop a new version of jQuery into a development environment and then run our tests against it to make sure that everything still works. This would make testing betas, release candidates, or even current development snapshots something that could be done casually, more or less at the snap of our fingers. That it isn't this trivial and that I'm talking seriously about the time cost of testing a new version of jQuery is a bad sign.
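
(As a sketch of what I mean, with an entirely hypothetical run-ui-tests driver standing in for the automated tests we don't actually have, it would look something like this:)

# drop the beta or release candidate jQuery into a development copy of the app
cp ~/downloads/jquery-1.8rc1.min.js devapp/static/js/jquery.min.js
# then have the (hypothetical) test driver exercise the app in a browser
./run-ui-tests devapp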

What's really going on is that I haven't built any sort of automated testing for the browser view of the web app (well, actually, I haven't built tests for any of it, but especially the browser view of things). This means that testing a new version of jQuery requires going through a bunch of interactions and corner case tests in at least one browser, by hand. I effectively did this once, when I was coding all of these JS-using features, but I did it progressively (bit by bit, feature by feature) instead of all at once. And of course I was making sure that my code worked instead of testing that a new version of jQuery is as bug free and compatible as it's expected to be; the former is far more motivating than the latter (which is basically drudge work).

I'm sure there are ways of doing automated testing of client-side JavaScript (including jQuery), but two things have kept me away from exploring it. First, all through the development of this web app I've been focused on getting the app running instead of building infrastructure like tests; among other things, I was learning as I went, and just learning how to do stuff is hard enough without also trying to learn how to build automated tests for it. Second, the entire thought of automated testing of things involving browsers gives me hives, since I'm pretty sure it's going to be complex, require a bunch of infrastructure, and involve a pile of hacks, especially on Unix (I can't see how you can get away from driving a real browser by some form of remote control, and I can't see how that can be done at all gracefully).
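
(To give the flavour of what I imagine this involves on Unix: the usual shape seems to be a scratch X server plus a real browser that your test code then drives over some remote-control protocol, Selenium-style. The following is purely an illustration of the moving parts, not something we run:)

# start a throwaway X server and a real browser inside it
Xvfb :99 -screen 0 1280x1024x24 &
DISPLAY=:99 firefox &
# ...then something like Selenium/WebDriver has to script that browser from
# your test code, and yet more glue has to start and stop all of this cleanly.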

This is a weakness. I'm not sure it's enough of a weakness to be worth spending the time to fix it, though.

(If I was planning to do much more client-side JS programming or if this web app was going to undergo significantly more development, things might be different. But as it is I don't see much call for either in the moderate future, and there's always a lot of claims on a sysadmin's time.)

programming/OnNotTestingBetas written at 23:29:34

Ubuntu 12.04 and symbolic links in world-writeable sticky-bitted directories

We're in the process of upgrading from Ubuntu 10.04 to Ubuntu 12.04, and today we ran into a serious surprise. If you have a symlink in a world-writeable directory that has the sticky bit set (so that only the owner of a file can delete it, as with /tmp), only the owner of the symlink can dereference it. Everyone else will get EACCES on any operation that attempts to do so, including things like attempting to stat() the symlink. If this is happening to you, your Ubuntu kernel will log messages like:

non-matching-uid symlink following attempted in sticky world-writable directory by cat (fsuid 915 != 2315)
yama_inode_follow_link: 16 callbacks suppressed

(If you are testing this, note that the stat program won't trip over it because stat uses the lstat64() system call to look at the symlink itself. cat will fail, as will things like 'test -f <symlink>'.)
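
(As a quick illustration, assuming a hypothetical /tmp/otherlink that is a symlink created by someone else:)

$ stat /tmp/otherlink        # succeeds; stat(1) only lstat64()s the link itself
$ cat /tmp/otherlink         # tries to follow the link and gets EACCES
cat: /tmp/otherlink: Permission denied
$ test -f /tmp/otherlink; echo $?
1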

This change has alleged security benefits (cf). It also caused an important part of our mail environment to explode, and it is not compatible with historical behavior (both for Linux and for Unix in general). Fortunately you can turn it off; in the Ubuntu 12.04 kernel you need to set /proc/sys/kernel/yama/protected_sticky_symlinks to 0 (instead of the default 1).

(This is also the sysctl kernel.yama.protected_sticky_symlinks.)
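
(Concretely, turning it off looks something like the following; the sysctl.d file name is just an example, use whatever your local convention is:)

sysctl -w kernel.yama.protected_sticky_symlinks=0
# or, equivalently:
echo 0 >/proc/sys/kernel/yama/protected_sticky_symlinks
# and to make it persist across reboots, something like:
echo 'kernel.yama.protected_sticky_symlinks = 0' >>/etc/sysctl.d/60-local.conf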

The history of this is somewhat tangled. The Linux kernel has a generic idea of 'security modules' (called LSMs) that are used to implement additional security policies over top of the standard Unix permissions; SELinux is the best known LSM, but there are others. In Ubuntu 12.04, Ubuntu has included their own 'Yama' LSM and used it to implement this restriction; a version of Yama that has this was proposed by Ubuntu's Kees Cook as far back as 2010 (also). The standard kernel code also has a (or the) Yama LSM, added at the end of 2011, but it lacks this restriction (and seems to never have had it).

(We only work with Ubuntu LTS releases, so I don't know if Yama and its restrictions appeared in any non-LTS releases between 10.04 (which definitely doesn't have it) and 12.04, or if it's genuinely new with 12.04.)

However, Kees Cook also proposed this restriction as a general patch at the end of 2011. A version of this finally made it into the kernel on July 25th (after 3.5 was released but before 3.6, so this restriction will be in 3.6 when it's released). The actual restrictions are, I believe, only slightly different (the official kernel code allows the symlink to be followed if it's owned by the owner of the directory), but the important thing is that the sysctl to control it has changed. In the official kernel this is controlled by /proc/sys/fs/protected_symlinks (aka fs.protected_symlinks as a sysctl).
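
(So on a 3.6 or later kernel, turning the equivalent restriction off would be:)

sysctl -w fs.protected_symlinks=0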

(Credit where credit is due: a bunch of these links were found by one of my co-workers while I was busy doing less productive troubleshooting.)

I assume that Ubuntu will drop this restriction from their version of Yama and rely on the standard kernel code when they eventually do an Ubuntu release with a 3.6 or later kernel. At that point, anyone who is turning it off will need to change their sysctl. Hopefully we will remember all of this in a couple of years when the next Ubuntu LTS release happens and this becomes relevant to us again.

I am a grumpy Unix old-timer and a sysadmin, so my opinion of this change is very low. I found it especially laughable that Linux kernel people could confidently assert that there was no (legitimate) application that would be affected by this change. Congratulations, you blew up part of our mail system; I guess it doesn't exist.

(Our mail system works in such a way that I believe there are no security risks for us here, but justifying that will take an entire entry.)

Sidebar: another restriction, on hardlinking

Both the Ubuntu Yama LSM and the new standard kernel code for this stuff include a second restriction, this time on what can be hardlinked. In the standard kernel, this is controlled by fs.protected_hardlinks and blocks hardlinks to devices, setuid files, executable setgid files, and files that the UID making the link cannot read or write to (except that the owner of a file is always allowed to make hardlinks to it). Note that this blocks hardlinks to other people's world-readable files if you can't write to them.

The Ubuntu Yama LSM has a similar sysctl in its /proc directory and I suspect that its restrictions are much the same. On Ubuntu 12.04, if this restriction is triggered the kernel will log the message:

non-accessible hardlink creation was attempted by: ln (fsuid 2315)

Since I've just discovered this restriction, I don't know if we'll wind up turning it off on our 12.04 machines.
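
(For reference, if we do: on a mainline 3.6+ kernel this is the fs.protected_hardlinks sysctl mentioned above, while on 12.04 I'd go look at what Yama actually exposes rather than guess at the name:)

# mainline 3.6 and later kernels
sysctl -w fs.protected_hardlinks=0
# Ubuntu 12.04: see what knobs its Yama LSM provides
ls /proc/sys/kernel/yama/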

linux/Ubuntu1204Symlinks written at 01:27:12

