2015-06-09
How I use virtual screens in my desktop environment
Like many people, I use a (Unix) desktop environment that supports what gets called 'virtual screens' or 'virtual desktops' (my window manager actually has both). One common approach for using them is to dedicate particular virtual screens to particular programs or groups of programs based on your purpose (this goes especially well with full screen apps). You might have one virtual screen more or less devoted to your mail client, one to your editor or IDE, one to status monitoring of your systems, and so on.
(If you have multiple displays or a big enough display that filling all of it with eg your mail client is absurd, you might reserve the remaining space on your mail desktop as 'overflow' space for browser windows or whatever that you need in the course of dealing with your mail.)
This is not how I've wound up using my virtual screens, at least most of the time. Instead (as you can tell from the nearly empty additional virtual screens on my desktop) I use them primarily as overflow space. Almost all of the time I'm using what I consider my primary virtual screen (the top left one) and everything goes in it. However, some of the time I'm trying to do too many space-consuming things at once, or I just want a cleaner place to do something (often something big), one without all of the usual clutter on my primary virtual screen. That's when I use my additional virtual screens; I switch to a new one, possibly drag some existing windows over from the primary screen, and go for it.
(I'm especially likely to do this if what I want to do is both going to last for a while and take up a bunch of screen space with its window or windows.)
My virtual screens are arranged in a 3 wide by 2 deep grid. Since the easiest screens to use are the ones right next to the top left primary screen, the virtual screens immediately to the right of and below it are the usual overflow or temporary work targets. However, space-consuming long-lived stuff tends to get put one screen further away (the diagonal screen, one down and one over), because this way I keep the more convenient closer screens free for other stuff.
(When we were chasing our OmniOS NFS overload problem, I wound up carpeting this virtual screen with xterms that were constantly running vmstat and so on. I wasn't paying any attention to them until the server locked up, but my use of an xterm feature meant that I couldn't just iconify them. Anyways, leaving them open made them easier to keep track of and tell apart, partly because I'm big on using spatial organization for things.)
I've found it quite handy to have a mouse binding that flips between the current virtual screen and the previous one. That way I can rapidly flip between my primary screen and whatever other virtual screen I'm doing things on. In practice this makes it a lot more convenient to use another virtual screen, because I wind up flipping back to the primary screen for a lot of stuff.
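As a purely hypothetical illustration of such a binding (the entry doesn't actually say which window manager is in use; this assumes an FVWM-style configuration syntax, where virtual screens are pages), it might look something like:

```
# Hypothetical FVWM-style config: clicking mouse button 3 on the
# root window (R), with any modifiers (A), jumps back to the
# previously visited page, so repeated clicks flip back and forth
# between two virtual screens.
Mouse 3  R  A  GotoPage prev
```

Other window managers generally offer an equivalent "go to previous desktop" action that can be bound the same way.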
(I often flip back even for stuff that I could do on the new virtual screen just because I 'know' that eg I read mail on the primary screen. I justify this as an anti-distraction measure in that the non-primary screen should not be used for things unrelated to its purpose.)
I have a small number of things that are permanently around but that I don't interact with or look at regularly. These get exiled off to the very furthest away virtual screen. Typical occupants of this screen are iconified 'master' Firefox and Chrome windows, used purely to keep the browsers running all the time so that I have fast access to new Firefox and Chrome windows.
Sidebar: Me and 'dedicated purpose' virtual screens
Although I've never tried to work in the 'dedicated purpose' style of virtual screen usage, I'm pretty sure that I would rapidly get angry at constantly flipping back and forth between eg the mail virtual screen and my working virtual screen. The reality of my computer usage is that I very rarely concentrate on a single thing for a significant time; it's much more common for me to be moving back and forth between any number of activities and fidgets.
If I was a developer I could see this changing, and in fact that would probably be a good thing. Then it would be an advantage that I had to go through the effort to change to the mail screen to check my email, because I'd be that much less likely to interrupt my programming to do so.
2015-06-06
The security danger of exploitable bugs in file format libraries
Lately there have been a raft of security bugs of the form 'the standard open source library for dealing with file format X can be made to do bad things if it opens a specially crafted file in that format'. Some of the time that is 'run arbitrary code'. A bunch of these bugs have been found with the new fuzzer afl; see its 'bug-o-rama' trophy case.
At one level, it's not too surprising that these bugs exist in many libraries for handling various file formats. A great many of them are old libraries, and they have generally assumed that they're not being run in any security-sensitive context; instead, they usually assumed that you were using them on your own files, and if you want to run arbitrary code as yourself, well, you already can. This can lead to these bugs not seeming too alarming.
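The bug class involved can be made concrete with a small sketch (my own illustrative code, not from any real library): a made-up chunk format with a length header, where the historical mistake is copying as many bytes as the file's length field claims without checking it against what's actually there. A hardened parser validates the declared length first:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical chunk format: a 4-byte little-endian payload length,
 * followed by the payload itself. The classic vulnerable pattern is
 * to trust 'declared' and memcpy() that many bytes immediately,
 * which a crafted file turns into a buffer over-read or overflow.
 * Returns the payload length on success, -1 on a malformed chunk. */
static int parse_chunk(const uint8_t *buf, size_t buflen,
                       uint8_t *out, size_t outlen)
{
    if (buflen < 4)
        return -1;
    uint32_t declared = (uint32_t)buf[0] | ((uint32_t)buf[1] << 8) |
                        ((uint32_t)buf[2] << 16) | ((uint32_t)buf[3] << 24);
    /* These are the checks a trusting old library would skip: the
     * declared length must fit in both the input and the output. */
    if (declared > buflen - 4 || declared > outlen)
        return -1;
    memcpy(out, buf + 4, declared);
    return (int)declared;
}
```

A fuzzer like afl finds the unhardened version of this quickly, because a random mutation of the length bytes is enough to trigger it.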
There are two reasons to be worried about this today. First, in practice you get a lot of files from other people over the Internet; your browser downloads them (and often tries to display them), your mail client gets them in mail (and often tries to display them), and so on. However this is mostly a desktop risk and is relatively well understood (and many browsers and mail clients are using hardened libraries, although people keep finding new attack points).
Unfortunately there is another risk on Unix systems, and that is smart services that attempt to do content type detection and then content conversion for you. The dangerous poster child for this is the CUPS printer system, but there are probably others out there. In normal default setups, CUPS will try very hard to take random files that users hand it and turn them into something printable. This process involves both questionable content sniffing and, obviously, reading and interpreting all sorts of file formats. CUPS almost certainly uses standard libraries and programs for all of this, which means that exploitable vulnerabilities in these libraries can be used to break into the CUPS user on any system where CUPS is doing these conversions (and CUPS likes doing them on the print server).
(Another possible attack vector is email anti-spam and anti-virus systems. These almost certainly open .zip files using some library, and may try to do things like peer inside PDFs and various '* Office' file formats to look for bad things.)
In general we've had a whole parade of troubles with any system that reads attacker-supplied input. We really should be viewing such things with deep suspicion and limiting their deployment, even if it's too late in the case of CUPS.
2015-06-01
The problem with 'what is your data worth?'
Every so often, people who are suggesting that you spend money on something will use the formulation of 'what is your data worth?' or 'your data should be worth ...' (often 'at least this much' follows in some variation). Let's ignore the FUD issues involved and talk only about another problem with this: it puts the cart before the horse by assuming that the data comes first and then the money arrives afterwards. Given data, you are called on to spend as much as required in order to deal with it however people think you're supposed to.
At least around here in the university, this is almost always exactly backwards. In reality the money comes first and we get however much data can fit into it. If not enough data can fit, people will compromise on attributes of the data such as the redundancy level, expensive storage systems with features that are not absolutely essential, and even performance. In extreme cases, people take a deep breath and have less data. What they basically never do is find more money so they can have better storage.
(Sometimes this works in reverse when the costs shift in our favour. Then we wind up with lots of storage and can shift some of the money to better, less compromised features. This is how we went from RAID 5 to RAID 1 storage.)
One part of this is almost certainly that we basically have no ROI. As part of this, the storage we tend to be buying is vague and fuzzy storage without firm metrics for things like performance and durability attached to it. Sure, more performance would be nice, but broadly there's nothing that you can point to and say 'our vital website/database/etc is not running well enough, this must be better'.
(Nor can we establish such metrics out of the air in any meaningful way. Real SLAs must come from business needs because that is the only way that money will be spent in order to satisfy them.)
I suspect that this situation is not entirely unique to us and universities. Businesses undertake any number of 'would be nice to have' things, and also they ultimately have constraints on how much money they can spend on even important things.
PS: there are limits on this 'any performance is acceptable', of course, but they tend to be comparatively way out in left field. Fundamentally there is no magic pot of money that we can get if we just make big enough puppydog eyes, so getting significantly more money basically needs to be a situation where there is a clear and pressing problem that is obvious to everyone, whether that is space or performance or redundancy or whatever.