Wandering Thoughts archives

2018-07-30

My own configuration files don't have to be dotfiles in $HOME

Back when I started with Unix (a long time ago), programs had a simple approach to where to look for or put little files that they needed; they went into your $HOME as dotfiles, or if the program was going to have a bunch of them it might create a dot-directory for itself. This started with shells (eg $HOME/.profile) and spread steadily from there, especially for early open source programs. When I started writing shell scripts, setup scripts for my X environment, and other bits and pieces that needed configuration files or state files, the natural, automatic thing to do was to imitate this and put my own dotfiles and dot-directories in my $HOME. The entirely unsurprising outcome of this is that my home directories have a lot of dotfiles (some of them very old, which can cause problems). How many is a lot? Well, in my oldest actively used $HOME, I have 380 of them.

(Because dotfiles are normally invisible, it's really easy for them to build up and build up to absurd levels. Not that my $HOME is neat in general, but I have many fewer non-dotfiles cluttering it up.)

Recently it slowly dawned on me that my automatic reflex to put things in $HOME as dotfiles is both not necessary and not really a good idea. It's not necessary because I can make my own code look wherever I want it to, and it's not a good idea because $HOME's dotfiles are a jumbled mess where it's very hard to keep track of things or even to see them. Instead I'm better off if I put my own files in non-dotfile directory hierarchies somewhere else, with sensible names and sensible separation into different subdirectories and all of that.

(I'm not quite sure when and why this started to crystallize for me, but it might have been when I was revising my X resources and X setup stuff on my laptop and realized that there was no particular reason to put them in $HOME/.X&lt;something&gt; the way I had on my regular machines.)

I'm probably not going to rip apart my current $HOME and its collection of dotfiles. Although the idea of a scorched earth campaign is vaguely attractive, it'd be a lot of hassle for no visible change. Instead, I've decided that any time I need to make any substantial change to things that are currently dotfiles, I'll take the opportunity to move them out of $HOME.

(The first thing I did this with was my X resources, which had to change on my home machine due to a new and rather different monitor. Since I was basically gutting them to start with, I decided it made no sense to do it in place in $HOME.)

PS: Modern Unix (mostly Linux) has the XDG Base Directory Specification, which tries to move a lot of things under $HOME/.config, $HOME/.local/share, and $HOME/.cache. In theory I could move my own things under there too. In practice I'm not particularly interested in hiding them away that way; I'd rather put them somewhere more obvious, such as $HOME/share/X11/resources.

MovingOutOfHOME written at 21:36:41

2018-07-16

Why people are probably going to keep using today's Unixes

A while back I wrote about how the value locked up in the Unix API makes it durable. The short version is that there's a huge amount of effort and thus value invested in both the kernels (that provide one level of the Unix API) and in all of the programs and tools and systems that run on top of them, using the Unix APIs. If you start to depart from this API you start to lose access to all of those things.

The flipside of this is why I think people are probably going to keep using current Unixes in the future instead of creating new Unix-like OSes or Unix OSes. To a large extent, the potential value in departing from current Unixes lies in doing things differently at some API level, and once you depart from the API you're fighting the durable power of the Unix API. If you don't depart from the Unix API, it's hard to see much of a point; 'we wrote a different kernel but we still support all of the Unix API' (and variants) don't appear to have all that high a value. You're spending a lot of effort to wind up in essentially the same place.

(There was a day when you could argue that current Unix kernels and systems were fatally flawed and you could make important improvements. Given how well they work today and how much effort they represent, that argument is no longer very convincing. Perhaps we could do better, but can we do lots better, enough to justify the cost?)

In one way this is depressing; it means that the era of many Unixes and many Unix-like OSes flourishing is over. Not only is the cost of departing from Unix too high, but so is the cost of reimplementing it and possibly even keeping up with the leading implementations. The Unixes we have today are likely to be the only Unixes we ever have, and probably not all of them are going to survive over the long term (and that's apart from the commercial ones that are on life support today, like Solaris).

(This isn't really a new observation; Rob Pike basically made it a long time ago in the context of academic systems software research (see the mention in this entry).)

But this doesn't mean that innovation in Unix and the Unix API is dead; it just means that it has to happen in a different way. You can't drive innovation by creating a new Unix or Unix-like OS, but you can drive innovation by putting something new into a Unix that's popular enough, so it becomes broadly available and people start taking advantage of it (the obvious candidate here is Linux). It's possible that OpenBSD's pledge() will turn out to be such an innovation (whether other Unixes implement it as a system call or as a library function that uses native mechanisms).

(Note that not all attempts to extend or change the practical Unix API turn out to be good ideas over the long term.)

It also doesn't always mean that what we wind up with is really 'Unix' in a conventional sense. One thing that's already happening is that an existing Unix is used as the heart of something that has custom layers wrapped around it. Android, iOS, and macOS are all versions of this; they have a core layer that uses an existing Unix kernel and so on but then a bunch of things specific to themselves on top. These systems have harvested what they find to be the useful value of their Unix and then ignored the rest of it. Of course all of them represent a great deal of effort in their custom components, and they wouldn't have happened if the people involved couldn't extract a lot of value from that additional work.

(This extends my other tweet from the time of the first entry.)

DurableCurrentUnixes written at 23:42:28
