Wandering Thoughts archives

2016-07-30

The perils of having an ancient $HOME (a yak shaving story)

I build Firefox from source for obscure reasons, and for equally obscure reasons I do this from the Firefox development repo instead of, say, the source for the current release. Mozilla has recently been moving towards making Rust mandatory for building Firefox (see especially the comments). If you build the Mercurial tip from source this is currently optional, but I can see the writing on the wall so I decided to get a copy of Rust and turn this option on. If nothing else, I could file bugs for any problems I ran into.

Getting a copy of Rust went reasonably well (although Rust compiles itself painfully slowly) and I could now build the Rust component without issues. Well, mostly without issues. During part of building Firefox, rustc (the Rust compiler) would print out:

[...]
librul.a
note: link against the following native artifacts when linking against this static library
note: the order and any duplication can be significant on some platforms, and so may need to be preserved
note: library: dl
note: library: pthread
note: library: gcc_s
note: library: c
note: library: m
note: library: rt
note: library: util
libxul_s.a.desc
libxul.so
Exporting extension to source/test/addons/author-email.xpi.

[...]

Everything from the first 'note:' onwards was in bold, including later messages (such as 'Exporting ...') that were produced by other portions of the build process. Clearly rustc was somehow forgetting to tell the terminal to turn off boldface, in much the same way that people writing HTML sometimes forget the '</b>' in their markup and turn the rest of the page bold.

This only happened in xterm; in gnome-terminal and other things, rustc turned off bold without problems. This didn't surprise me, because almost no one still uses and thus tests with xterm these days. Clearly there was some obscure detail about escape sequence handling that xterm was doing differently from gnome-terminal and other modern terminal emulators, and this was tripping up rustc and causing it to fail to close off boldface.

(After capturing output with script, I determined that rustc was both turning on bold and trying to turn it off again with the same sequence, 'ESC [ 1 m'. Oh, I said to myself, this has clearly turned into a toggle in modern implementations, but xterm has stuck grimly to the old ways and is making you do it properly. This is the unwonted, hubristic arrogance of the old Unix hand speaking.)
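
For reference, the escape sequences in question are ANSI SGR codes; a quick printf demo (nothing to do with rustc's actual code) shows both the turn-on sequence and the proper reset:

```shell
# SGR escape sequences: 'ESC [ 1 m' turns bold on and 'ESC [ 0 m'
# (essentially what tput sgr0 emits) resets all attributes. Sending
# 'ESC [ 1 m' a second time just turns bold on again; it is not a
# toggle in any terminal.
printf '\033[1mbold text\033[0m back to normal\n'
```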

Since life is short and my patience is limited, I dealt with this simply; I wrote a cover script for rustc that manually turned off bold (and in fact all special terminal attributes) afterwards. It was roughly:

#!/bin/sh
rustc.real "$@"
st=$?; tput sgr0; exit $st

This worked fine. I could still build Firefox with the Rust bits enabled, and my xterm no longer became a sea of boldface afterwards.

Today I ran across an announcement of git-series, an interesting looking git tool to track the evolution of a series of patches over time as they get rebased and so on. Since this is roughly how I use git to carry my own patches on top of upstream repos, I decided I was interested enough to take a look. Git-series is written in Rust, which is fine because I already had that set up, but it also requires Rust's package manager Cargo. Cargo doesn't come with Rust's source distribution; you have to get it and build it separately. Cargo is of course built in Rust. So I cloned the Cargo repo and started building.

It didn't go well. In fact it blew up spectacularly and mysteriously right at the start. Alex Crichton of the Rust project took a look at my strace output and reported helpfully that something seemed to be adding a stray ESC byte to rustc's output stream when the build process ran it and tried to parse the output.

Oh. Well. That would be the tput from my cover script, wouldn't it. I was running tput even when standard output wasn't a terminal and this was mucking things up for anything that tried to consume rustc output. That fixed my issue with Cargo, but now I wanted to get this whole 'doesn't turn off boldface right' issue in Rust fixed, so I started digging to at least characterize things so I could file a bug report.
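
In hindsight the cover script needed a guard. A version along these lines (sketched here as a shell function with the wrapped command passed in, so it can be demonstrated without an actual rustc.real binary; the real script's first line would still be 'rustc.real "$@"') only resets the terminal when standard output actually is one:

```shell
# Corrected cover script sketch. The key fix is [ -t 1 ]: only run
# tput when standard output is a terminal, so that anything parsing
# rustc's output never sees the stray escape bytes.
rustc_wrapper() {
    "$@"
    st=$?
    if [ -t 1 ]; then
        tput sgr0    # reset all attributes, as before
    fi
    return $st
}

rustc_wrapper true
```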

In the process of poking and prodding at this, a little bit of my strace output started nagging at me; there had been references in it to /u/cks/.terminfo. I started to wonder if I had something outdated in my personal environment that was confusing Rust's terminfo support library into generating the wrong output. Since I couldn't remember anything important I had set up there, I renamed it to .terminfo-hold and re-tested.

Magically, everything now worked fine. Rustc was happy with life.

What did I have in .terminfo? Well, here:

; ls -l /u/cks/.terminfo-hold/x/xterm
-rw-r--r--. 2 cks cks 1353 Feb 10  1998 .terminfo-hold/x/xterm

Here in 2016, I have no idea why I needed a personal xterm terminfo file in 1998, or what's in it. But it's from 1998, so I'm pretty confident it's out of date. If it was doing something that I'm going to miss, I should probably recreate it from a modern xterm terminfo entry.

(This also explains why gnome-terminal had no problems; it normally has a $TERM value of xterm-256color, and I didn't have a custom terminfo file for that.)

My $HOME is really old by now, and as a result it has all sorts of ancient things lingering on in its dotfile depths. Some of them are from stuff I configured a long time ago and have forgotten since, and some of them are just the random traces from ancient programs that I haven't used for years. Clearly this has its hazards.

(Despite this experience I have no more desire to start over from scratch than I did before. My environment works and by now I'm very accustomed to it.)

AncientHOMEPerils written at 00:38:27

2016-07-15

Sudo and changes in security expectations (and user behaviors)

Sudo has a well known, even famous default behavior: if you try to use sudo and you don't have sudo privileges, it sends an email alert off to the sysadmins (sometimes these are useful). In my view, this reflects a fundamental assumption in sudo's security model, namely that it's only going to be used by authorized people or by malicious parties. If you're not a sysadmin or an operator or so on, you know that you have no business running sudo, so you don't. Given the assumption that unauthorized people don't innocently run sudo, it made sense to send alert email about such attempts by default.

Once upon a time that security model was perfectly sensible, back in the days when Unix machines were big and uncommon and theoretically always run by experienced professionals. Oh, sure, maybe the odd sysadmin or operator would accidentally run sudo on the wrong machine, but you could expect that ordinary people would never touch it. Today, however, those days are over. Unix machines are small and pervasive, there are tons of people who have some involvement in sysadmin things on one, and sudo has been extremely successful. The natural result is that there are a lot of people out there who are following canned howto instructions without really thinking about them, and these instructions say to use sudo to get things done.

(Sometimes the use of sudo is embedded into an installation script or the like. The Let's Encrypt standard certbot-auto script works this way, for instance; it blithely uses sudo to do all sorts of things to your system without particularly warning you, asking for permission, or the like.)

In other words, the security model that there's basically no innocent unauthorized use of sudo is now incorrect, at least on multi-user Unix systems. There are plenty of such innocent attempts, and in some environments (such as ours) they're the dominant ones. Should this cause sudo's defaults to change? That I don't know, but the pragmatic answer is that in the grand Unix tradition, leaving the defaults unchanged is easier.

(There remain Unix environments where there shouldn't be any such unauthorized uses, of course. Arguably multi-user Unix environments are less common now than such systems, where you very much do want to get emailed if, eg, the web server UID suddenly tries to run sudo.)

SudoAndSecurityAssumptions written at 01:03:59

2016-07-02

cal's unfortunate problem with argument handling

Every so often I want to see a calendar just to know things like what day of the week a future date will be (or vice versa). As an old Unix person, my tool for this is cal. Cal is generally a useful program, but it has one unfortunate usage quirk that arguably shows a general issue with Unix style argument handling.

By default, cal just shows you the current month. Suppose that you are using cal at the end of June, and you decide that you want to see July's calendar. So you absently do the obvious thing and run 'cal 7' (because cal loves its months in decimal form). This does not do what you want; instead of seeing the month calendar for July of this year, you see the nominal full year calendar for AD 7. To see July, you need to do something like 'cal 7 2016' or 'cal -m 7'.

On the one hand, this is regrettably user hostile. 'cal N' for N in the range of 1 to 12 is far more likely to be someone wanting to see the given month for the current year than it is to be someone who wants to see the year calendar for AD N. On the other hand, it's hard to get out of this without resorting to ugly heuristics. It's probably equally common to want a full year calendar from cal as it is to want a different month's calendar, and both of these operations would like to lay claim to the single argument 'cal N' invocation because that's the most convenient way to do it.

If we were creating cal from scratch, one reasonably decent option would be to declare that all uses of cal without switches to explicitly tell it what you wanted were subject to heuristics. Then cal would have a license to make 'cal 7' mean July of this year instead of AD 7, and maybe make 'cal 78' mean 'cal 1978' (cf the note in the V7 cal manpage). If you really wanted AD 7's year calendar, you'd give cal a switch to disambiguate the situation; in the meantime, you'd have no grounds for complaint. But however nice it might be, this would probably strike people as non-Unixy. Unix commands traditionally have predictable argument handling, even if it's not friendly, because that's what Unix considers more important (and also easier, if we're being honest).
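
The heuristic itself is trivial to express. Here's a hypothetical wrapper (I'm calling it calm, and it's purely illustrative) that implements it: a single argument from 1 to 12 means that month of the current year, and everything else passes through to cal unchanged:

```shell
# Hypothetical 'calm' wrapper. A lone numeric argument in 1..12 is
# rewritten to 'month current-year'; anything else goes straight to
# cal. The 2>/dev/null silences test's complaints about non-numeric
# arguments, which simply fail the check.
calm() {
    if [ $# -eq 1 ] && [ "$1" -ge 1 ] 2>/dev/null && [ "$1" -le 12 ] 2>/dev/null; then
        set -- "$1" "$(date +%Y)"
    fi
    cal "$@"
}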

In a related issue, I have now actually read the manpages for modern versions of cal (FreeBSD and Linux use different implementations) and boy has it grown a lot of options by now (options that will probably make my life easier if I can remember them and remember to use them). Reassuringly, the OmniOS version of cal still takes no switches; it's retained the V7 'cal [[month] year]' usage over all of these years.

CalUnfortunateArguments written at 23:39:50

2016-06-04

The (Unix) shell is not just for running programs

In the Reddit comments on yesterday's entry, I ran across the following comment:

No. The shell literally has the sole purpose of running external programs. Anything more is extra.

The V1 shell read a line, split on whitespace, and executed the command from /bin. You could change the current directory from in the shell, that was it.

On any version of Unix as far back as at least V7, this is false. The Unix shell may have started out simply being a way to run programs, but it long ago stopped being just that. Since the V7 shell is a ground-up rewrite, one cannot even argue that the shell simply drifted into these additional features for convenience. The V7 shell was consciously designed from scratch, and as part of that design it included major programming features, including control flow constructs drawn directly from the general Algol line of computer language design. Inclusion of these programming features is not an accident and not a drift over time; it is a core part of the shell's design and thus its intended purpose. The V7 shell is there both to run programs and to write programs (shell scripts), and this is completely intended.

(In terms of control flow, I'm thinking here of if, while, and for, and there's also case.)
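
All of these are easy to demonstrate; the following runs unchanged in any Bourne-descended shell today (with the caveat that the $((...)) arithmetic is a later POSIX addition, not V7 syntax):

```shell
# The Algol-style control structures that came in with the V7 Bourne
# shell: for, if, case, and while.
for word in alpha beta; do
    if [ "$word" = "alpha" ]; then
        echo "if: $word"
    fi
    case $word in
        beta) echo "case: $word" ;;
    esac
done
n=0
while [ "$n" -lt 3 ]; do
    n=$((n + 1))    # $((...)) is a later POSIX-ism, not V7 syntax
done
echo "while counted to $n"
```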

In short, the shell as in part a programming language is part of Unix's nature from at least the first really popular Unix version (V7 became the base of many further lines of Unix). To the extent that the Unix design ethos or philosophy exists as a coherent thing, it demonstrably includes a strongly programmable shell.

You can make an argument that the V6 shell (the 'Mashey shell') shows this too, but it was apparently a derivative of and deliberately backwards compatible with the original 'just run things' Thompson shell. The V7 Bourne shell is a clear, from scratch break with the original Thompson shell, and it was demonstrably accepted by Research Unix as being, well, proper Unix.

(If you want even more proof that Research Unix's view of the shell includes programming, the shell was reimplemented once again for Version 10 and Plan 9 in the form of Tom Duff's rc shell and, you guessed it, that included programmability too, this time with more C-like syntax instead of the Algol-like syntax of the Bourne shell.)

(You can argue that this conjoining of 'just run programs for people' and 'write shell scripts' in a single program is a mistake and these roles should be split apart into two programs, but that's a different argument. I happen to think that it's also wrong, and on more than one level.)

ShellNotJustProgramRunner written at 01:28:04

2016-06-03

One thing that makes the Bourne shell an odd language

In many ways, the Bourne shell is a relatively conventional programming language. It has a few syntactic abnormalities, a few flourishes created by the fact that it is an engine for running programs (although other languages have featured equivalents of $(...) in the form of various levels of 'eval' functionality), and a different treatment of unquoted words, but the overall control structure is an extremely familiar Algol-style one (which is not surprising, since Steve Bourne really liked Algol).

But the Bourne shell does have one thing that clearly makes it an odd language, namely that it has outsourced what are normally core language functions to external programs. Or rather it started out in its original version by outsourcing those functions; versions of the Bourne shell since then have pulled them back in in various ways. Here I am thinking of both evaluating conditionals via test aka [ and arithmetic via expr (which also does some other things too).

(Bourne shells have had test as a builtin for some time (sometimes with some annoyances), and built-in arithmetic is often present these days as $((...)).)

There's no intrinsic reason why test has to be a separate program. Neither test nor expr seems to have existed in Research Unix V6; they both appeared in V7 along with the Bourne shell itself. They aren't written in BourneGol, so they may not have been written by Steve Bourne himself, but at least test was clearly written as a companion program (the V7 Bourne shell manpage explicitly mentions it, among other things).

I don't know why the original Bourne shell made this decision. It's possible that it was simply forced by the limitations of the PDP-11 environment of V7. Maybe a version of the Bourne shell with test and/or expr built into the main shell code would have either been too big or just considered over-bloated for something that would mostly be used interactively (and thus not be using test et al very often). Or possibly they were just easier to write as separate programs (the V7 expr is just a single yacc file).

Note that there are structural reasons in the Bourne shell to make if et al conditions be the result of commands, instead of restricting them to (only) be actual conditions. But the original Bourne shell could have done this with test or the equivalent as a built-in command, and it certainly has other built in commands. Perhaps test needing to be an actual command was one of the things that pushed it towards not being built in. You can certainly see a spirit of minimalism at work here if you want to (although I have no idea if that's the reason).
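
The structural point is easy to see in practice: if branches on the exit status of whatever command you give it, and test (aka [) is merely one command that happens to produce a useful status:

```shell
# 'if' runs a command list and branches on its exit status; test is
# just one command among many that can supply that status.
if echo alpha | grep -q alpha; then
    echo "grep said yes"
fi
if test 3 -lt 5; then
    echo "test said yes"
fi
```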

(This expands on a tweet of mine.)

Sidebar: It's not clear when test picked up its [ alias

Before I started writing this entry, I expected that test was also known as [ right from the beginning in V7. Now I'm not so sure. On the one hand, the actual V7 shell scripts I can find eg here consistently use test instead of [ and the V7 compile scripts don't seem to create a [ hardlink. On the other hand, the V7 test source already has special magic handling if it's invoked as [.

(There are V7 disk images out there that you can boot up on a PDP-11 emulator, so in theory I could fire one up and see if it has a /bin/[. In practice I'm not that energetic.)

BourneShellOutsourcedBits written at 01:25:41

2016-05-30

'Command line text editor' is not the same as 'terminal-based text editor'

A while back, I saw a mention about what was called a new command line text editor. My ears perked up, and then I was disappointed:

Today's irritation: people who say 'command line text editor' when they mean 'terminal/cursor-based text editor'.

I understand why the confusion comes up, I really do; an in-terminal full screen editor like vi generally has to be started from the command line instead of eg from a GUI menu or icon. But for people like me, the two are not the same and another full screen, text based editor along the lines of vi (or nano or GNU Emacs without X) is not anywhere near as interesting as a new real command line text editor is (or would be).

So, what do people like me mean by 'command line text editor'? Well, generally some form of editor that you use from the command line but that doesn't take over your terminal screen and have you cursor around it and all that. The archetype of interactive command line text editors is ed, but there are other editors which have such a mode (sam has one, for example, although it's not used very much in practice).

Now, a lot of the nominal advantages of ed and similar things are no longer applicable today. Once upon a time they were good for things like low bandwidth connections where you wanted to make quick edits, or slow and heavily loaded machines where you didn't want to wait for even vi to start up and operate. These days this is not something that most people worry about, and full screen text editors undeniably make life easier on you. Paradoxically, this is a good part of why I would be interested in a new real command line editor. Anyone who creates one in this day and age probably has something they think it does really well to make up for not being a full screen editor, and I want to take a look at it to see this.

I also think that there are plausible advantages of a nice command line text editor. The two that I can think of are truly command line based editing (where you have commands or can easily build shell scripts to do canned editing operations, and then you invoke the command to do the edit) and quick text editing in a way that doesn't lose the context of what's already on your screen. I imagine the latter as something akin to current shell 'readline' command line editing, which basically uses only a line or two on the screen. I don't know if either of these could be made to work well, but I'd love to see someone try. It would certainly be different from what we usually get.
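
As a small taste of the first idea, this is the kind of canned, non-interactive editing that ed scripts have always allowed; I'm using sed here (ed's stream-oriented descendant) simply because it's universally available, and the file names are just for the demo:

```shell
# A canned editing operation in the ed tradition: apply an s///
# command to a file without ever taking over the screen.
printf 'alpha\nbeta\ngamma\n' > /tmp/cle-demo.txt
sed 's/beta/BETA/' /tmp/cle-demo.txt > /tmp/cle-demo.out
cat /tmp/cle-demo.out
```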

(I don't consider terminal emulator alternate screens to be a solution to the 'loss of context' issue, because you still can't see the context at the same time as your editing. You just get it back after you quit your editor again.)

CommandLineTextEditors written at 00:16:19

2016-05-13

You can call bind() on outgoing sockets, but you don't want to

It started with some tweets and some data by Julia Evans. In the data she mentions:

and here are a few hundred lines of strace output. What's going on? it is running bind() all the time, but it's making outgoing HTTP connections. that makes no sense!!

It turns out that this is valid behavior according to the Unix API, but you probably don't want to do this for a number of reasons.

First off, let's note more specifically what Erlang is doing here. It is not just calling bind(), it is calling bind() with no specific port and address picked:

bind(18, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("0.0.0.0")}, 16 <unfinished ...>

Those arguments are INADDR_ANY and the magic port (port 0) that tells bind() that it can pick any ephemeral port. What this bind() does is assign the socket a local ephemeral port (from whatever the ephemeral port range is). Since we specified the local address as INADDR_ANY, the socket remains unbound to any specific local IP; the local IP will only be chosen when we connect() the socket to some address.

(This is visible in anything that exposes the Unix socket API and has a getsockname() operation. I like using Python, since I can do all of this from an interactive REPL.)
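
Spelled out with Python's standard socket module, the whole experiment looks like this:

```python
import socket

# Reproduce what Erlang was doing: bind() to INADDR_ANY with port 0.
# The kernel assigns an ephemeral port immediately, but the local IP
# stays 0.0.0.0 (unbound) until connect() picks one.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 0))
addr, port = s.getsockname()
print("local address:", addr)   # still 0.0.0.0
print("local port:", port)      # now a concrete ephemeral port
s.close()
```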

There really isn't very much point in doing this for sockets that you're going to use for outgoing connections; about all it achieves is letting you know your local port before you make the connection, instead of only afterwards. In exchange for this minor advantage you make one extra system call and also increase your chances of running out of ephemeral ports under load, because you're putting an extra constraint on the kernel's port allocation.

In general, IP requires each connected socket to have a unique tuple of (local IP, local port, remote IP, remote port). When you leave an outgoing socket unbound until you connect(), the kernel has the maximum freedom to find a local port that makes the tuple unique, because all it needs is one of the four things to be unique, not necessarily the local port number. If you're connecting to different ports on a remote server, the same port on different remote servers, or whatever, it may be able to reuse a local port number that's already been used for something else. By contrast, if you bind() before you connect and use INADDR_ANY, the kernel pretty much has the minimum freedom; it must ensure that the local port alone is completely unique, so that no matter what you then do with listen() and connect() later you'll never collide with an existing tuple.

(See this article for a discussion of how the Linux kernel does this, and in fact this entire issue.)

Some Unixes may frontload all of the checks necessary into bind(), but at least some of them defer some checks to connect(), even for pre-bound sockets. This is probably a sensible decision, especially since a normal connect() can fail because of ephemeral port exhaustion.

I'm sure there's some advantage to this 'bind before connect' approach, but I'm honestly hard pressed to think of any.

(There are situations where you want to bind to a specific IP address, but that's not what's happening here.)

(I sort of always knew that it was valid to bind() before calling connect() but I didn't know the details, so writing this has been useful. For instance, before I started writing I thought maybe the bind() picked an IP address as well as the ephemeral port, which turns out not to be the case; it leaves the IP address unbound. Which is really not that surprising once I think about it, since that's what you often do with servers; they listen to a specific port on INADDR_ANY. All of which goes to show that sometimes it's easier for me to experiment and find out things rather than reason through them from first principles.)

BindingOutgoingSockets written at 02:08:18

2016-05-11

The difference between 'Delete' and 'Destroy' in X window managers

If you use a certain sort of window manager (generally an old school one), you may have discovered that there are two operations the window manager supports to tell X applications to close themselves. Names for these operations vary, but I will go with 'Delete' and 'Destroy' for them because this is fvwm's terminology. Perhaps you've wondered why there are two ways to do this, or what the difference is. As you might guess, the answers are closely tied to each other.

One way to describe the difference between the two operations is who takes action. When your window manager performs a 'Delete' operation, what it is really doing is sending a message to the application behind the selected window saying 'please close yourself' (specifically it is sending a WM_DELETE_WINDOW message). This message and the whole protocol around it is not built in to the X server and the wire protocol; instead it is an additional system added on top.

(Potentially confusingly, this 'please close your window' stuff is also called a protocol. People who want to see the details can see the XSetWMProtocols() manpage and read up on client messages. See eg this example, and also the ICCCM.)

On the other hand, the 'Destroy' operation talks directly to the X server to say either 'destroy this window' or more commonly 'terminate the connection of the client responsible for this window'. Unlike 'Delete', this requires no cooperation from the client and will work even if the client has hung or is deciding to ignore your request to close a window (perhaps the client believes it's just that special).

Generally there are big side effects of a 'Destroy'. Since most programs only create a single connection to the X server, having the server close this connection will close all of the program's windows and generally cause it to immediately exit. Even if only a single window is destroyed, this usually causes the program to get X errors and most programs exit the moment they get an X error, which of course closes all of the program's windows.

How programs react to being asked politely to close one of their windows varies, but usually if a program has multiple windows it won't exit entirely, just close the specific window you asked it to. Partly this is because the 'close window' button on the window frame is actually doing that 'Delete' operation, and very few people are happy if a program exits entirely just because you clicked the little X for one of its windows.

Because 'Delete' is a protocol that has to be handled by the client and some clients are just that old, there are or at least were a few core X clients that didn't support it. And if you write an X client from scratch using low-level libraries, you have to take explicit steps to support it and you may not have bothered.

(To be friendly, fvwm supports a 'Close' operation that internally checks to see whether a window supports 'Delete' and uses that if possible; for primitive clients, it falls back to 'Destroy'. I suspect that many window managers support this or just automatically do it, but I haven't looked into any to be sure.)

Sidebar: 'Delete' and unresponsive clients

It would be nice if your window manager could detect that it's trying to send a 'Delete' request to a client that theoretically supports it but isn't responding to it, and perhaps escalate to a Destroy operation. Unfortunately I don't know if the window manager gets enough information from the X server to establish that the client is unresponsive, as opposed to just not closing the window, and there are legitimate situations where the client may not close the window itself right away, or ever.

(Consider you trying to 'Delete' a window with unsaved work. The program involved probably wants to pop up a 'do you want to save this?' dialog, and if you then ignore the dialog everything will just sit there. And if you click on 'oops, cancel that' the whole situation will look much like the program is ignoring your 'Delete' request.)

I believe that some window managers do attempt to detect unresponsive clients, but at most they pop up a dialog offering you the option to force-close the window and/or client. Others, such as fvwm, just leave it entirely to you.

XDeleteVersusDestroy written at 03:19:16

2016-05-02

The state of supporting many groups over NFS v3 in various Unixes

One of the long standing limits with NFSv3 is that the protocol only uses 16 groups; although you can be in lots of groups on both the client and the server, the protocol itself only allows the client to tell the server about 16 of them. This is a real problem for places (like us) who have users who want or need to be in lots of groups for access restriction reasons.

For a long time the only thing you could do was shrug and work around this by adding and removing users from groups as their needs changed. Fortunately this has been slowly changing, partly because people have long seen this as an issue. Because the NFS v3 protocol is fixed, everyone's workaround is fundamentally the same: rather than taking the list of groups from the NFS request itself, the NFS server looks up what groups the user is in on the server.

(In theory you could merge the local group list with the request's group list, but I don't think anyone does that; they just entirely overwrite the request.)

As far as I know, the current state of affairs for various Unixes that we care about runs like this:

I care about how widespread the support for this is because we've finally reached a point where our fileservers all support this and so we could start putting people in more than 16 groups, something that various parties are very much looking forward to. So I wanted to know whether officially adding support for this would still leave us with plenty of options for what OS to run on future fileservers, or whether this would instead be a situation more like ACLs over NFS. Clearly the answer is good news; basically anything we'd want to use as a fileserver OS supports this, even the unlikely candidate of Oracle Solaris.

(I haven't bothered checking out the state of support for this on the other *BSDs because we're not likely to use any of them for an NFS fileserver. Nor have I looked at the state of support for this on dedicated NFS fileserver appliances, because I don't think we'll ever have the kind of budget or need that would make any of them attractive. Sorry, NetApp, you were cool once upon a time.)

NFSManyGroupsState written at 00:46:00

2016-04-17

Why Unix needs a standard way to deal with the file durability problem

One of the reactions to my entry on Unix's file durability problem is the obvious pragmatic one. To wit, that this isn't really a big problem because you can just look up what you need to do in practice and do it (possibly with some debate over whether you still need to fsync() the containing directory to make new files truly durable or whether that's just superstition by now). I don't disagree with this pragmatic answer and it's certainly what you need to do today, but I think sticking to it misses why Unix as a whole should have some sort of agreed-on standard for this.
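
For concreteness, the 'what you need to do in practice' recipe usually amounts to something like the following sketch of the commonly cited write, fsync, rename, fsync-the-directory dance (whether that final step is strictly required everywhere is exactly the kind of thing a standard would settle):

```python
import os

# Sketch of the commonly recommended durable-write sequence: write a
# temporary file, fsync it, rename it into place, then fsync the
# containing directory so the rename itself is durable.
def durable_write(path, data):
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)            # flush the file's data to stable storage
    finally:
        os.close(fd)
    os.rename(tmp, path)        # atomically replace the target
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)           # make the new directory entry durable
    finally:
        os.close(dfd)

durable_write("/tmp/durable-demo.txt", b"hello world")
```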

An agreed-on standard would help both programmers and kernel developers. On the side of user level programmers, it tells us not just what we need to do in order to achieve file durability today but also what we need to do in order to future-proof our code. A standard amounts to a promise that no sane future Unix setup will add an additional requirement for file durability. If our code is working right today on Solaris UFS or Linux ext2, it will keep working right tomorrow on Linux ext4 or Solaris ZFS. Without a standard, we can't be sure about this and in fact some programs have been burned by it in the past, when new filesystems added extra requirements like fsync()'ing directories under some circumstances.

(This doesn't mean that all future Unix setups will abide by this, of course. It just means that we can say 'your system is clearly broken, this is your problem and not a fault in our code, fix your system setup'. After all, even today people can completely disable file durability through configuration choices.)

On the side of kernel people and filesystem developers, it tells both parties how far a sensible filesystem can go; it becomes a 'this far and no further' marker for filesystem write optimization. Filesystem developers can reject proposed features that break the standard as 'it breaks the standard', and if they don't the overall kernel developers can. Filesystem development can entirely avoid both a race to the bottom and strained attempts to read the POSIX specifications so as to allow ever faster but more dangerous behavior (and also the ensuing arguments over just how one group of FS developers read POSIX).

The whole situation is exacerbated because POSIX and other standards have so relatively little to say on this. The people who create hyper-aggressive C optimizers are at least relying on a detailed and legalistically written C standard (even if almost no programs are fully conformant to it in practice), and so they can point users to chapter and verse on why their code is not standards conforming and so can be broken by the compiler. The filesystem people are not so much on shakier ground as on fuzzy ground, which results in much more confusion, disagreement, and arguing. It also makes it very hard for user level programmers to predict what future filesystems might require here, since they have so little to go from.

WhyFileSyncStandardNeeded written at 01:45:33

