Wandering Thoughts

2019-04-19

V7 Unix programs are often not written the way you would expect

Yesterday I wrote that V7 ed read its terminal input in cooked mode a line at a time, which was an efficient, low-CPU design that was important on V7's small and low-power hardware. Then in comments, frankg pointed out that I was wrong about part of that, namely about how ed read its input. Here, straight from the V7 ed source code, is how ed read input from the terminal:

getchr()
{
	[...]
	if (read(0, &c, 1) <= 0)
		return(lastc = EOF);
	lastc = c&0177;
	return(lastc);
}

gettty()
{
	[...]
	while ((c = getchr()) != '\n') {
	[...]
}

(gettty() reads characters from getchr() into a linebuf array until end of line, EOF, or it runs out of space.)

In one way, this is surprising; it's very definitely not how we'd write this today, and if you did, many Unix programmers would immediately tell you that you're being inefficient by making so many calls to read() and you should instead use a buffer, for example through stdio's fgets(). Very few modern Unix programs do character at a time reads from the kernel, partly because on modern machines it's not very efficient.

(It may have been comparatively less inefficient on V7 on the PDP-11, if for example the relative cost of making a system call was lower than it is today. My impression is that this may have been the case.)
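For contrast, here is a minimal sketch of the buffered approach a modern program would use, via stdio's fgets(); the buffer size and the lack of any real line processing are my own illustrative choices, not anything from ed:

#include <stdio.h>
#include <string.h>

int
main(void)
{
    char linebuf[512];

    /* fgets() fills stdio's buffer with one read() and hands lines
       out of it; at worst that is one read() per line instead of
       one read() per character. */
    while (fgets(linebuf, sizeof(linebuf), stdin) != NULL) {
        linebuf[strcspn(linebuf, "\n")] = '\0';
        /* ... do something with the line ... */
    }
    return 0;
}

(From a terminal in cooked mode this still costs roughly one read() per line, since the kernel returns at most one cooked line at a time, but that is a far cry from one read() per character.)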

V7 had stdio in more or less its modern form, complete with fgets(). V6 had a precursor version of stdio and buffered IO (see eg the manpage for getc()). However, many V7 and V6 programs didn't necessarily use them; instead they used more basic system calls. This is one of the things that often gives the code for early Unix programs (V7 and before) an unusual feel, along with the short variable names and the lack of comments.

The situation with ed is especially interesting, because in V5 Unix, ed appears to have still been written in assembly; see ed1.s, ed2.s, and ed3.s here in 's1' of the V5 sources. In V6, ed was rewritten in C to create ed.c (still in a part of the source tree called 's1'), but it still used the same read() based approach that I think it used in the assembly version.

(I haven't looked forward from V7 to see if later versions were revised to use some form of buffering for terminal input.)

Sidebar: An interesting undocumented ed feature

Reading this section of the source code for ed taught me that it has an interesting, undocumented, and entirely characteristic little behavior. Officially, ed commands that have you enter new text have that new text terminated by a '.' on a line by itself:

$ ed newfile
a
this is new text that we're adding.
.

This is how the V7 ed manual documents it and how everyone talks about it. But here is how the actual ed source code implements this on input, in that gettty() function:

if (linebuf[0]=='.' && linebuf[1]==0)
        return(EOF);
return(0);

In other words, it turns a single line with '.' into an EOF. The consequence of this is that if you type a real EOF at the start of a line, you get the same result, thus saving you one character (you use Control-D instead of '.' plus newline). This is very V7 Unix behavior, including the lack of documentation.

This is also a natural behavior in one sense. A proper program has to react to EOF here in some way, and it might as well do so by ending the input mode. It's also natural to go on to try reading from the terminal again for subsequent commands; if this was a real and persistent EOF, for example because the pty closed, you'll just get EOF again and eventually quit. V7 ed is slightly unusual here in that it deliberately converts '.' by itself to EOF, instead of signaling this in a different way, but in a way that's also the simplest approach; if you have to have some signal for each case and you're going to treat them the same, you might as well have the same signal for both cases.

Modern versions of ed appear to faithfully reimplement this convenient behavior, although they don't appear to document it. I haven't checked OpenBSD, but both FreeBSD ed and GNU ed work like this in a quick test. I haven't checked their source code to see if they implement it the same way.

EdV7CodedUnusually written at 23:49:59

2019-04-18

One reason ed(1) was a good editor back in the days of V7 Unix

It is common to describe ed(1) as being line oriented, as opposed to screen oriented editors like vi. This is completely accurate but it is perhaps not a complete enough description for today, because ed is line oriented in a way that is now uncommon. After all, you could say that your shell is line oriented too, and very few people use shells that work and feel the same way ed does.

The surface difference between most people's shells and ed is that most people's shells have some version of cursor based interactive editing. The deeper difference is that this requires the shell to run in character by character TTY input mode, also called raw mode. By contrast, ed runs in what Unix usually calls cooked mode, where it reads whole lines from the kernel and the kernel handles things like backspace. All of ed's commands are designed so that they work in this line focused way (including being terminated by the end of the line), and as a whole ed's interface makes this whole line input approach natural. In fact I think ed makes it so natural that it's hard to think of things as being any other way. Ed was designed for line at a time input, not just to not be screen oriented.

(This was carefully preserved in UofT ed's very clever zap command, which let you modify a line by writing out the modifications on a new line beneath the original.)
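As a rough illustration of what ed gets to skip by staying in cooked mode, here is a sketch of the setup work a raw mode program has to do, written against the modern termios interface (V7 predates termios and used different ioctls, and the particular flags here are just a common minimal choice, not anything from a real editor):

#include <termios.h>
#include <unistd.h>

/* Put standard input into a raw-ish mode: the kernel stops
   assembling lines and echoing characters, and read() returns as
   soon as a single character is available. A cooked mode program
   like ed never has to touch any of this. */
int
enter_raw_mode(struct termios *saved)
{
    struct termios t;

    if (tcgetattr(STDIN_FILENO, saved) == -1)
        return -1;
    t = *saved;
    t.c_lflag &= ~(ICANON | ECHO);  /* no line assembly, no echo */
    t.c_cc[VMIN] = 1;               /* wake up after one character */
    t.c_cc[VTIME] = 0;
    return tcsetattr(STDIN_FILENO, TCSANOW, &t);
}

After this the program has to handle every keystroke itself, including backspace and line editing, and must remember to restore the saved settings with tcsetattr() before it exits. Ed's cooked mode design avoids all of that.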

This input mode difference is not very important today, but in the days of V7 and serial terminals it made a real difference. In cooked mode, V7 ran very little code when you entered each character; almost everything was deferred until it could be processed in bulk by the kernel, and then handed to ed all in a single line which ed could also process all at once. A version of ed that tried to work in raw mode would have been much more resource intensive, even if it still operated on single lines at a time.

(If you want to imagine such a version of ed, think about how a typical readline-enabled Unix shell can move back and forth through your command history while only displaying a single line. Now augment that sort of interface with a way of issuing vi-like bulk editing commands.)

This is part of why I feel that ed(1) was once a good editor (cf). Ed is carefully adapted for the environment of early Unixes, which ran on small and slow machines with limited memory (which led to ed not holding the file it's editing in memory). Part of that adaptation is being an editor that worked with the system, not against it, and on V7 Unix that meant working in cooked mode instead of raw mode.

(Vi appeared on more powerful, more capable machines; I believe it was first written when BSD Unix was running on VAXes.)

Update: I'm wrong in part about how V7 ed works; see the comment from frankg. V7 ed runs in cooked mode but it reads input from the kernel a character at a time, instead of in large blocks.

EdDesignedForCookedInput written at 23:25:56

2019-03-13

Peculiarities about Unix's statfs() or statvfs() API

On modern Unixes, the official interface to get information about a filesystem is statvfs(); it's sufficiently official to be in the Single Unix Specification as seen here. On Illumos it's an actual system call, statvfs(2). On many other Unixes (at least Linux, FreeBSD, and OpenBSD), it's a library API on top of a statfs(2) system call. However you call it and however it's implemented, the underlying API of the information that gets returned is a little bit, well, peculiar, as I mentioned yesterday.

(In reality the API is more showing its age than peculiar, because it dates from the days when filesystems were simpler things.)

The first annoyance is that statfs() doesn't return the number of 'files' (inodes) in use on a filesystem. Instead it returns only the total number of inodes in the filesystem and the number of inodes that are free. On the surface this looks okay, and it probably was back in the mists of time when this was introduced. Then we got more advanced filesystems that didn't have a fixed number of inodes; instead, they'd make as many inodes as you needed, provided that you had the disk space. One example of such a filesystem is ZFS, and since we have ZFS fileservers, I've had a certain amount of experience with the results.

ZFS has to answer statfs()'s demands somehow (well, statvfs(), since it originated on Solaris), so it basically makes up a number for the total inodes. This number is based on the amount of (free) space in your ZFS pool or filesystem, so it has some resemblance to reality, but it is not very meaningful and it's almost always very large. Then you can have ZFS filesystems that are completely full and, well, let me show you what happens there:

cks@sanjuan-fs3:~$ df -i /w/220
Filesystem      Inodes IUsed IFree IUse% Mounted on
<...>/w/220        144   144     0  100% /w/220

I suggest that you not try to graph 'free inodes over time' on a ZFS filesystem that is getting full, because it's going to be an alarming looking graph that contains no useful additional information.
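If you want to poke at these numbers directly, a minimal sketch of reading them with statvfs() looks something like this; the path is the hypothetical full ZFS filesystem from the df output above, and the printing is purely illustrative:

#include <sys/statvfs.h>
#include <stdio.h>

int
main(void)
{
    struct statvfs sv;

    if (statvfs("/w/220", &sv) == -1) {
        perror("statvfs");
        return 1;
    }
    /* There is no 'inodes in use' field; it has to be derived. */
    printf("inodes: total %llu free %llu used %llu\n",
           (unsigned long long)sv.f_files,
           (unsigned long long)sv.f_ffree,
           (unsigned long long)(sv.f_files - sv.f_ffree));
    return 0;
}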

The next piece of fun in the statvfs() API is how free and used disk space is reported. The 'struct statvfs' has, well, let me quote the Single Unix Specification:

f_bsize    File system block size. 
f_frsize   Fundamental file system block size. 

f_blocks   Total number of blocks on file system
           in units of f_frsize. 

f_bfree    Total number of free blocks. 
f_bavail   Number of free blocks available to 
           non-privileged process. 

When I was an innocent person and first writing code that interacted with statvfs(), I said 'surely f_frsize is always going to be something sensible, like 1 Kb or maybe 4 Kb'. Silly me. As you can find out using a program like GNU Coreutils stat(1), the actual 'fundamental filesystem block size' can vary significantly among different sorts of filesystems. In particular, ZFS advertises a 'fundamental block size' of 1 MByte, which means that all space usage information in statvfs() for ZFS filesystems has a 1 MByte granularity.

(On our Linux systems, statvfs() reports regular extN filesystems as having a 4 KB fundamental filesystem block size. On a FreeBSD machine I have access to, statvfs() mostly reports 4 KB but also has some filesystems that report 512 bytes. Don't even ask about the 'filesystem block size', it's all over the map.)

Also, notice that once again we have the issue where the amount of space in use must be reported indirectly, since we only have 'total blocks' and 'available blocks'. This is probably less important for total disk space, because that's less subject to variations than the total amount of inodes possible.
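Continuing the sketch from above, the space figures come out of the same struct statvfs, all in units of f_frsize (whatever the filesystem decided that should be), and once again 'used' has to be derived by subtraction:

    /* all block counts are in units of f_frsize */
    unsigned long long bs    = sv.f_frsize;
    unsigned long long total = sv.f_blocks * bs;
    unsigned long long avail = sv.f_bavail * bs;  /* for unprivileged processes */
    unsigned long long used  = (sv.f_blocks - sv.f_bfree) * bs;

    printf("space: total %llu used %llu available %llu (granularity %llu bytes)\n",
           total, used, avail, bs);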

StatfsPeculiarities written at 23:46:13

2019-03-07

Exploring the mild oddity that Unix pipes are buffered

One of the things that blogging is good for is teaching me that what I think is common knowledge actually isn't. Specifically, when I wrote about a surprisingly arcane little Unix shell pipeline example, I assumed that it was common knowledge that Unix pipes are buffered by the kernel, in addition to any buffering that programs writing to pipes may do. In fact the buffering is somewhat interesting, and in a way it's interesting that pipes are buffered at all.

How much kernel buffering there is varies from Unix to Unix. 4 KB used to be the traditional size (it was the size on V7, for example, per the V7 pipe(2) manpage), but modern Unixes often have much bigger limits, and if I'm reading it right POSIX only requires a minimum of 512 bytes. But this isn't just a simple buffer, because the kernel also guarantees that if you write PIPE_BUF bytes or less to a pipe, your write is atomic and will never be interleaved with other writes from other processes.

(The normal situation on modern Linux is a 64 KB buffer; see the discussion in the Linux pipe(7) manpage. The atomicity of pipe writes goes back to early Unix and is required by POSIX, and I think POSIX also requires that there be an actual kernel buffer if you read the write() specification very carefully.)
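You can see the kernel's buffering in action with a small test program; here the write() returns immediately even though nothing will ever read from the pipe. This is just a sketch to demonstrate the behavior, not production code:

#include <unistd.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    int fds[2];
    const char *msg = "hello, pipe buffer\n";
    ssize_t n;

    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }
    /* No one reads from fds[0], but this small write still returns
       right away because it fits in the kernel's pipe buffer.
       A write much larger than the buffer would block here instead. */
    n = write(fds[1], msg, strlen(msg));
    printf("wrote %zd bytes with no reader\n", n);
    return 0;
}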

On the one hand this kernel buffering and the buffering behavior makes perfect sense and it's definitely useful. On the other hand it's also at least a little bit unusual. Pipes are a unidirectional communication channel and it's pretty common to have unbuffered channels where a writer blocks until there's a reader (Go channels work this way by default, for example). In addition, having pipes buffered in the kernel commits the kernel to providing a certain amount of kernel memory once a pipe is created, even if it's never read from. As long as the read end of the pipe is open, the kernel has to hold on to anything it allowed to be written into the pipe buffer.

(However, if you write() more than PIPE_BUF bytes to a pipe at once, I believe that the kernel is free to pause your process without accepting any data into its internal buffer at all, as opposed to having to copy PIPE_BUF worth of it in. Note that blocking large pipe writes by default is a sensible decision.)

Part of pipes being buffered is likely to be due to how Unix evolved and what early Unix machines looked like. Specifically, V7 and earlier Unixes ran on single processor machines with relatively little memory and without complex and capable MMUs (Unix support for paged virtual memory post-dates V7, and I think wasn't really available on the PDP-11 line anyway). On top of making the implementation simpler, using a kernel buffer and allowing processes to write to it before there is a reader means that a process that only needs to write a small amount of data to a pipe may be able to exit entirely before the next process runs, freeing up system RAM. If writer processes always blocked until someone did a read(), you'd have to keep them around until that happened.

(In fact, a waiting process might use more than 4 KB of kernel memory just for various data structures associated with it. Just from a kernel memory perspective you're better off accepting a small write buffer and letting the process go on to exit.)

PS: This may be a bit of a just-so story. I haven't inspected the V7 kernel scheduler to see if it actually let processes that did a write() into a pipe with a waiting reader go on to potentially exit, or if it immediately suspended them to switch to the reader (or just to another ready to run process, if any).

BufferedPipes written at 22:43:42

2019-03-04

A surprisingly arcane little Unix shell pipeline example

In The output of Linux pipes can be indeterministic (via), Marek Gibney noticed that the following shell command has indeterminate output:

(echo red; echo green 1>&2) | echo blue

This can output any of "blue green" (with a newline between them), "green blue", or "blue"; the usual case is "blue green". Fully explaining this requires surprisingly arcane Unix knowledge.

The "blue green" and "green blue" outputs are simply a scheduling race. The 'echo green' and 'echo blue' are being run in separate processes, and which one of them gets executed first is up to the whims of the Unix scheduler. Because the left side of the pipeline has two things to do instead of one, often it will be the 'echo blue' process that wins the race.

The mysterious case is when the output is "blue" alone, and to explain this we need to know two pieces of Unix arcana. The first is our old friend SIGPIPE, where if a process writes to a closed pipe it normally receives a SIGPIPE signal and dies. The second is that 'echo' is a builtin command in shells today, and so the left side's 'echo red; echo green 1>&2' is actually all being handled by one process instead of the 'echo red' being its own separate process.

We get "blue" as the sole output when the 'echo blue' runs so soon that it exits, closing the pipeline, before the right left side can finish 'echo red'. When this happens the right left side gets a SIGPIPE and exits without running 'echo green' at all. This wouldn't happen if echo wasn't a specially handled builtin; if it was a separate command (or even if the shell forked to execute it internally), only the 'echo red' process would die from the SIGPIPE instead of the entire left side of the pipeline.

So we have three orders of execution:

  1. The shell on the left side gets through both of its echos before the 'echo blue' runs at all. The output is "green blue".

  2. The 'echo red' happens before 'echo blue' exits, so the left side doesn't get SIGPIPE, but 'echo green' happens afterwards. The output is "blue green".

  3. The 'echo blue' runs and exits, closing the pipe, before the 'echo red' finishes. The shell on the left side of the pipeline writes output into a closed pipe, gets SIGPIPE, and exits without going on to do the 'echo green'. The output is "blue".

The second order seems to be the most frequent in practice, although I'm sure it depends on a lot of things (including whether or not you're on an SMP system). One thing that may contribute to this is that I believe many shells start pipelines left to right, ie if you have a pipeline that looks like 'a | b | c | d', the main shell will fork the a process first, then the b process, and so on. All else being equal, this will give a an edge in running before d.

(This entry is adapted from my comment on lobste.rs, because why not.)

ShellPipelineIndeterminate written at 23:55:34

2019-02-17

Why I like middle mouse button paste in xterm so much

In my entry about how touchpads are not mice, I mused that one of the things I should do on my laptop was ensure that I had a keyboard binding for paste, since the middle mouse button is one of the harder multi-finger gestures to land on a touchpad. Kurt Mosiejczuk recently left a comment there where they said:

Shift-Insert is a keyboard equivalent for paste that is in default xterm (at least OpenBSD xterm, and putty on Windows too). I use that most of the time now as it seems less... trigger-happy than right click paste.

This sparked some thoughts, because I can't imagine giving up middle mouse paste if I have a real choice. I had earlier seen shift-insert mentioned in other commentary on my entry and so have tried a bit to use it on my laptop, and it hasn't really felt great even there; on my desktops, it's even less appealing (I tried shift-insert out there to confirm that it did work in my set of wacky X resources).

In thinking about why this is, I came to the obvious realization. I like middle mouse button paste in normal usage because it's so convenient, because almost all of the time my hand is already on the mouse. And the reason my hand is already on the mouse is because I've just used the mouse to shift focus to the window I want to paste into. Even on my laptop, my right hand is usually away from the keyboard as I move the mouse pointer on the touchpad, making shift-Insert at least somewhat awkward.

(The exception that proves the rule for me is dmenu. Dmenu is completely keyboard driven and when I bring it up, Ctrl-Y to paste the current X selection is completely natural.)

I expect that people who use the keyboard to change window focus have a pretty different experience here, whether they're using a fully keyboard driven window manager or simply one where they use Alt-Tab (or the equivalent) to select through windows. My laptop's Cinnamon setup has support for Alt-Tab window switching, so perhaps I should try to use it more. On the other hand, making the text selection I'm copying is generally going to involve the mouse or touchpad, even on my laptop.

(I don't think I want to try keyboard window switching in my desktop fvwm setup for various reasons, including that I think you want to be using some version of 'click to focus' instead of mouse pointer based focus for this to really work out. Having the mouse pointer in the 'wrong' window for your focus policy seems like a recipe for future problems and unpleasant surprises. On top of that, X's handling of scroll wheels means that I often want the mouse pointer to be in the active window just so I can use my mouse's scroll wheel.)

PS: Even if it's possible to use keyboard commands to try to select things in xterm or other terminal emulators, I suspect that I don't want to bother trying it. I rather expect it would feel a lot like moving around and marking things in vi(m), with the added bonus of having to remember an entire different set of keystrokes that wouldn't work in Firefox and other non-terminal contexts.

MouseMovementAndPaste written at 22:46:50

2019-02-12

Using grep with /dev/null, an old Unix trick

Every so often I will find myself writing a grep invocation like this:

find .... -exec grep <something> /dev/null '{}' '+'

The peculiar presence of /dev/null here is an old Unix trick that is designed to force grep to always print out file names, even if your find only matches one file, by always ensuring that grep has at least two files as arguments. You can wind up wanting to do the same thing with a direct use of grep if you're not certain how many files your wildcard may match. For example:

grep <something> /dev/null */*AThing*

This particular trick is functionally obsolete because pretty much all modern mainstream versions of grep support a -H argument to do the same thing (as the inverse of the -h argument that always turns off file names). This is supported in GNU grep and the versions of grep found in FreeBSD, OpenBSD, NetBSD, and Illumos. To my surprise, -H is not in the latest Single Unix Specification grep, so if you care about strict POSIX portability, you still need to use the /dev/null trick.

(I am biased, but I am not sure why you would care about strict POSIX portability here. POSIX-only environments are increasingly perverse in practice (arguably they always were).)

If you stick to POSIX grep you also get to live without -h. My usual solution to that was cat:

cat <whatever> | grep <something>

This is not quite a pointless use of cat, but it is an irritating one.

For whatever reason I remember -h better than I do -H, so I still use the /dev/null trick every so often out of reflex. I may know that grep has a command line flag to do what I want, but it's easier to throw in a /dev/null than to pause to reread the manpage if I've once again forgotten the exact option.

GrepDevNull written at 23:40:09

2019-02-02

A little appreciation for Vim's 'g' command

Although I've used vim for what is now a long time, I'm still somewhat of a lightweight user and there are vast oceans of useful vim (and vi) commands that I either don't know at all or don't really remember and use only rarely. A while back I wrote down some new motion commands that I wanted to remember, and now I have a new command that I want to remember and use more of. That is vim's g command (and with it, its less common cousin, v), or if you prefer, ':g'.

Put simply, g (and v) are filters; you apply them to a range of lines, and for lines that they match (or don't match), they then run whatever additional commands you want. For instance, my recent use of g was that I had a file that listed a bunch of checks to do to a bunch of machines, one per line, and I wanted to comment out all lines that referred to a test machine. With g, this is straightforward:

:g/cksvm3/ s/^/#/

(There's a whole list of additional things and tricks you can do with g here.)

Since I just tested this, it's valid to stack g and v commands together, so you can comment out all mentions of a machine except for one check with:

:g/cksvm3/ v/staying/ s/^/#/

This works because the commands run by g and v are basically passed the matching line numbers, so the v command is restricted to checking the line(s) that g matched.

There are probably clever uses of g and v in programming and in writing text, but I expect to mostly use them when editing configuration files, since configuration files are things where lines are often important in isolation instead of as part of a block.

Vim (and vi before it) inherited g and v from ed, where they appear even in V7 ed. However, at least vim has expanded them from V7 ed, because in V7 ed you can't stack g and v commands (a limitation which was carried forward to 4.x BSD's ed).

(Amusingly, what prompted me about the existence of g and v in Vim was writing my entry on the differences between various versions of ed. Since they were in ed, I was pretty sure they were also in Vim, and then recently I had a use for g and actually remembered it.)

VimGCommandPraise written at 18:59:19

2019-01-07

Daemons and the pragmatics of unexpected error values from system calls

When I wrote about the danger of being overly specific in the errno values you look for years ago, I used the example of a SMTP server daemon that died when it got an unexpected error from accept(). Recently, John Wiersba asked in a comment:

I'm not clear what you're suggesting here. Isn't logging the error code and aborting the right thing to do with unexpected errors? [...]

In practice, there are two situations in Unix programs, especially in daemons. The first situation is where a system call is more or less done once, is not expected to fail at all, and cannot really be fixed if it does fail. Here you generally want to fail out on any error. The second situation is where the system call may fail for transient reasons. One case is certainly accept(), since accept() is trying to return two sorts of errors, but there are plenty of other cases where a system call may fail temporarily and then work later (as dozzie mentioned in comments to yesterday's entry on accept()).

In the second situation, you cannot tell transient errors from persistent ones, not in general, because Unixes add both transient and persistent errno values to system calls over time. In a program run by hand you can often punt; you assume that all errno values you don't specifically recognize mean persistent errors, exit on them, and leave it up to the user to run you again and hope that this time around it will work. In a daemon you don't have this luxury, so the pragmatic question is whether it's more likely that your daemon has hit a new transient errno value or a new persistent one.

My view is that in most environments, the more likely, better, and safer answer for a daemon is that the unrecognized new errno value is a transient error. You already know that transient errors are possible for this system call and you're handling some of them, and you know that over sufficiently large amounts of time your list of transient errno values will be incomplete. Often you don't really expect the system call to ever fail with a persistent error, because your program is not supposed to do things like close the wrong file descriptor. In the unlikely event that you hit an unrecognized persistent error and keep retrying futilely, you'll burn extra CPU and perhaps spam logs. If you exit instead, in the much more likely event that you hit an unrecognized transient error, you'll take down the daemon (as happened for our SMTP server).

(If you do expect a certain amount of persistent errors even in the normal operation of your daemon, you may want a different answer.)
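In code, this policy looks something like the sketch below, using an accept() loop as the example. The short list of 'this is our own bug' errno values and the one second pause are my own illustrative choices rather than anything standard, and handle_connection() is a hypothetical stand-in for the daemon's real work:

#include <sys/socket.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Accept connections forever, exiting only on errors that almost
   certainly mean our own state is broken; anything else, including
   errno values we have never heard of, is treated as transient. */
void
accept_loop(int listenfd)
{
    for (;;) {
        int fd = accept(listenfd, NULL, NULL);

        if (fd >= 0) {
            /* handle_connection(fd); */
            close(fd);
            continue;
        }
        switch (errno) {
        case EBADF:
        case ENOTSOCK:
        case EINVAL:
            perror("accept: fatal");
            exit(1);
        default:
            perror("accept: transient, retrying");
            sleep(1);   /* don't spin if it turns out to be persistent */
        }
    }
}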

PS: Even for non-daemon programs, 'exit and let the user try again' is not necessarily the best or the most usable answer. As a hypothetical example, if your program first tries to make an IPv6 connection and then falls back to trying an IPv4 one if it gets one of a limited set of errnos, a new or just unexpected 'this IPv6 connection will never work' errno will probably make your program unusable.

(For instance, you might be running on one of the uncommon Linux machines that has IPv6 dual binding turned off, giving you some new errno values you hadn't seen before.)

DaemonsAndUnexpectedErrors written at 21:21:30

2019-01-06

accept(2)'s problem of trying to return two different sorts of errors

A long time ago, I wrote about the dangers of being overly specific in the errno values you looked for, with the specific case being a daemon that exited because an accept() system call got an ECONNRESET that it didn't expect. Recently, John Wiersba left a comment on that entry asking what else the original programmer should have done, given an unexpected error from accept(). In thinking about the issues, I realized that part of the problem is that accept() is actually returning two different sorts of errors and the Unix API doesn't provide it any good way to let people tell the two different sorts apart.

(These days accept() is standardized to return ECONNABORTED instead of ECONNRESET in these circumstances, although this may not be universal.)

The two sorts of errors that accept() is trying to return are errors in the accept() call, such as a bad file descriptor (EBADF, ENOTSOCK) or a bad parameter (EFAULT), and errors in the new connection that accept() may or may not be returning (EAGAIN, ECONNABORTED, etc). One of the differences between the two is that the first sort of errors are probably permanent unless fixed by the program somehow and generally indicate an internal program error, while the second sort of errors will go away if you correctly loop through your accept() sequence again.

A sensibly behaving network daemon should definitely not exit when it gets the second sort of error; it should instead just continue on with its processing loop. However, it's perfectly sensible and probably broadly correct to exit if you get the first sort of error, especially if it's an unknown error and you have no idea how to correct it in your code. If someone has closed a file descriptor on you or it's become a non-socket somehow, continuing will generally just get you an un-ending stream of the same error over and over (and burn CPU, and perhaps flood logs). Exiting is a perfectly sensible way out and often really the only thing you can do.

However, you can't reliably distinguish between these two types of errors unless you believe you can know all of the possible errnos for one or the other of them. Given the general habit of Unixes of adding more errno returns for system calls over time, the practical reality is that you can't. This unfortunately leaves authors of Unix network daemons sort of up in the air; they have to pick one way or the other, and either way might give the wrong answer in some circumstances.

(Perhaps accept() should never have returned the second sort of errors, leaving them all to be discovered on a subsequent use of the file descriptor it returned. But that ship sailed a very long time ago; accept() returning these sorts of errors is even in the Single UNIX Specification for accept().)

I suspect that accept() is not the only system call with this sort of split in types of errors (although I can't think of any others off the top of my head). But thankfully I don't think there are too many others, because accept()'s pattern of operation is an unusual one.

PS: The Linux accept() manpage actually has a warning about Linux's behavior here, in the RETURN VALUE section. Linux opts to immediately return a lot of errors detected on the new socket, while other Unixes generally postpone some of them. But note that any Unix can return ECONNABORTED.

AcceptErrnoProblem written at 23:11:46
