Wandering Thoughts

2019-06-26

A hazard of our old version of OmniOS: sometimes powering off doesn't

Two weeks ago, I powered down all of our OmniOS fileservers that are now out of production, which is most of them. By that, I mean that I logged in to each of them via SSH and ran 'poweroff'. The machines disappeared from the network and I thought nothing more of it.

This Sunday morning we had a brief power failure. In the aftermath of the power failure, three out of four of the OmniOS fileservers reappeared on the network, which we knew mostly because they sent us some email (there were no bad effects of them coming back). When I noticed them back, I assumed that this had happened because we'd set their BIOSes to 'always power on after a power failure'. This is not too crazy a setting for a production server you want up at all costs because it's a central fileserver, but it's obviously no longer the setting you want once they go out of production.

Today, I logged in to the three that had come back, ran 'poweroff' on them again, and then later went down to the machine room to pull out their power cords. To my surprise, when I looked at the physical machines, they had little green power lights that claimed they were powered on. When I plugged in a roving display and keyboard to check their state, I discovered that all three were still powered on and sitting displaying an OmniOS console message to the effect that they were powering off. Well, they might have been trying to power off, but they weren't achieving it.

I rather suspect that this is what happened two weeks ago, and why these machines all sprang back to life after the power failure. If OmniOS never actually powered the machines off, even a BIOS setting of 'resume last power state after a power failure' would have powered the machines on again, which would have booted OmniOS back up again. Two weeks ago, I didn't go look at the physical servers or check their power state through their lights out management interface; it never occurred to me that 'poweroff' on OmniOS sometimes might not actually power the machine off, especially when the machines did drop off the network.

(One out of the four OmniOS servers didn't spring back to life after the power failure, and was powered off when I looked at the hardware. Perhaps its BIOS was set very differently, or perhaps OmniOS managed to actually power it off. They're all the same hardware and the same OmniOS version, but the server that probably managed to power off had no active ZFS pools on our iSCSI backends; the other three did.)

At this point, this is only a curiosity. If all goes well, the last OmniOS fileserver will go out of production tomorrow evening. It's being turned off as part of that, which means that I'm going to have to check that it actually powered off (and I'd better add that to the checklist I've written up).
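
As a sketch of what such a check could look like, here is a minimal Python wrapper around ipmitool's chassis power status query; the BMC hostname, user, and password file below are hypothetical placeholders.

  #!/usr/bin/env python3
  # Sketch: ask a server's lights out management (IPMI BMC) whether the chassis
  # is really powered off. Assumes ipmitool is installed; the BMC hostname, user,
  # and password file are hypothetical placeholders.
  import subprocess
  import sys

  def chassis_power_status(bmc_host, user, password_file):
      # 'ipmitool chassis power status' prints e.g. "Chassis Power is on"
      out = subprocess.run(
          ["ipmitool", "-I", "lanplus", "-H", bmc_host, "-U", user,
           "-f", password_file, "chassis", "power", "status"],
          capture_output=True, text=True, check=True)
      return out.stdout.strip()

  if __name__ == "__main__":
      status = chassis_power_status("fs1-ipmi.example.com", "admin", "/etc/ipmi-pass")
      print(status)
      sys.exit(0 if "off" in status.lower() else 1)   # non-zero if it's still on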

solaris/HazardOfNoPoweroff written at 01:01:17; Add Comment

2019-06-24

The convenience (for me) of people writing commands in Python

The other day I was exploring Certbot, which is more or less the standard and 'as official as it ever gets' client for Let's Encrypt, and it did something that I objected to. Certbot is a very big program with a great many commands, modes, options, settings, and so on, and this was the kind of thing where I wasn't completely confident there even was a way to disable it. However, sometimes I'm a system programmer and the particular thing had printed a distinctive message. So, off to the source code I went with grep (okay, ripgrep), to find the message string and work backward from there.

Conveniently, Certbot is written in Python, which has two advantages here. The first advantage is that I actually know Python, which makes it easier to follow any logic I need to follow. The second is that Python programs intrinsically come with their source code, just as the standard library does. Certbot is open source and I was installing Ubuntu's official package for it, which gave me at least two ways of getting the source code, but there's nothing like not even having to go to the effort.
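
As a small illustration, Python itself will tell you where an installed package's source lives, which is all you need to point grep (or ripgrep) at it; this assumes the certbot package is importable on the machine.

  #!/usr/bin/env python3
  # Sketch: locate an installed Python package's source so you can grep it.
  # Assumes the package (here certbot) is installed on this machine.
  import importlib
  import inspect
  import os

  mod = importlib.import_module("certbot")
  srcfile = inspect.getsourcefile(mod)
  print(srcfile)                     # e.g. .../site-packages/certbot/__init__.py
  print(os.path.dirname(srcfile))    # the directory to point grep/ripgrep at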

I was going to say that this is intrinsic in Python being an interpreted language, but that's not necessarily the case; instead, it's that way for both cultural and technical reasons. Java is generally interpreted by the JVM, but that's Java bytecode, not Java source code. Javascript is interpreted from its source code, but people who ship Javascript often ship it in minimized and compacted form for performance reasons, and in practice this makes it unreadable (at least for my purposes).

(And then there's WebAssembly.)

Another cultural aspect of this is that a lot of commands written in Python are written in relatively straightforward ways that are easy to follow; you can usually grep through the code for what function something is in, then what calls that function, and so on and so forth. This is not a given and it's quite possible to create hard to follow tangles of magic (I've sort of done this in the past) or a tower of classes inside classes that are called through hard to follow patterns of delegation, object instantiation, and so on. But it's at least unusual, especially in relatively straightforward commands and in code bases that aren't too large.

(Things that are part of a framework may be a different story.)

PS: Certbot is on the edge of 'large' here, but for what I was looking for it was still functions calling functions.

PPS: That installing a Python thing gives you a bunch of .py files on your filesystem is not a completely sure thing. I believe that there are Python package and module distribution formats that don't unpack the .py files but leave them all bundled up, although the current Wheel format is apparently purely for distribution, not running in place. I am out of touch with the state of Python package distribution, so I don't know how this goes if you install things yourself.

python/SourceCodeIncluded written at 21:46:25; Add Comment

2019-06-23

What it takes to run a 32-bit x86 program on a 64-bit x86 Linux system

Suppose that you have a modern 64-bit x86 Linux system (often called an x86_64 environment) and that you want to run an old 32-bit x86 program on it (a plain x86 program). What does this require from the overall system, both the kernel and the rest of the environment?

(I am restricting this to ELF programs, not very old a.out ones.)

At a minimum, this requires that the (64-bit) kernel support programs running in 32-bit mode ('IA32') and making 32-bit kernel calls. Supporting this is a configuration option in the kernel (or actually a whole collection of them, but they mostly depend on one called IA32_EMULATION). Supporting 32-bit calls on a 64-bit kernel is not entirely easy because many kernel calls involve structures; those structures must be translated back and forth between the kernel's native 64-bit version and the emulated 32-bit version. This can raise questions of how to handle native values that exceed what can fit in the fields of the 32-bit structures. The kernel also has a barrel full of fun in the form of ioctl(), which smuggles a lot of structures in and out of the kernel in relatively opaque ways. A 64-bit kernel does want to support at least some 32-bit ioctls, such as the ones that deal with (pseudo-)terminals.

(I suspect that there are people in the Linux kernel community who hope that all of this emulation and compatibility code can someday be removed.)
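
If you want to check whether a given kernel was built with this support, the configuration is usually visible from userspace. Here is a minimal sketch, assuming the config is exposed as /proc/config.gz (CONFIG_IKCONFIG_PROC) or as a /boot/config-<release> file, which is a distribution convention rather than a guarantee:

  #!/usr/bin/env python3
  # Sketch: check whether the running kernel was built with CONFIG_IA32_EMULATION.
  # Assumes the config is visible as /proc/config.gz or /boot/config-<release>.
  import gzip
  import os

  def kernel_config():
      if os.path.exists("/proc/config.gz"):
          with gzip.open("/proc/config.gz", "rt") as f:
              return f.read()
      with open(f"/boot/config-{os.uname().release}") as f:
          return f.read()

  enabled = any(line.strip() == "CONFIG_IA32_EMULATION=y"
                for line in kernel_config().splitlines())
  print("32-bit (IA32) program support:", "yes" if enabled else "no")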

A modern kernel dealing with modern 32-bit programs also needs to provide a 32-bit vDSO, and the necessary information to let the program find it. This requires the kernel to carry around a 32-bit ELF image, which has to be generated somehow (at some point). The vDSO is mapped into the memory space of even statically compiled 32-bit programs, although they may or may not use it.

(In ldd output on dynamically linked 32-bit programs, I believe this often shows up as a magic 'linux-gate.so.1'.)

This is enough for statically compiled programs, but of course very few programs are statically compiled. Instead, almost all 32-bit programs that you're likely to encounter are dynamically linked and so require a collection of additional compiled things. Running a dynamically linked program requires at least a 32-bit version of its dynamic linker (the 'ELF interpreter'), which is usually 'ld-linux.so.2'. Generally the 32-bit program will then go on to require additional 32-bit shared libraries, starting with the 32-bit C library ('libc.so.6' and 'libdl.so.2' for glibc) and expanding from there. The basic shared libraries usually come from glibc, but you can easily need additional ones from other packages for things like curses or the collection of X11 shared libraries. C++ programs will need libstdc++, which comes from GCC instead of glibc.

(The basic dynamic linker, ld-linux.so.2, is also from glibc.)
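
You can see the 32-bit versus 64-bit split directly in a binary's ELF header; here is a rough sketch of decoding it in Python ('readelf -l' and 'readelf -d' will then show you the requested interpreter and the NEEDED shared libraries):

  #!/usr/bin/env python3
  # Sketch: read an ELF binary's header to see whether it's a 32-bit x86 program.
  # For what it then needs at runtime, 'readelf -l prog' shows the requested
  # interpreter (e.g. /lib/ld-linux.so.2) and 'readelf -d prog' the NEEDED libraries.
  import struct
  import sys

  EM_386, EM_X86_64 = 3, 62

  def describe_elf(path):
      with open(path, "rb") as f:
          ident = f.read(16)                      # e_ident
          if ident[:4] != b"\x7fELF":
              raise ValueError(f"{path} is not an ELF file")
          bits = {1: "32-bit", 2: "64-bit"}.get(ident[4], "unknown width")
          # e_type and e_machine follow e_ident; x86 ELF files are little-endian.
          _e_type, e_machine = struct.unpack("<HH", f.read(4))
          arch = {EM_386: "x86", EM_X86_64: "x86-64"}.get(e_machine, f"machine {e_machine}")
          return f"{bits} {arch}"

  if __name__ == "__main__":
      print(describe_elf(sys.argv[1]))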

In order to do things like hostname lookups correctly, a 32-bit program will also need 32-bit versions of any NSS modules that are used in your /etc/nsswitch.conf, since all of these are shared libraries that are loaded dynamically by glibc. Some of these modules come from glibc itself, but others are provided by a variety of additional software packages. I'm not certain what happens to your program's name lookups if a relevant NSS module is not available, but at a minimum you won't be able to correctly resolve names that come from that module.

(You may not get any result for the name, or you might get an incorrect or incomplete result if another configured NSS module also has an answer for you. Multiple NSS modules are common for things like hostname resolution.)
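
As a sketch of auditing this, you can pull the module names out of /etc/nsswitch.conf and look for 32-bit libnss_* libraries. The candidate directories below are assumptions, since where 32-bit libraries live varies by distribution:

  #!/usr/bin/env python3
  # Sketch: list the NSS modules named in /etc/nsswitch.conf and check whether a
  # 32-bit libnss_<module>.so is present. The candidate directories are assumptions;
  # 32-bit library locations vary by distribution (e.g. /lib/i386-linux-gnu on
  # Debian/Ubuntu, /usr/lib on Fedora).
  import glob
  import re

  CANDIDATE_32BIT_DIRS = ["/lib/i386-linux-gnu", "/usr/lib/i386-linux-gnu", "/usr/lib"]

  modules = set()
  with open("/etc/nsswitch.conf") as f:
      for line in f:
          line = line.split("#", 1)[0]
          if ":" not in line:
              continue
          _, sources = line.split(":", 1)
          # Plain source names become libnss_<name>.so.2; skip [NOTFOUND=return] style actions.
          modules.update(tok for tok in sources.split() if re.fullmatch(r"[a-z0-9_]+", tok))

  for mod in sorted(modules):
      hits = [p for d in CANDIDATE_32BIT_DIRS for p in glob.glob(f"{d}/libnss_{mod}.so*")]
      print(f"{mod}: {hits[0] if hits else 'no 32-bit NSS module found'}")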

I believe that generally all of these 32-bit shared libraries will have to be built with a 32-bit compiler toolchain in an environment that itself looks and behaves as 32-bit as possible. Building 32-bit binaries from a 64-bit environment is theoretically possible and nominally simple, but in practice there have been problems, and on top of that many build systems don't support this sort of cross-building.

(Of course, many people and distributions already have a collection of 32-bit shared libraries that have been built. But if they need to be rebuilt or updated for some reason, this might be an issue. And of course the relevant shared library needs to (still) support being built as 32-bit instead of 64-bit, as does the compiler toolchain.)

linux/32BitProgramOn64BitSystem written at 22:19:30; Add Comment

2019-06-22

Google Groups entirely ignores SMTP time rejections

I have a very old mail address that I have made into a complete spamtrap through my sinkhole SMTP server. Simplifying only slightly, my SMTP server rejects all email to this address no later than the end of the message transfer (at the message termination after SMTP DATA). When it rejects email this late, I capture the full message.

On September 5th of last year (2018), this spamtrap email address rejected a message from Google Groups informing it that it had been added to a spam mailing list:

Subject: You have been added to Equity Buyers Network
From: Equity Buyers Network <equitybuyersgroup+noreply@googlegroups.com>

Google Groups ignored this rejection and began sending email messages from the group/mailing list to my spamtrap address. Each of these messages was rejected at SMTP time, and each of them contained a unique MAIL FROM address (a VERP), which good mailing list software uses to notice delivery failures and unsubscribe addresses. Google Groups is, of course, not good mailing list software, since it entirely ignored the rejections. I expect that this increases the metrics of things like 'subscribers to Google Groups' and 'number of active Google Groups' and others that the department responsible for Google Groups is rewarded for. Such is the toxic nature of rewarding and requiring 'engagement', especially without any care for the details.
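
For illustration, the VERP scheme mentioned above works roughly like this; this is the general idea, not necessarily Google Groups' exact encoding, and all the addresses are made up:

  # Sketch of the general VERP idea: the envelope sender of each outgoing
  # message encodes the recipient, so any bounce (or SMTP rejection) comes
  # back to an address that identifies exactly who failed.
  def verp_encode(list_local: str, list_domain: str, recipient: str) -> str:
      local, _, domain = recipient.partition("@")
      return f"{list_local}+{local}={domain}@{list_domain}"

  def verp_decode(bounce_rcpt: str) -> str:
      local_part = bounce_rcpt.split("@", 1)[0]
      _, _, encoded = local_part.partition("+")
      user, _, domain = encoded.rpartition("=")
      return f"{user}@{domain}"

  sender = verp_encode("somelist-bounces", "lists.example.com", "spamtrap@example.org")
  print(sender)               # somelist-bounces+spamtrap=example.org@lists.example.com
  print(verp_decode(sender))  # spamtrap@example.org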

Since September 5th of 2018 up until yesterday, the spamtrap address has rejected ten email messages. These came September 10th, September 21st, November 20th and 23rd, December 22nd, 24th, and 31st (the last of which originally prompted this entry), February 4th, and now June 12th and 21st. Many of them were stock touting spam (or in general asset touting spam); some were touting the 'message broadcasting' services of the company that established the mailing list. A couple were both at once, and if you guessed that this involves cryptocurrency (or at least something that sounds like it), you would be correct.

Due to writing this entry and thinking about the issue, I've changed my spamtrap system so that it will no longer accept any email from Google Groups and will reject such email attempts immediately as 'uninteresting spam' that is not even specifically logged. I'm not interested in being part of Google's outsourced abuse department, and this ensures that I don't have to think about this any more.

(At one point I might have been interested in just what spammers set up on Google Groups, but I no longer am. That Google Groups is a de facto spam service is no longer news and I am not interested in the specifics, any more than I am with, say, Yahoo Groups.)

Sidebar: Who the responsible party is

The claimed information is a cluster of corporate names and two listed addresses. I will let you consult search engines for their websites, since I have no desire to contribute anything even approaching links. They are:

Questrust Ventures Inc
Equity Buyers
Iastra Broadcasting

Equity Buyers Group
1875 Avenue of the Stars Ste 2115
Century City CA 90067
310-894-9854

iAstra Canada Richmond BC, Canada, v7y-3j5

That this organization claims a Canadian address (which may or may not be real) means that they fall, at least in theory, within the reach of Canada's anti-spam laws, which they are pretty clearly acting in violation of. Of course the odds that they will ever be held to account for that are probably low.

spam/GoogleGroupsIgnoresRejection written at 17:24:48; Add Comment

We get a certain amount of SMTP MAIL FROM's in UTF-8 with odd characters

On Twitter, I said:

There sure are a surprising number of places that are trying to send us SMTP MAIL with a MAIL FROM that contains the Unicode character U+FEFF (either 'zero width no-break space' or a byte order mark, apparently, although it's never at the start of the address).

I was looking at the logs on our external mail gateway machine because we use Exim and I was interested to see if we had been poked by anyone trying to exploit CVE-2019-10149. I didn't find anyone trying, but I did turn up these SMTP MAIL FROMs with U+FEFF.

A typical example from today is:

H=(luxuryclass.it) [177.85.200.254] rejected MAIL <Antonio<U+FEFF>Smith@luxuryclass.it>

(The '<U+FEFF>' bit is me cutting and pasting from less; less shows Unicode codepoints this way.)

These and similar hijinks have been going on for some time. We have logs going back more than a year, and the earliest hit I can casually turn up is in late May of 2018:

H=(03216a51.newslatest.bid) [217.13.107.153] rejected MAIL <NaturalHairCare<U+200B>@newslatest.bid>

(U+200B is a zero width space, so this feels like something similar to the use of U+FEFF.)

In October of 2018, we saw a few uses of U+200E 'left to right mark':

H=(0008ceef.livetofrez.us) [172.245.45.182] rejected MAIL <TinnitusRelief<U+200E>@livetofrez.us>

Then at the start of November of 2018 we started seeing U+FEFF, which has taken over as the Unicode codepoint of choice to (ab)use:

H=(office365zakelijk.nl) [190.108.194.226] rejected MAIL <Howard<U+FEFF>Smith@office365zakelijk.nl>

We have seen a flood of these since then; they're pervasive in our logs based purely on looking at things in less (someday I will work out how to grep for Unicode codepoints by codepoint value, but that day is not today).
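
One way to do this kind of search, as a sketch that assumes the logs are UTF-8 (the log file path is just a placeholder):

  #!/usr/bin/env python3
  # Sketch: search logs for specific Unicode codepoints by value.
  # Assumes UTF-8 logs; the log path here is a placeholder.
  import re

  SUSPECT = re.compile("[\u200b\u200e\ufeff]")   # ZWSP, LRM, ZWNBSP/BOM

  with open("/var/log/exim4/rejectlog", encoding="utf-8", errors="replace") as f:
      for line in f:
          if SUSPECT.search(line):
              # Print with the invisible characters made visible, less-style.
              print(SUSPECT.sub(lambda m: f"<U+{ord(m.group()):04X}>", line), end="")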

On a quick check, the most recent ones come from IP addresses that are listed in the SBL CSS, as well as any number of other DNS blocklists. I don't really care, since as long as they're helpful enough to put UTF-8 bytes into their MAIL FROM, we'll reject all of their email.

PS: I checked the raw bytes of some of the U+FEFF MAIL FROMs, and they really have the byte sequence 0xEF 0xBB 0xBF that is a true UTF-8 encoded U+FEFF. I'm relatively confident that Exim isn't doing any character mangling on the way through, either, so we're almost certainly seeing what was really on the wire.

spam/SMTPMailFromWithUTF8 written at 00:43:14; Add Comment

2019-06-21

One of the things a metrics system does is handle state for you

Over on Mastodon, I said:

Belated obvious realization: using a metrics system for alerts instead of hand-rolled checks means that you can outsource handling state to your metrics systems and everything else can be stateless. Want to only alert after a condition has been true for an hour? Your 'check the condition' script doesn't have to worry about that; you can leave it to the metrics system.

This sounds abstract, so let me make it concrete. We have some self-serve registration portals that work on configuration files that are automatically checked into RCS every time the self-serve systems do something. As a safety measure, the automated system refuses to do anything if the file is either locked or has uncommitted changes; if it touched the file anyway, it might collide with whatever else is being done to it. These files can also be hand-edited, for example to remove an entry, and when we do this we don't always remember that we have to commit the file.

(Or we may be distracted, because we are trying to work fast to lock a compromised account as soon as possible.)

Recently, I was planning out how to detect this situation and send out alerts for it. Given that we have a Prometheus based metrics and alerting system, one approach is to have a hand rolled script that generates an 'all is good' or 'we have problems' metric, feed that into Prometheus, let Prometheus grind it through all of the gears of alert rules and so on, and wind up with Alertmanager sending us email. But this seems like a lot of extra work just to send email, and it requires a new alert rule, and so on. Using Prometheus also constrains what additional information we can put in the alert email, because we have to squeeze it all through the narrow channel of Prometheus metrics, the information that an alert rule has readily available, and so on. At first blush, it seemed simpler to just have the hand rolled checking script send the email itself, which would also let the email message be completely specific and informative.

But then I started thinking about that in more detail. We don't want the script to be hair trigger, because it might run while we were in the middle of editing things (or the automated system was making a change); we need to wait a bit to make sure the problem is real. We also don't want to send repeat emails all the time, because it's not that critical (the self-serve registration portals aren't used very frequently). Handling all of this requires state, and that means something has to handle that state. You can handle state in scripts, but it gets complicated. The more I thought about it, the more attractive it was to let Prometheus handle all of that; it already has good mechanisms for 'only trigger an alert if it's been true for X amount of time' and 'only send email every so often' and so on, and it's worried about more corner cases than I have.

The great advantage of feeding 'we have a problem/we have no problem' indications into the grinding maw of Prometheus merely to have it eventually send us alert email is that the metrics system will handle state for us. The extra custom things that we need to write, our highly specific checks and so on, are spared from worrying about all of those issues, which makes them simpler and more straightforward. To use jargon, the metrics system has enabled a separation of concerns.
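
To make that concrete, here is a sketch of how simple the stateless check itself can then be; it writes a 0/1 metric for the node_exporter textfile collector, and the config file path, textfile directory, and metric name are all hypothetical.

  #!/usr/bin/env python3
  # Sketch: emit a 0/1 'we have a problem' metric via the node_exporter
  # textfile collector and let Prometheus handle all of the state.
  # The paths and the metric name are hypothetical.
  import os
  import subprocess

  CONFIG = "/var/lib/portal/users.conf"
  TEXTFILE = "/var/lib/node_exporter/textfile/portal_config.prom"

  def has_problem(path):
      # rcsdiff exits 0 only when the working file matches the last checked-in
      # revision; with -L, rlog prints nothing unless some revision is locked.
      uncommitted = subprocess.run(["rcsdiff", "-q", path],
                                   capture_output=True).returncode != 0
      locked = subprocess.run(["rlog", "-L", "-h", path],
                              capture_output=True).stdout.strip() != b""
      return uncommitted or locked

  value = 1 if has_problem(CONFIG) else 0
  tmp = TEXTFILE + ".tmp"
  with open(tmp, "w") as f:
      f.write(f'portal_config_dirty{{file="{CONFIG}"}} {value}\n')
  os.rename(tmp, TEXTFILE)   # atomic replace so node_exporter never reads a partial file

The matching alert rule then only needs an expression on that metric plus a 'for:' delay, and Alertmanager's repeat interval takes care of not nagging us constantly.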

PS: This isn't specific to Prometheus. Any metrics and alerting system has robust general features to handle most or even all of these issues. And Prometheus itself is not perfect; for example, it's awkward at best to set up alerts that trigger only between certain times of the day or on certain days of the week.

sysadmin/MetricsSystemHandlesState written at 00:21:02; Add Comment

2019-06-19

How Bash decides it's being invoked through sshd and sources your .bashrc

Under normal circumstances, Bash only sources your .bashrc when it's run as an interactive non-login shell; for example, this is what the Bash manual says about startup files. Well, it is most of what the manual says, because there is an important exception, which the Bash manual describes as 'Invoked by remote shell daemon':

Bash attempts to determine when it is being run with its standard input connected to a network connection, as when executed by the remote shell daemon, usually rshd, or the secure shell daemon sshd. If Bash determines it is being run in this fashion, it reads and executes commands from ~/.bashrc, [...]

(You can tell how old this paragraph of the manual is because of how much prominence it gives to rshd. Also, note that this specific phrasing about standard input presages my discovery of when bash doesn't do this.)

As the result of recent events, I became interested in discovering exactly how Bash decides that it's being run in the form of 'ssh host command' and sources your .bashrc. There turn out to be two parts to this answer, but the summary is that if this is enabled at all, Bash will always source your .bashrc for non-interactive commands if you've logged in to a machine via SSH.

First, this feature may not even be enabled in your version of Bash, because it's a non-default configuration setting (and has been since Bash 2.05a, which is pretty old). Debian and thus Ubuntu turn this feature on, as does Fedora, but the FreeBSD machine I have access to doesn't in the version of Bash that's in its ports. Unsurprisingly, OmniOS doesn't seem to either. If you compile Bash yourself without manually changing the relevant bit of config-top.h, you'll get a version without this.

(Based on some digging, I think that Arch Linux also builds Bash without enabling this, since they don't seem to patch config-top.h. I will leave it to energetic people to check other Linuxes and other *BSDs.)

Second, how it works is actually very simple. In practice, a non-interactive Bash decides that it is being invoked by SSHD if either $SSH_CLIENT or $SSH2_CLIENT are defined in the environment. In a robotic sense this is perfectly correct, since OpenSSH's sshd puts $SSH_CLIENT in the environment when you do 'ssh host command'. In practice it is wrong, because OpenSSH sets $SSH_CLIENT all the time, including for logins. So if you use SSH to log in somewhere, $SSH_CLIENT will be set in your shell environment, and then any non-interactive Bash will decide that it should source ~/.bashrc. This includes, for example, the Bash that is run (as 'bash -c ...') to execute commands when you have a Makefile that has explicitly set 'SHELL=/bin/bash', as Makefiles that are created by the GNU autoconfigure system tend to do.

As a result, if you have ancient historical things in a .bashrc, for example clearing the screen on exit, then surprise, those things will happen for every command that make runs. This may not make you happy. For situations like Makefiles that explicitly set 'SHELL=/bin/bash', this can happen even if you don't use Bash as your login shell and haven't had anything to do with it for years.

(Of course it also happens if you have perfectly modern things there and expect that they won't get invoked for non-interactive shells, and you do use Bash as your login shell. But if you use Bash as your login shell, you're more likely to notice this issue, because routine ordinary activities like 'ssh host command' or 'rsync host:/something .' are more likely to fail, or at least do additional odd things.)

PS: This October 2001 comment in variables.c sort of suggests why support for this feature is now an opt-in thing.

PPS: If you want to see if your version of Bash has this enabled, the simple way to tell is to run strings on the binary and see if the embedded strings include 'SSH_CLIENT'. Eg:

; cat /etc/fedora-release
Fedora release 29 (Twenty Nine)
; strings -a /usr/bin/bash | fgrep SSH_CLIENT
SSH_CLIENT

So the Fedora 29 version does have this somewhat dubious feature enabled. Perhaps Debian and Fedora feel stuck with it due to long-standing backwards compatibility concerns, where people would be upset if Bash stopped doing this in some new Debian or Fedora release.

Sidebar: The actual code involved

The code for this can currently be found in run_startup_files in shell.c:

  /* get the rshd/sshd case out of the way first. */
  if (interactive_shell == 0 && no_rc == 0 && login_shell == 0 &&
      act_like_sh == 0 && command_execution_string)
    {
#ifdef SSH_SOURCE_BASHRC
      run_by_ssh = (find_variable ("SSH_CLIENT") != (SHELL_VAR *)0) ||
                   (find_variable ("SSH2_CLIENT") != (SHELL_VAR *)0);
#else
      run_by_ssh = 0;
#endif

[...]

Here we can see that the current Bash source code is entirely aware that no one uses rshd any more, among other things.

unix/BashDetectRemoteInvocation written at 22:50:11; Add Comment

A Let's Encrypt client feature I always want for easy standard deployment

On Twitter, I said:

It bums me out that Certbot (the 'official' Let's Encrypt client) does not have a built-in option to combine trying a standalone HTTP server with a webroot if the standalone HTTP server can't start.

(As far as I can see.)

For Let's Encrypt authentication, 'try a standalone server, then fall back to webroot' lets you create a single setup that works in a huge number of cases, including on initial installs before Apache/etc has its certificates and is running.

In straightforward setups, the easiest way to prove your control of a domain to Let's Encrypt is generally their HTTP authentication method, which requires the host (or something standing in for it) to serve a file under a specific URL. To do this, you need a suitably configured web server.

Like most Let's Encrypt clients, Certbot supports both putting the magic files for Let's Encrypt in a directory of your choice (which is assumed to already be configured in some web server you're running) or temporarily running its own little web server to do this itself. But it doesn't support trying both at once, and this leaves you with a problem if you want to deploy a single standard Certbot configuration to all of your machines, some of which run a web server and some of which don't. And on machines that do run a web server, it is not necessarily running when you get the initial TLS certificates, because at least some web servers refuse to start at all if you have HTTPS configured and the TLS certificates are missing (because, you know, you haven't gotten them yet).

Acmetool, my favorite Let's Encrypt client, supports exactly this dual-mode operation and it is marvelously convenient. You can run one acmetool command no matter how the system is configured, and it works. If acmetool can bind to port 80, it runs its own web server; if it can't, it assumes that your webroot setting is good. But, unfortunately, we need a new Let's Encrypt client.

For Certbot, I can imagine a complicated scheme of additional software and pre-request and post-request hooks to make this work; you could start a little static file only web server if there wasn't already something on port 80, then stop it afterward. But that requires additional software and is annoyingly complicated (and I can imagine failure modes). For extra annoyance, it appears that Certbot does not have convenient commands to change the authentication mode configured for any particular certificate (which will be used when certbot auto-renews it, unless you hard-code some method in your cron job). Perhaps I am missing something in the Certbot documentation.
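
To make 'annoyingly complicated' concrete, here is a rough sketch of the kind of pre-hook involved, using Python's built-in static file server. The webroot and pidfile paths are hypothetical, it has to run as root to bind port 80, and a real version also needs a matching post-hook and more error handling:

  #!/usr/bin/env python3
  # Rough sketch of such a --pre-hook: if nothing is listening on port 80,
  # start a throwaway static file server on the webroot so the HTTP challenge
  # can be answered; otherwise assume the real web server serves the webroot.
  # The webroot and pidfile are hypothetical; a matching --post-hook has to
  # kill the server again.
  import socket
  import subprocess
  import sys

  WEBROOT = "/var/www/letsencrypt"
  PIDFILE = "/run/certbot-tmp-httpd.pid"

  def port_80_in_use():
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          try:
              s.bind(("", 80))
              return False
          except OSError:
              return True

  if port_80_in_use():
      sys.exit(0)     # some web server is already there; rely on its webroot config

  proc = subprocess.Popen(
      [sys.executable, "-m", "http.server", "80", "--directory", WEBROOT],
      stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, start_new_session=True)
  with open(PIDFILE, "w") as f:
      f.write(str(proc.pid))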

(This is such an obvious and convenient feature that I'm quite surprised that Certbot, the gigantic featureful LE client that it is, doesn't support it already.)

sysadmin/LetsEncryptEasyDeployWant written at 01:13:03; Add Comment

2019-06-17

Sometimes, the problem is in a system's BIOS

We have quite a number of donated Dell C6220 blade servers, each of which is a dual socket machine with Xeon E5-2680s. Each E5-2680 is an 8-core CPU with HyperThreads, so before we turned SMT off the machines reported as having 32 CPUs, or 16 if you either turned SMT off or had to disable one socket (and once you have to do both, you're down to 8 CPUs). These days, most of these machines have been put in a SLURM-based scheduling system that provides people with exclusive access to compute servers.

Once upon a time recently, we noticed that the central SLURM scheduler was reporting that one of these machines had two CPUs, not (then) 32. When we investigated, this turned out not to be some glitch in the SLURM daemons or a configuration mistake, but actually what the Linux kernel was seeing. Specifically, as far as the kernel could see, the system was a dual socket system with each socket having exactly one CPU (still Xeon E5-2680s, though). Although we don't know exactly how it happened, this was ultimately due to BIOS settings; when my co-worker went into the BIOS to check things, he found that the BIOS was set to turn off both SMT and all extra cores on each socket. Turning on the relevant BIOS options restored the system to its full expected 32-CPU count.

(We don't know how this happened, but based on information from our Prometheus metrics system it started immediately after our power failure; we just didn't notice for about a month and a half. Apparently the BIOS default is to enable everything, so this is not as simple as a reversion to defaults.)
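
A quick way to see what topology the kernel currently believes in is to count things in /proc/cpuinfo; lscpu reports the same information more nicely. A minimal sketch:

  #!/usr/bin/env python3
  # Sketch: summarize the CPU topology that Linux sees, from /proc/cpuinfo.
  # 'lscpu' reports the same information in a nicer form.
  from collections import defaultdict

  with open("/proc/cpuinfo") as f:
      blocks = f.read().strip().split("\n\n")

  sockets = defaultdict(set)   # physical (socket) id -> set of core ids
  logical = 0
  for block in blocks:
      fields = {k.strip(): v.strip()
                for k, v in (line.split(":", 1) for line in block.splitlines() if ":" in line)}
      if "processor" not in fields:
          continue
      logical += 1
      sockets[fields.get("physical id", "0")].add(fields.get("core id", fields["processor"]))

  cores = sum(len(c) for c in sockets.values())
  print(f"{len(sockets)} socket(s), {cores} core(s), {logical} logical CPU(s)")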

If nothing else, this is a useful reminder to me that BIOSes can do weird things and can be set in weird ways. If nothing else makes sense, well, it might be in the BIOS. It's worth checking, at least.

(We already knew this about Dell BIOSes, of course, because our Dell R210s and R310s came set with the BIOS disabling all but the first drive. When you normally use mirrored system disks, this is first mysterious and then rather irritating.)

sysadmin/BIOSCoresShutdown written at 23:16:37; Add Comment

My Mastodon remark about tiling window managers

Over on Mastodon, I said:

Two reasons that I'm unlikely to like tiling window managers are that I like empty space (and lack of clutter) on my desktop and I don't like too-large windows. Filling all the space with windows of some size is thus very much not what I want, and I definitely have preferred sizes and shapes for my common windows.

On the one hand, I've already written an entry on my views on tiling window managers. On the other hand, I don't think I've ever managed to state them so succinctly, and I find myself not wanting to just leave that lost in the depths of Mastodon.

(Looking back at that entry caused me to re-read its comments and realize that they may be where I found out about Cinnamon's keybindings for tiling windows, which would answer a parenthetical mystery.)

links/MeOnTilingWMs written at 19:09:46; Add Comment
