Wandering Thoughts

2023-02-06

Rsync'ing (only) some of the top level pieces of a directory

Suppose, not hypothetically, that you have a top level directory which contains some number of subdirectories, and you want to selectively create and maintain a copy of only part of this top level directory. However, what you want to copy over changes over time, and you want unwanted things to disappear on the destination (because otherwise they'll stick around using up space that you need for things you care about). Some of the now-unwanted things will still exist on the source but you don't want them on the copy any more; others will disappear entirely on the source and need to disappear on the destination too.

This sounds like a tricky challenge with rsync but it turns out that there is a relatively straightforward way to do it. Let's say that you want to decide what to copy based (only) on the modification time of the top level subdirectories; you want a copy of all recently modified subdirectories that still exist on the source. Then what you want is this:

cd /data/prometheus/metrics2
find * -maxdepth 0 -mtime -365 -print |
 sed 's;^;/;' |
  rsync -a --delete --delete-excluded \
        --include-from - --exclude '/*' \
        . backupserv:/data/prometheus/metrics2/

Here, the 'find' prints everything in the top level directory that's been modified within the last year. The 'sed' takes that list of names and sticks a '/' on the front, turning names like 'wal' into '/wal', because to rsync this definitively anchors them to the root of the directory tree being (recursively) transferred (per rsync's Pattern Matching Rules and Anchoring Include/Exclude Patterns). Finally, the rsync command says to delete now-gone things in directories we transfer, delete things that are excluded on the source but present on the destination, include what to copy from standard input (ie, our 'sed'), and then exclude everything that isn't specifically included.

(All of this is easier than I expected when I wrote my recent entry on discovering this problem; I thought I might have to either construct elaborate command line arguments or write some temporary files. That --include-from will read from standard input is very helpful here.)

If you don't think to check the rsync manual page, especially its section on Filter Rules, you can have a little rsync accident because you absently think that rsync is 'last match wins' instead of 'first match wins' and put the --exclude before the --include-from. This causes everything to be excluded, and rsync will dutifully delete the entire multi-terabyte copy you made in your earlier testing, because that's what you told it to do when you used --delete-excluded.
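
For illustration, the mistaken ordering looks something like this (it's the same command as before, with only the order of the filter arguments swapped):

find * -maxdepth 0 -mtime -365 -print |
 sed 's;^;/;' |
  rsync -a --delete --delete-excluded \
        --exclude '/*' --include-from - \
        . backupserv:/data/prometheus/metrics2/

Since '/*' is now the first rule rsync checks, every top level name matches it and gets excluded before the includes from standard input are ever consulted, and --delete-excluded then dutifully removes everything on the destination.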

(In general I should have carefully read all of the rsync manual page's various sections on pattern matching and filtering. It probably would have saved me time, and it would definitely have left me better informed about how rsync actually behaves.)

sysadmin/RsyncRecentDirectoryContents written at 23:08:38; Add Comment

2023-02-05

Some things on Prometheus's new feature to keep alerts firing for a while

In the past I've written about maybe avoiding flapping Prometheus alerts, which is a topic of interest to us for obvious reasons. One of the features in Prometheus 2.42.0 is a new 'keep_firing_for' setting for alert rules (documented in Alerting rules, see also the pull request). As described in the documentation, it specifies 'how long an alert will continue firing after the condition that triggered it has cleared' and defaults to being off (0 seconds).

The obvious use of 'keep_firing_for' is to avoid having your alerts flap too much. If you set it to some non-zero value, say a minute, then if the alert condition temporarily goes away only to come back within a minute, you won't potentially wind up notifying people that the alert went away and then notifying them again that it came back. I say 'potentially' because when you can get notified about an alert going away is normally quantized by your Alertmanager group_interval setting. This simple alert rule setting can replace more complex methods of avoiding flapping alerts, and so there are various people who will likely use it.
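
As a concrete illustration, a rule using it might look something like this sketch (the group name, alert name, expression, and durations here are all invented for the example):

groups:
  - name: example
    rules:
      - alert: HostDown
        expr: up == 0
        for: 5m
        # keep the alert active for a minute after 'up == 0' stops being true
        keep_firing_for: 1m

With this, the alert only actually stops firing once the condition has stayed clear for a full minute, so a brief flap doesn't produce a 'resolved' notification followed shortly by a brand new alert.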

When 2.42.0 came out recently with this feature, I started thinking about whether we would use it. My reluctant conclusion is that we probably won't in most places, because it doesn't do quite what we want and it has some side effects that we care about (although these side effects are the same as most of the other ways of avoiding flapping alerts). The big side effect is that this doesn't delay or suppress notifications about the alert ending, it delays the alert itself ending. The delay in notification is a downstream effect of the alert itself remaining active. If you care about being able to visualize the exact time ranges of alerts in (eg) Grafana, then artificially keeping alerts firing may not be entirely appealing.

(This is especially relevant if you keep your metrics data for a long time, as we do. Our alert rules evolve over time, so without a reliable ALERTS metric we might have to go figure out the historical alert rule to recover the alert end time for a long-past alert.)

This isn't the fault of 'keep_firing_for', which is doing exactly what it says it does and what people have asked for. Instead it's because we care (potentially) more about delaying and aggregating alert notifications than we do about changing the timing of the actual alerts. What I actually want is something rather more complicated than Alertmanager supports, and is for another entry.

sysadmin/PrometheusOnExtendingAlerts written at 22:55:15; Add Comment

2023-02-04

The practical appeal of a mesh-capable VPN solution

The traditional way to do a VPN is that your VPN endpoint ('server') is the single point of entry for all traffic from VPN clients. When a VPN client talks to anything on your secured networks, it goes through the endpoint. In what I'm calling a mesh-capable VPN, you can have multiple VPN endpoints, each of them providing access to a different network area or service. Because it's one VPN, you still have a single unified client identity and authentication and a single on/off button for the VPN connection on clients.

(WireGuard is one of the technologies that can be used to implement a mesh-capable VPN. WireGuard can be used to build a full peer to peer mesh, not just a VPN-shaped one.)
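
As a sketch of what this can look like in practice with WireGuard's wg(8), a client might have a single WireGuard interface with several peers, each peer routing only the network or machine it serves. The names, keys, and IP ranges here are all invented:

# a general purpose endpoint for most internal networks
wg set wg0 peer <gateway-public-key> endpoint vpn.example.org:51820 allowed-ips 10.0.0.0/16
# a critical service with its own on-machine WireGuard endpoint
wg set wg0 peer <service-public-key> endpoint imap.example.org:51820 allowed-ips 172.16.5.10/32

Traffic to each destination goes directly to the peer responsible for it, but the client still sees a single VPN interface that it brings up and down as a unit.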

A standard, non-mesh VPN is probably going to be simpler to set up and it gives you a single point of control and monitoring over all network traffic from VPN clients. Despite that, I think that mesh-capable VPNs have some real points of appeal. The big one is that you don't have to move all of your VPN traffic through a single endpoint. Instead you can distribute the load of the traffic across multiple endpoints, going right down to individual servers for particular services. As an additional benefit, this reduces the blast radius of a given VPN endpoint failing, especially if you give critical services their own on-machine VPN endpoints so that if the service is up, people can reach it over the VPN.

This is probably not a big concern if your VPN isn't heavily or widely used. It becomes much more important if you expect many people to access most of your services and resources over your VPN, for example because you've decided to make your VPN your primary point of Multi-Factor Authentication (so that people can MFA once to the VPN and then be set for access to arbitrary internal services). If you're expecting hundreds of people to send significant traffic through your VPN to heavily used services, you're normally looking at a pretty beefy VPN server setup. If you can use a mesh-capable VPN to offload that traffic to multiple endpoints, you can reduce your server needs. If you can push important, heavy traffic down to the individual servers involved, this can take your nominal 'VPN endpoint' mostly out of the picture except for any initial authentication it needs to be involved in.

Another feature of a mesh-capable VPN is that the VPN endpoints don't even have to all be on the same network. For example, if you split endpoints between internal and external traffic, you could put the external traffic VPN endpoint in a place that's outside of your regular network perimeter (and so isn't contending for perimeter bandwidth and firewall capacity). In some environments you wouldn't care very much about external traffic and might not even support it, but in our environment we need to let people use our IPs for their outgoing traffic if they want to.

A mesh-capable VPN can also be used for additional tricks if you can restrict access to parts of the mesh based on people's identities. This can be useful to screen access to services that have their own authentication, or to add authentication and access restrictions to services that are otherwise open (or at least have uncomfortably low bars on their 'authentication', and perhaps you don't trust their security). If you can easily extract identification information from the VPN's mesh, you could even delegate authentication to the VPN itself rather than force people to re-authenticate to services.

(In theory this can be done with a normal VPN endpoint too, but in practice there are various issues, including a trust issue where everyone else has to count on the VPN endpoint always assigning the right IP to the right person and doing the access restrictions right. In practice people will likely be more comfortable with a bunch of separate little VPNs; there's the general use one, the staff one, the one a few people can use to get access to the subnet of laboratory equipment that has never really heard of 'access control', and so on.)

sysadmin/VPNMeshAppeal written at 22:15:28; Add Comment

2023-02-03

In a university, people want to use our IPs even for external traffic

Suppose that your organization has a VPN server that people use to access internal resources that you don't expose to the Internet. One of the traditional decisions you had to make when you were setting up such a VPN server was whether you would funnel all traffic over the VPN, no matter where it was to, or whether you'd funnel only internal traffic and let external traffic go over people's regular Internet connections. In many environments the answer is that the VPN server is only really for internal traffic; it's either discouraged or impossible to use it for external traffic.

Universities are not one of those places. In universities, quite often you'll find that people actively need to use your VPN server for all of their traffic, or otherwise things will break in subtle ways. One culprit is the world of academic publishing, or more exactly online electronic access to academic publications. These days, many of these online publications are provided to you directly by the publisher's website. This website decides if you are allowed to access things by seeing if your institution has purchased access, and it often figures out your institution by looking at your IP address. As a result, if a researcher is working from home but wants to read things, their traffic had better be coming from your IP address space.

(There are other access authentication schemes possible, but this one is easy for everyone to set up and understand, and it doesn't reveal very much to publishers. Universities rarely change their IP address space, and in the before times you could assume that most researchers were working from on-campus most of the time.)

In an ideal world, academic publishers (and other people restricting access to things to your institution) could tell you what IP addresses they would be using, so you could add them to your VPN configuration as a special exemption (ie, as part of the IP address space that should be sent through the VPN). In the real world, there are clouds, frontend services, and many other things that mean the answer is large, indeterminate, and possibly changing at arbitrary times, sometimes out of the website operator's direct control. Also, the visible web site that you see may be composited (in the browser) from multiple sources, with some sub-resources quietly hosted in some cloud. For sensible reasons, the website engineering team does not want to have to tell the customer relations team every time they want to change the setup and then possibly wait for a while as customers get onboard (or don't).

Our VPNs default to sending all of people's traffic through us. At one point we considered narrowing this down (for reasons); feedback from people around here soon educated us that this was not feasible, at least not while keeping our VPN really useful to them. When you're a university, people want your IPs, and for good reasons.

tech/UniversityPeopleWantOurIPs written at 23:21:38; Add Comment

2023-02-02

A gotcha when making partial copies of Prometheus's database with rsync

A while back I wrote about how you can sensibly move or copy Prometheus's time series database (TSDB) with rsync. This is how we moved our TSDB, with metrics data back to late 2018, from a mirrored pair of 4 TB HDDs on one server to a mirrored pair of 20 TB HDDs on another one. In that entry I also mentioned that we were hoping to use this technique to keep a partial backup of our TSDB, one that covered the last year or two. It turns out that there is a little gotcha in doing this that makes it trickier than it looks.

The ideal way to do such a partial backup would be if rsync could exclude or include files based on their timestamp. Unfortunately, as far as I know it can't do that. Instead the simple brute force way is to use find to generate a list of what you want to copy and feed that to rsync:

cd /data/prometheus/metrics2
rsync -a \
   $(find * -maxdepth 0 -mtime -365 -type d -print) \
   backupserv:/data/prometheus/metrics2/

As covered (more or less) in the Prometheus documentation on local storage, the block directories in your TSDB are frozen after a final 31-day compaction, and conveniently their final modification time is when that last 31-day compaction happened. The find with '-maxdepth 0' looks only at its command line arguments (the top level names) and filters them down to things a year or less old; this catches the frozen block directories for the past year (and a bit), plus the chunks_head directory of the live block and the wal directory of the write-ahead log.

However, it also captures other block directories. Blocks initially cover two hours, but are then compacted down repeatedly until they eventually reach their final 31-day compaction. During this compaction process you'll have a series of intermediate blocks, each of which is a (sub)directory in your TSDB top level directory. Most of these intermediate block directories will be removed over time. Well, they'll be removed over time in your live TSDB; if you replicate your TSDB over to your backupserv the way I have, there's nothing that's going to remove them on your backup. These directories for intermediate blocks will continue to be there in your backup, taking up space and containing duplicate data (which may cause Prometheus to be unhappy with your overall TSDB if you ever have to use this backup copy).

This can also affect you if you repeatedly rsync your entire TSDB without using '--delete'. Fortunately I believe I used 'rsync -a --delete' when moving our TSDB over.

The somewhat simple and relatively obviously correct approach to dealing with this is to send over a list of the directories that should exist to the backup server, and have something on the backup server remove any directories not listed. You'd want to make very sure that you've sent and received the entire list, so that you don't accidentally remove actually desired bits of your backups.
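
A minimal sketch of that approach (using the same paths and 'backupserv' as the earlier example, and leaving out the sanity checks you'd really want before anything gets removed) might be:

cd /data/prometheus/metrics2
find * -maxdepth 0 -mtime -365 -type d -print | sort >/tmp/keep-list
scp /tmp/keep-list backupserv:/tmp/keep-list

# then on backupserv, after looking over the list:
cd /data/prometheus/metrics2
ls | sort | comm -23 - /tmp/keep-list | xargs -r rm -rf

The 'comm -23' prints only the names that are present on the backup server but missing from the list, which are exactly the directories to be removed.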

The more tricky approach would be to have rsync do the deletion as part of the transfer. Instead of selectively transferring named directories on the command line, you'd build an rsync filter file that only included directories that were the right age to be transferred, and then use that filter as you transferred the entire TSDB directory with rsync's --delete-excluded argument. This would automatically clean up both 31-day block directories that were now too old and young block directories that had been compacted away.

(You'd still determine the directories to be included with find, but you'd have to do more processing with the result. You could also look for directories that were too old, and set up an rsync filter that excluded them.)

I'm not sure what approach we'll use. I may want to prototype both and see which feels more dangerous. The non-rsync approach feels safer, because I can at least have the remote end audit what it's going to delete for things that are clearly wrong, like deleting a directory that's old enough that it should be a frozen, permanent one.

(Possibly this makes rsync the wrong replication tool for what I'm trying to do here. I don't have much exposure to the alternatives, though; rsync is so dominant in this niche.)

sysadmin/PrometheusMovingTSDBWithRsyncII written at 22:56:47; Add Comment

2023-02-01

C was not created as an abstract machine (of course)

Today on the Fediverse I saw a post by @nytpu:

Reminder that the C spec specifies an abstract virtual machine; it's just that it's not an interpreted VM *in typical implementations* (i.e. not all, I know there was a JIT-ing C compiler at some point), and C was lucky enough to have contemporary CPUs and executable/library formats and operating systems(…) designed with its VM in mind

(There have also been actual C interpreters, some of which had strict adherence to the abstract semantics; see, for example, the one described in the Usenix summer 1988 proceedings, which is available online.)

This is simultaneously true and false. It's absolutely true that the semantics of formal standard C are defined in terms of an abstract (virtual) machine, instead of any physical machine. The determined refusal of the specification to tie this abstract machine to concrete CPUs is the source of a significant amount of frustration among people who would like, for example, for there to be some semantics attached to what happens when you dereference an invalid pointer. They note that actual CPUs running C code all have defined semantics, so why can't C? But, well, as is frequently said, C Is Not a Low-level Language (via) and the semantics of C don't correspond exactly to CPU semantics. So I agree with nytpu's overall sentiments, as I understand them.

However, it's absolutely false that C was merely 'lucky' that contemporary CPUs, OSes, and so on were designed with its abstract model in mind. Because the truth is the concrete C implementations came first and the standard came afterward (and I expect nytpu knows this and was making a point in their post). Although the ANSI C standardization effort did invent some things, for the most part C was what I've called a documentation standard, where people wrote down what was already happening. C was shaped by the CPUs it started on (and then somewhat shaped again by the ones it was eagerly ported to), Unix was shaped by C, and by the time that the C standard was producing drafts in the mid to late 1980s, C was shaping CPUs through the movement for performance-focused RISC CPUs (which wanted to optimize performance in significant part for Unix programs written in C, although they also cared about Fortran and so on).

(It's also not the case that C only succeeded in environments that were designed for it. In fact C succeeded in at least one OS environment that was relatively hostile to it and that wanted to be used with an entirely different language.)

Although I'm not absolutely sure, I suspect that the C standard defining it in abstract terms was in part either enabled or forced by the wide variety of environments that C already ran in by the late 1980s. Defining abstract semantics avoided the awkward issue of blessing any particular set of concrete ones, which at the time would have advantaged some people while disadvantaging others. This need for compromise between highly disparate (C) environments is what brought us charming things like trigraphs and a decision not to require two's-complement integer semantics (it's been proposed to change this, and trigraphs are gone in C23, also).

Dating from when ANSI C was defined and C compilers became increasingly aggressive about optimizing around 'undefined behavior' (even if this created security holes), you could say that modern software and probably CPUs have been shaped by the abstract C machine. Obviously, software increasingly has to avoid doing things that will blow your foot off in the model of the C abstract machine, because your C compiler will probably arrange to blow your foot off in practice on your concrete CPU. Meanwhile, things that aren't allowed by the abstract machine are probably not generated very much by actual C compilers, and things that aren't generated by C compilers don't get as much love from CPU architects as things that do.

(This neat picture is complicated by the awkward fact that many CPUs probably run significantly more C++ code than true C code, since so many significant programs are written in the former instead of the latter.)

It's my view that recognizing that C comes from running on concrete CPUs and was strongly shaped by concrete environments (OS, executable and library formats, etc) matters for understanding the group of C users who are unhappy with aggressively optimizing C compilers that follow the letter of the C standard and its abstract machine. Those origins of C were there first, and it's not irrational for people used to them to feel upset when the C abstract machine creates a security vulnerability in their previously working software because the compiler is very clever. The C abstract machine is not a carefully invented thing that people then built implementations of, an end in and of itself; it started out as a neutral explanation and justification of how actual existing C things behaved, a means to an end.

programming/CAsAbstractMachine written at 23:18:30; Add Comment

2023-01-31

I've had bad luck with transparent hugepages on my Linux machines

Normally, pages of virtual memory are a relatively small size, such as 4 Kbytes. Hugepages (also) are a CPU and Linux kernel feature which allows programs to selectively have much larger pages, which generally improves their performance. Transparent hugepage support is an additional Linux kernel feature where programs can be more or less transparently set up with hugepages if it looks like this will be useful for them. This sounds good but generally I haven't had the best of luck with them:

It appears to have been '0' days since Linux kernel (transparent) hugepages have dragged one of my systems into the mud for mysterious reasons. Is my memory too fragmented? Who knows, all I can really do is turn hugepages off.

(Yes they have some performance benefit when they work, but they're having a major performance issue now.)

This time around, the symptom was that Go's self-tests were timing out while I was trying to build it (or in some runs, the build itself would stall). While this was going on, top said that the 'khugepaged' kernel daemon process was constantly running (on a single CPU).

(I'm fairly sure I've seen this sort of 'khugepaged at 100% and things stalling' behavior before, partly because when I saw top I immediately assumed THP were the problem, but I can't remember details.)

One of the issues that can cause problems with hugepages is that to have huge pages, you need huge areas of contiguous RAM. These aren't always available, and not having them is one of the reasons for kernel page allocation failures. To get these areas of contiguous RAM, the modern Linux kernel uses (potentially) proactive compaction, which is normally visible as the 'kcompactd0' kernel daemon. Once you have aligned contiguous RAM that's suitable for use as huge pages, the kernel needs to turn runs of ordinary sized pages into hugepages. This is the job of khugepaged; to quote:

Unless THP is completely disabled, there is [a] khugepaged daemon that scans memory and collapses sequences of basic pages into huge pages.

In the normal default kernel settings, this only happens for processes that use the madvise(2) system call to tell the kernel that a mmap()'d area of theirs is suitable for this. Go can do this under some circumstances, although I'm not sure what they are exactly (the direct code that does it is deep inside the Go runtime).

If you look over the Internet, there are plenty of reports of khugepaged using all of a CPU, often with responsiveness problems to go along with it. Sometimes this stops if people quit and restart some application; at other times, people resort to disabling transparent hugepages or rebooting their systems. No one seems to have identified a cause, or figured out what's going on to cause the khugepaged CPU usage or system slowness (presumably the two are related, perhaps through lock contention or memory thrashing).

Disabling THP is done through sysfs:

echo never >/sys/kernel/mm/transparent_hugepage/enabled

The next time around I may try to limit THP's 'defragmentation' efforts:

echo never >/sys/kernel/mm/transparent_hugepage/defrag

(These days the normal setting for both of these is 'madvise'.)

If I'm understanding the documentation correctly, this will only use a hugepage if one is available at the time that the program calls madvise(); it won't try to get one later and swap it in.

(Looking at the documentation makes me wonder if Go and khugepaged were both fighting back and forth trying to obtain hugepages when Go made a madvise() call to enable hugepages for some regions.)
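
For what it's worth, you can see the current settings and get a rough idea of how much THP and khugepaged activity there's been from sysfs and the kernel's counters (the active setting is the one shown in brackets):

cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
grep AnonHugePages /proc/meminfo
grep thp_ /proc/vmstat
cat /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

(I believe these are all present on modern kernels, although I haven't checked how far back they go.)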

I believe I've only really noticed this behavior on my desktops, which are unusual in that I use ZFS on Linux on them. ZFS has its own memory handling (the 'ARC'), and historically has had some odd and uncomfortable interaction with the normal Linux kernel memory system. Still, it doesn't seem to be just me who has khugepaged problems.

(I don't think we've seen these issues on our ZFS fileservers, but then we don't run anything else on the fileservers. They sit there handling NFS in the kernel and that's about it. Well, there is one exception these days in our IMAP server, but I'm not sure it runs anything that uses madvise() to try to use hugepages.)

linux/TransparentHugepagesBadLuck written at 23:04:27; Add Comment

2023-01-30

One reason I still prefer BIOS MBR booting over UEFI

Over on the Fediverse I said something I want to elaborate on:

One of the reasons I still prefer BIOS MBR booting over UEFI is that UEFI firmware is almost always clever and the failure mode of clever is 💥. I dislike surprises and explosions in my boot process.

Old fashioned BIOS MBR booting is very simplistic but it's also very predictable; pretty much the only variable in the process is which disk the BIOS will pick as your boot drive. Once that drive is chosen, you'll know exactly what will get booted and how. The MBR boot block will load the rest of your bootloader (possibly in a multi-step process) and then your bootloader will load and boot your Unix. If you have your bootloader completely installed and configured, this process is extremely reliable.

(Loading and booting your Unix is possibly less so, but that's more amenable to tweaking and also understandable in its own way.)

In theory UEFI firmware is supposed to be predictable too. But even in theory it has more moving parts, with various firmware variables that control and thus change the process (see efibootmgr and UEFI boot entries). If something changes these variables in a way you don't expect, you're getting a surprise, and as a corollary you need to inspect the state of these variables (and find what they refer to) in order to understand what your system will do. In practice, UEFI firmware in the field at least used to do weird and unpredictable things, such as search around on plausible EFI System Partitions, find anything that looked bootable, 'helpfully' set up UEFI boot entries for them, and then boot one of them. This is quite creative and also quite unpredictable. What will this sort of UEFI firmware do if part of the EFI System Partition gets corrupted? Your guess is as good as mine, and I don't like guessing about the boot process.
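
On Linux the usual tool for inspecting and changing this state is efibootmgr. Something like the following lists the current boot entries (and what they point to) along with the boot order, and then changes the order; the entry numbers here are made up:

efibootmgr -v
efibootmgr -o 0001,0003,0000

Of course this only shows you the variables themselves; it doesn't tell you what a creative firmware will actually do with them.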

(There's a wide variety of corruptions and surprises you can have with UEFI. For example, are you sure which disk your UEFI firmware is loading your bootloader from, if you have more than one?)

In theory UEFI could simplify your life by letting you directly boot Unix kernels. In practice you want a bootloader even on UEFI, or at least historically you did and I doubt that the core issues have changed recently since Windows also uses a bootloader (which means that there's no pressure on UEFI firmware vendors to make things like frequent updates to EFI variables work).

It's possible that UEFI firmware and the tools to interact with it will evolve to the point where it's solid, reliable, predictable, and easy to deal with. But I don't think we're there yet, not even on servers. And it's hard to see how UEFI can ever get as straightforward as BIOS MBR booting, because some of the complexity is baked into the design of UEFI (such as UEFI boot entries and using an EFI System Partition with a real filesystem that can get corrupted).

sysadmin/BIOSMBRBootingOverUEFI written at 22:40:48; Add Comment

2023-01-29

The CPU architectural question of what is a (reserved) NOP

I recently wrote about an instruction oddity in the PowerPC 64-bit architecture, where a number of or instructions with no effects were reused to signal hardware thread priority to the CPU. This came up when Go used one of those instructions for its own purposes and accidentally lowered the priority of the hardware thread. One of the reactions I've seen has been a suggestion that people should consider all unofficial NOPs (ie, NOPs other than the officially documented ones) to be reserved by the architecture. However, this raises a practical philosophical question, namely what's considered a NOP.

In the old days, CPU architectures might define an explicit NOP instruction that was specially recognized by the CPU, such as the 6502's NOP. Modern CPUs generally don't have a specific NOP instruction in this way; instead, the architecture has a significant number of instructions that have no effects (for various reasons, including the regularity of instruction sets) and one or a few of those instructions is blessed as the official NOP and may be specially treated by CPUs. The PowerPC 64-bit official NOP is 'or r1, r1, 0', for example (which theoretically OR's register r1 with 0 and puts the result back into r1).

Update: I made a mistake here; the official NOP uses register r0, not r1, so 'or r0, r0, 0', sometimes written 'ori 0, 0, 0'.

So if you say that all unofficial NOPs are reserved and should be avoided, you have to define what exactly a 'NOP' is in your architecture. One aggressive definition you could adopt is that any instruction that always has no effects is a NOP; this would make quite a lot of instructions NOPs and thus unofficial NOPs. This gives the architecture maximum freedom for the future but also means that all code generation for your architecture needs to carefully avoid accidentally generating an instruction with no effects, even if it naturally falls out by accident through the structure of that program's code generation (which could be a simple JIT engine).

Alternately, you could say that (only) all variants of your standard NOP are reserved; for PowerPC 64-bit, this could be all or instructions that match the pattern of either 'or rX, rX, rX' or 'or rX, rX, 0' (let's assume the immediate is always the third argument). This leaves the future CPU designer with fewer no-effect operations they can use to signal things to the CPU, but makes the life of code generators simpler because there are fewer instructions they have to screen out as special exceptions. If you wanted to you could include some other related types of instructions as well, for example to say that 'xor rX, rX, 0' is also a reserved unofficial NOP.

A CPU architecture can pick whichever answer it wants to here, but I hope I've convinced my readers that there's more than one answer here (and that there are tradeoffs).

PS: Another way to put this is that when an architecture makes some number of otherwise valid instructions into 'unofficial NOPs' that you must avoid, it's reducing the regularity of the architecture in practice. We know that the less regular the architecture is, the more annoying it can be to generate code for.

tech/WhatIsAModernNOP written at 22:23:44; Add Comment
