Wandering Thoughts archives

2015-08-25

One view on practical blockers for IPv6 adoption

I recently wound up reading Russ White's Engineering Lessons, IPv6 Edition (via), which is yet another meditation by a network engineer about why people haven't exactly been adopting IPv6 at a rapid pace. Near the end, I ran across the following:

For those who weren't in the industry those many years ago, there were several drivers behind IPv6 beyond just the need for more address space. [...]

Part of the reason it's taken so long to deploy IPv6, I think, is because it's not just about expanding the address space. IPv6, for various reasons, has tried to address every potential failing ever found in IPv4.

As a sysadmin, my reaction to this is roughly 'oh god yes'. One of the major pain points in adding IPv6 (never mind moving to it) is that so much has to be changed and modified and (re)learned. IPv6 is not just another network address for our servers (and another set of routes); it comes with a whole new collection of services and operational issues and new ways of operating our networks. There are a whole host of uncertainties, from address assignment (both static and dynamic) on upwards. Given that right now IPv6 is merely nice to have, you can guess what this does to IPv6's priority around here.

Many of these new things exist primarily because the IPv6 people decided to solve all of their problems with IPv4 at once. I think there's an argument that this was always likely to be a mistake, but beyond that it's certainly made everyone's life more complicated. I don't know for sure that IPv6 adoption would be further along if IPv6 was mostly some enlarged address fields, but I rather suspect that it would be. Certainly I would be happier to be experimenting with it if that was the case.

What I can boil this down to is the unsurprising news that large scale, large scope changes are hard. They require a lot of work and time, they are difficult for many people to test, and they are unusually risky if something goes wrong. And in a world of fragile complexity, their complexity and complex interactions with your existing environment are not exactly confidence boosters. There are a lot of dark and surprising corners where nasty things may be waiting for you. Why go there until you absolutely have to?

(All of this applies to existing IPv4 environments. If you're building something up from scratch, well, going dual stack from the start strikes me as a reasonably good idea even if you're probably going to wind up moving slower than you might otherwise. But green field network development is not the environment I live in; it's rather the reverse.)

sysadmin/IPv6BigChangeProblem written at 00:50:07

2015-08-24

PS/2 to USB converters are complex things with interesting faults

My favorite keyboard and mice are PS/2 ones, and of course fewer and fewer PCs come with PS/2 ports (especially two of them). The obvious solution is PS/2 to USB converters, so I recently got one at work; half as an experiment, half as stockpiling against future needs. Unfortunately it turned out to have a flaw, but it's an interesting flaw.

The flaw was that if I held down CapsLock (which I remap to Control) and then hit some letter keys, the converter injected a nonexistent CapsLock key-up event into the event stream. The effect was that I got a sequence like '^Cccc'. This didn't happen with the real Control keys on my keyboard, only with CapsLock, and it doesn't happen with CapsLock when the keyboard is directly connected to my machine as a PS/2 keyboard. Unfortunately this is behavior that I reflexively count on working, so this PS/2 to USB converter is unsuitable for me.

(Someone else tested the same brand of converter on another PS/2 keyboard and saw the same thing, so this is not specific to my particular make of keyboards. For the curious, this converter was a ByteCC BT-2000.)

What this really says to me is two things. The first is that PS/2 to USB converters are actually complex items, no matter how small and innocuous they seem. Going from PS/2 to USB requires protocol conversion and when you do protocol conversion you can have bugs and issues. Clearly PS/2 to USB converters are not generic items; I'm probably going to have to search for one that doesn't just 'work' according to most reports but actually behaves correctly, and such a thing may not be easy to find.

(I suspect that such converters are actually little CPUs with firmware, rather than completely fixed ASICs. Little CPUs are everywhere these days.)

The second is the depressing idea that there are probably PS/2 keyboards out there that actively require this handling of CapsLock. Since it doesn't happen with the Control keys, it's not a generic bug with handling held modifier keys; instead it's specific behavior for CapsLock. People generally don't put in special oddball behavior for something unless they think they need to, and usually they've got reasons to believe this.

(For obvious reasons, if you have a PS/2 to USB converter that works and doesn't do this, I'd love to hear about it. I suspect that the ByteCC will not be the only one that behaves this way.)

tech/PS2ToUSBInterestingIssue written at 01:22:59

2015-08-23

I think you should get your TLS configuration advice from Mozilla

If you decide that you care about having good TLS support in, say, a web server and look around, there are a lot of places that will tell you all about what configuration you should have in order to be secure and widely available and so on. Old ones live on in their dusty now-inaccuracy (TLS configuration advice has a half life of six months at most) and new ones spring up every so often. Many of them contradict each other in whole or in part. The whole thing is one of the frustrations of good TLS in practice.

Given this, I've wound up with the strong opinion that you should be getting your TLS configuration advice from the Mozilla server side TLS configuration guide. It's certainly become my primary source of configuration guidelines and I've been happy with the results.

(Other worthwhile resources are the Mozilla web server config generator and the Qualys SSL Server Test. Note that I've seen some people disagree with the SSL server test's scoring of some things.)

The advantage of Mozilla's guide isn't just that it seems to be good advice. It has two important virtues beyond that, virtues that I feel make it trustworthy. First, it's actively maintained by people who know what they're doing. Second, it's such a visible and public resource that I think any bad advice it has is very likely to produce reactions from knowledgeable outsiders. Some random person writing an article with bad TLS advice is yawn worthy; there might be a little snark on Twitter but that's probably it. Mozilla getting it wrong? You're very likely to hear a lot of noise about that.

Other TLS configuration advice may be perfectly good, well maintained, and written by people who know what they're doing (although my experience leads me to believe that it often isn't). But as an outsider it's much harder to tell if this is the case and to spot if (and when) it stops being so, which makes using the advice potentially dangerous.

web/GetTLSConfigsFromMozilla written at 00:04:12

2015-08-21

What surprised me about the Python assignment puzzle

Yesterday I wrote about a Python assignment puzzle and how it worked, but I forgot to write about what was surprising about it for me. The original puzzle is:

(a, b) = a[b] = {}, 5

The head-scratching bit for me was the middle, including the whole question of 'how does this even work'. So the real surprise here for me is that in serial assignments, Python processes the assignments left to right.

The reason this was a big surprise is that my broad mental model of serial assignment comes from C. In C, assignment is an expression that yields the value assigned (ie the value of 'a = 2' is 2). So in C and languages like it, serial assignment is a series of assignment expressions that happen right to left; you start out with the actual expression producing a value, you do the rightmost assignment which yields the value again, and you ripple leftwards. So a serial assignment groups like this:

a = (b = (c = (d = <expression>)))

Python doesn't work this way, of course; assignment is not an expression and doesn't produce a value. But I was still thinking of serial assignment as proceeding right to left by natural default and was surprised to learn that Python has chosen to do it in the other order. There's nothing wrong with this and it's perfectly sensible; it's just a decision that was exactly opposite from what I had in my mind.
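
You can watch the left to right order happen with a bit of instrumentation. Here's a minimal sketch (LoudDict is purely my illustrative class, and this is Python 3 code):

class LoudDict(dict):
    # Announce every store done to us.
    def __setitem__(self, key, value):
        print("storing into", self.label)
        super().__setitem__(key, value)

d1 = LoudDict(); d1.label = "first"
d2 = LoudDict(); d2.label = "second"
d1["k"] = d2["k"] = 10

Running this prints 'storing into first' and then 'storing into second'; the leftmost assignment target really is stored to first.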

(Looking back, I assumed in this entry that Python's serial assignment order was right to left without bothering to look it up.)

How did my misapprehension linger for so long? Well, partly it's that I don't use serial assignment very much in Python; in fact, I don't think anyone does much of it and I have the vague impression that it's not considered good style. But it's also that it's quite rare for the assignment order to actually matter, so you may not discover a mistaken belief about it for a very long time. This puzzle is a deliberately perverse exercise where it very much does matter, as the leftmost assignment actively sets up the variables that the next assignment then uses.

python/AssignmentPuzzleSurprise written at 21:58:55

What's going on with a Python assignment puzzle

Via @chneukirchen, I ran across this tweet:

Armin just came up with this puzzle, how well do you know obscure Python details? What's a after this statement?:
(a, b) = a[b] = {}, 5

This is best run interactively for maximum head-scratching. I had to run it in an interpreter myself and then think for a while, because there are several interesting Python things going on here.

Let's start by removing the middle assignment. That gives us:

(a, b) = {}, 5

This is Python's multiple variable assignment ('x, y = 10, 20') written to make the sequence nature of the variable names explicit (which is why the Python tutorial calls this 'sequence unpacking'). Writing the list of variables as an explicit tuple (or list) is optional but is something even I've done sometimes, although I think writing it this way has fallen out of favour. Thus it's equivalent to:

t = ({}, 5)
(a, b) = t

The next trick is that (somewhat to my surprise) when you're assigning a tuple to several variables at once (as 'x = y = 10') and doing sequence unpacking for one of those assignments, Python doesn't require you to do sequence unpacking for every assignment. The following is valid:

(a, b) = x = t

Here a and b become the individual elements of the tuple t while x is the whole tuple. I suppose this is a useful trick to remember if you sometimes want both the tuple and its elements for different purposes.

The next trick happening is that Python explicitly handles repeated variable assignment (sometimes called 'chained assignment' or 'serial assignment') in left to right order. So first the leftmost set of assignments is handled, then the next leftmost, and so on. Here we only have two sets of assignments, so the entire statement is equivalent to the much more verbose form:

t = ({}, 5)
(a, b) = t
a[b] = t

(When you do this outside of a function, the first (leftmost) assignment also creates a and b as names, which means that the second (right) assignment then has them available to use and doesn't get a 'name is not defined' error.)

The final 'trick' is due to what variables mean in Python, which creates the recursion in a[b]'s value. The tuple t that winds up assigned to a[b] contains a reference to the dictionary that a becomes another reference to, which means that the tuple contains a dictionary that contains the tuple again and it's recursion all the way down.
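
You can see the recursion directly in an interactive session; the {...} is how CPython prints a container that winds up containing itself:

>>> (a, b) = a[b] = {}, 5
>>> a
{5: ({...}, 5)}
>>> a[b][0] is a
True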

(When you combine Python's name binding behavior with serial assignment like this, you can wind up with fun bugs.)

python/AssignmentPuzzleUnpacked written at 01:57:27

2015-08-20

Using abstract namespace Unix domain sockets and SO_PEERCRED in Python

Linux has a special version of Unix domain sockets where the socket address is not a socket file in the filesystem but instead in an abstract namespace. It's possible to use them from Python without particular problems, including checking permissions with SO_PEERCRED, but it's not completely obvious how.

(For general information on using Unix domain sockets from Python, see UnixDomainSockets.)

With a normal Unix domain socket, the address you give is the path to a socket file. Per the Linux unix(7) manpage, an abstract socket address is simply your abstract name with a 0 byte on the front. This is trivial in Python and works exactly as you'd hope:

import socket

sname = "some-unique-name"   # placeholder; any unique abstract name
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind("\0" + sname)   # the leading 0 byte selects the abstract namespace
s.listen(10)
# or s.connect("\0" + sname) to talk to a server
....

This works in both Python 2 and Python 3. Somewhat to my surprise, Python 3 converts the Unicode null 'byte' codepoint to a 0 byte without complaints. How Python 3 converts any non-ASCII in sname to bytes depends on your locale, as usual, which means that under some circumstances you may need to do explicit conversion to bytes and handle conversion errors. You can call .bind() or .connect() with a bytes address instead of a Unicode one.

Sockets in the abstract namespace have no permissions, unlike regular Unix domain sockets (which are protected by file and/or directory permissions). If you want to add a permissions system, you can obtain the UID, GID, and PID of the other end with SO_PEERCRED like so:

import struct
# 17 is SO_PEERCRED's value on mainstream Linux architectures; see below.
SO_PEERCRED = getattr(socket, "SO_PEERCRED", 17)
# The other end's credentials come back as three C ints: pid, uid, gid.
# (s here is a connected socket, eg one returned by accept().)
creds = s.getsockopt(socket.SOL_SOCKET, SO_PEERCRED, struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)

This comes from a 2011 Stackoverflow answer, more or less (I have added my own little modifications to it).

The situation with the definition for SO_PEERCRED turns out to be a little bit complicated. The Python 3 socket module has had a definition for it for some time (it looks like since 2011 or so). Most versions of Python 2.x don't have a SO_PEERCRED constant defined in the socket module; the exception is the Fedora version of Python, which apparently has had this patched in for a very long time now. In addition, the '17' here is only correct on mainstream Linux architectures; some oddball ones like MIPS have other values. You may have to check in Python 3 or compile a little C program to get the correct value. Yes, this is irritating and you can see why the Fedora people patched Python (and why it got added to Python 3).

As you might suspect, SO_PEERCRED can be used by either end of a Unix domain socket connection (and it works on any Unix domain socket, not just ones in the abstract namespace). It's merely most useful for a server to find out what the client is, since clients usually trust servers.
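
As an illustration, a server might use this for a crude permissions check along the following lines (this is a minimal sketch that continues the code above; the 'only talk to processes with my UID' policy is my own assumption):

import os

conn, _ = s.accept()
creds = conn.getsockopt(socket.SOL_SOCKET, SO_PEERCRED, struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)
if uid != os.getuid():
    conn.close()   # not us; refuse to talk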

(Trusting the server may or may not be wise when you're dealing with Unix domain sockets in the abstract namespace, since anyone can grab any name in it. For my purposes I don't really care; my use is a petty little hack on my own personal machine and it doesn't involve anything sensitive.)

python/AbstractUnixSocketsAndPeercred written at 01:19:05

2015-08-19

Linux's abstract namespace for Unix domain sockets

The address of ordinary Unix domain sockets for servers is the file name of a socket file that actually appears in the filesystem. This is pleasantly Unix-y on the surface but winds up requiring you to do a bunch of bureaucracy to manage these socket files, and the socket files by themselves don't actually do anything that would make it useful for them to be in the filesystem; you can't interact with them and the server behind them with normal Unix file tools, for example.

Linux offers you a second choice. Rather than dealing with socket files in the filesystem, you can use names in an abstract (socket) namespace. Each name must be unique, but the namespace is otherwise flat and unstructured, and you can call your server socket whatever you want. Conveniently and unlike socket files, abstract names vanish when the socket is closed (either by you or because your program exited).

Apart from being Linux-only, the abstract socket namespace suffers from two limitations: you have to find a way to get a unique name and it has no permissions. With regular socket files you can use regular Unix file and directory permissions to ensure that only you can talk to your server socket. With abstract socket names, anyone who knows or can find the name can connect to your server. If this matters you will have to do access control yourself.

(One approach is to use getsockopt() with SO_PEERCRED to get the UID and so on of the client connecting to you. SO_PEERCRED is Linux specific as far as I know, but then so is the abstract socket namespace.)

Lsof and other tools conventionally represent socket names in the abstract socket namespace by putting an @ in front of them. This is not actually how they're specified at the C API level, but it's a distinct marker and some higher level tools follow it for, eg, specifying socket names.

(The Go net package is one such piece of software.)

As far as picking unique names goes, one trick many programs seem to use is to use whatever filename they would be using if they didn't have the abstract socket namespace available. This gives you a convenient way of expressing, eg, per-user sockets; you can just give it a name based on the user's home directory. Other programs use a hierarchical namespace of their own; Ubuntu's upstart listens on the abstract socket name '/com/ubuntu/upstart', for example.
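
In Python, for example, the filename trick might look like the following sketch (the '~/.myhack/control' path is made up):

import os
import socket

# Name the abstract socket after the socket file we'd otherwise use.
# Nothing touches the filesystem, so the directory need not exist.
sname = "\0" + os.path.expanduser("~/.myhack/control")
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind(sname)
s.listen(10)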

(For personal hacks, you can of course just make up your own little short names. Little hacks don't need a big process; that's the whole attraction of the abstract namespace.)

Now that I've poked around this, I'm going to use it for future little Linux-only hacks because checking permissions (if it's even necessary) is a lot more convenient than the whole hassle of dealing with socket files. For things I write that are intended to be portable, I don't see much point; portable code has to deal with socket files so I might as well use regular Unix domain socket names and socket files all the time.

(A bunch of my personal hacks are de facto Linux only because my desktop machines are Linux. I'll regret that laziness if I ever try to move my desktop environment to FreeBSD or the like, but that seems highly unlikely at the moment.)

linux/SocketAbstractNamespace written at 02:17:35

2015-08-17

Getting dd's skip and seek straight once and for all

Earlier today I wanted to lightly damage a disk in a test ZFS pool in order to make sure that some of our status monitoring code was working right when ZFS was recovering from checksum failures. The reason I wanted to do light damage is that under normal circumstances, if you do too much damage to a disk, ZFS declares the disk bad and ejects it from your pool entirely; I didn't want this to happen.

So I did something like this:

for i in $(seq 128 256 10240); do
    dd if=/dev/urandom of=<disk> bs=128k count=4 skip=$i
done

The intent was to poke 512 KB of random data into the disk at a number of different places, with the goal of both hopefully overwriting space that was actually in use and not overwriting too much of it. This turned out to actually not do very much and I spent some time scratching my head before the penny dropped.

I've used skip before and honestly, I wasn't thinking clearly here. What I actually wanted to use was seek. The difference is this:

skip skips over initial data in the input, while seek skips over initial data in the output.

(Technically I think skip usually silently consumes the initial input data you asked it to skip over, although dd may try to lseek() on inputs that seem to support it. seek definitely must lseek() and dd will error out if you ask it to seek on something that doesn't support lseek(), like a pipe.)

What I was really doing with my dd command was throwing away increasing amounts of data from /dev/urandom and then repeatedly writing 512 KB (of random data) over the start of the disk. This was nowhere near what I intended and certainly didn't have the effects on ZFS that I wanted.
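
In other words, what I should have run was the same loop with seek instead of skip:

for i in $(seq 128 256 10240); do
    dd if=/dev/urandom of=<disk> bs=128k count=4 seek=$i
done

This writes 512 KB of random data at steadily increasing offsets in the output disk, which is what I wanted in the first place.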

I guess the way for me to remember this is 'skip initial data from the input, seek over space in the output'. Hopefully it will stick after this toe-stubbing experience.

Sidebar: the other thing I initially did wrong

The test pool was full of test files, which I had created by copying /dev/zero into files. My initial dd was also using /dev/zero to overwrite disk blocks. It struck me that I was likely to be mostly overwriting file data blocks full of zeroes with more zeroes, which probably wasn't going to cause checksum failures.

unix/DdSkipVersusSeek written at 22:34:17

Why languages like 'declare before use' for variables and functions

I've been reading my way through Lisp as the Maxwell's equations of software and ran into this 'problems for the author' note:

As a general point about programming language design it seems like it would often be helpful to be able to define procedures in terms of other procedures which have not yet been defined. Which languages make this possible, and which do not? What advantages does it bring for a programming language to be able to do this? Are there any disadvantages?

(I'm going to take 'defined' here as actually meaning 'declared'.)

To people with certain backgrounds (myself included), this question has a fairly straightforward set of answers. So here's my version of why many languages require you to declare things before you use them. We'll come at it from the other side, by asking what your language can't do if it allows you to use things before declaring them.

(As a digression, we're going to assume that we have what I'll call an unambiguous language, one where you don't need to know what things are declared as in order to know what a bit of code actually means. Not all languages are unambiguous; for example C is not (also). If you have an ambiguous language, it absolutely requires 'declare before use' because you can't understand things otherwise.)

To start off, you lose the ability to report a bunch of errors at the time you're looking at a piece of code. Consider:

lvar = ....
res = thang(a, b, lver, 0)

In basically all languages, we can't report the lver for lvar typo (we have to assume that lver is an unknown global variable), we don't know if thang is being called with the right number of arguments, and we don't even know if thang is a function instead of, say, a global variable. Or if it even exists; maybe it's a typo for thing. We can only find these things out when all valid identifiers must have been declared; in fully dynamic languages like Lisp and Python, that's 'at the moment where we reach this line of code during execution'. In other languages we might be able to emit error messages only at the end of compiling the source file, or even when we try to build the final program and find missing or wrong-typed symbols.
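
Python makes this concrete. The following compiles without a complaint and only blows up when it's actually executed (thang and lver are deliberately never defined):

def f(a, b):
    lvar = a + b
    return thang(a, b, lver, 0)   # typo'd lver and missing thang; no error yet

f(1, 2)   # NameError: name 'thang' is not defined -- only now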

In languages with typed variables and arguments, we don't know if the arguments to thang() are the right types and if thang() returns a type that is compatible with res. Again we'll only be able to tell when we have all identifiers available. If we want to do this checking before runtime, the compiler (or linker) will have to keep track of the information involved for all of these pending checks so that it can check things and report errors once thang() is defined.

Some typed languages have features for what is called 'implicit typing', where you don't have to explicitly declare the types of some things if the language can deduce them from context. We've been assuming that res is pre-declared as some type, but in an implicit typing language you could write something like:

res := thang(a, b, lver, 0)
res = res + 20

At this point, if thang() is undeclared, the type of res is also unknown. This will ripple through to any code that uses res, for example the following line; is that line valid, or is res perhaps a complex structure that can in no way have 20 added to it? We can't tell until later, perhaps much later.

In a language with typed variables and implicit conversions between some types, we don't know what type conversions we might need in either the call (to convert some of the arguments) or the return (to convert thang()'s result into res's type). Note that in particular we may not know what type the constant 0 is. Even languages without implicit type conversions often treat constants as being implicitly converted into whatever concrete numeric type they need to be in any particular context. In other words, thang()'s last argument might be a float, a double, a 64-bit unsigned integer, a 32-bit signed integer, or whatever, and the language will convert the 0 to it. But it can only know what conversion to do once thang() is declared and the types of its arguments are known.

This means that a language with any implicit conversions at all (even for constants like 0) can't actually generate machine code for this section until thang() is declared even under the best of circumstances. However, life is usually much worse for code generation than this. For a start, most modern architectures pass and return floating point values in different ways than integer values, and they may pass and return more complex values in a third way. Since we don't know what type thang() returns (and we may not know what types the arguments are either, cf lver), we basically can't generate any concrete machine code for this function call at the time we parse it even without implicit conversions. The best we can do is generate something extremely abstract with lots of blanks to be filled in later and then sit on it until we know more about thang(), lver, and so on.

(And implicit typing for res will probably force a ripple effect of abstraction on code generation for the rest of the function, if it doesn't prevent it entirely.)

This 'extremely abstract' code generation is in fact what things like Python bytecode are. Unless the bytecode generator can prove certain things about the source code it's processing, what you get is quite generic and thus slow (because it must defer a lot of these decisions to runtime, along with checks like 'do we have the right number of arguments').
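
You can see just how generic this is with the dis module (the exact opcode names vary between CPython versions):

import dis

def f(a, b):
    return thang(a, b, lver, 0)

dis.dis(f)

The disassembly is a handful of completely generic name loads followed by a generic call; every name lookup, arity check, and conversion has been deferred to runtime.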

So far we've been talking about thang() as a simple function call. But there are a bunch of more complicated cases, like:

res = obj.method(a, b, lver, 0)
res2 = obj1 + obj2

Here we have method calls and operator overloading. If obj, obj1, and/or obj2 are undeclared or untyped at this point, we don't know if these operations are valid (the actual obj might not have a method() method) or what concrete code to generate. We need to generate either abstract code with blanks to be filled in later or code that will do all of the work at runtime via some sort of introspection (or both, cf Python bytecode).

All of this prepares us to answer the question about what sort of languages require 'declare before use': languages that want to do good error reporting or (immediately) compile to machine code or both without large amounts of heartburn. As a pragmatic matter, most statically typed languages require declare before use because it's simpler; such languages either want to generate high quality machine code or at least have up-front assurances about type correctness, so they basically fall into one or both of those categories.

(You can technically have a statically typed language with up-front assurances about type correctness but without declare before use; the compiler just has to do a lot more work and it may well wind up emitting a pile of errors at the end of compilation when it can say for sure that lver isn't defined and you're calling thang() with the wrong number and type of arguments and so on. In practice language designers basically don't do that to compiler writers.)

Conversely, dynamic languages without static typing generally don't require declare before use. Often the language is so dynamic that there is no point. Carefully checking the call to thang() at the time we encounter it in the source code is not entirely useful if the thang function can be completely redefined (or deleted) by the time that code gets run, which is the case in languages like Lisp and Python.

(In fact, given that thang can be redefined by the time the code is executed we can't even really error out if the arguments are wrong at the time when we first see the code. Such a thing would be perfectly legal Python, for example, although you really shouldn't do that.)

programming/WhyDeclareBeforeUse written at 01:03:28

2015-08-16

My irritation with Intel's CPU segmentation (and why it probably exists)

I'd like the CPU in my next machine to have ECC RAM, for all sorts of good reasons. I'm also more or less set on using Intel CPUs, because as far as I know they're still on top in terms of performance and power efficiency. As I've written about before, this leaves me with the problem that only some Intel CPUs and chipsets actually support ECC.

(It appears that Intel will actually give you a straightforward list of CPUs here, which is progress from the bad old days. Desktop chipsets with ECC support are listed here, and there's always the Wikipedia page.)

One way to describe what Intel is doing here is market segmentation. Want ECC? You'll pay more. Except it's not that simple, because the CPUs missing ECC support are the middle models, especially the attractive and relatively inexpensive ones in the i5 line and to a lesser extent the i7 line (there are some high-end i7s with ECC support); at the low end there are a number of inexpensive i3s with ECC support, including recent ones. This is market segmentation with a twist.

What I assume is going on is that Intel is zealously protecting the server CPU and chipset market by keeping server makers from building servers that use attractive midrange desktop CPUs and chipsets. These CPUs provide quite a decent amount of performance, CPU cores, and so on, but because they're aimed at the midrange market they sell for not all that much compared to 'server' CPUs (and the bleeding edge of desktop CPUs), which means that Intel makes a lot less from your server. So Intel deliberately excludes ECC support from these models to make them less attractive on servers, where customers are more likely to insist on it and be willing to pay more. Similarly Intel keeps ECC support out of many 'desktop' chipsets so that they don't turn into de facto server chipsets.

(Intel could try to keep CPUs and chipsets out of servers by limiting how much memory they support, and to a certain extent Intel does. The problem for Intel is that desktop users long ago started demanding enough memory for many servers.)

At the same time, Intel supports ECC in lower-end CPUs and chipsets because there's also a market for low-cost and relatively low performance servers; sometimes you just want a 1U server with some CPU and RAM and disk for some undemanding purpose. This market would be just as happy to use AMD CPUs and AMD certainly has relatively low performance CPUs to offer (and I believe they have ECC; if not, they certainly could if AMD saw a market opening). So if you're happy with a two-core i3 in your server or even an Atom CPU, well, Intel will sell you one with ECC support (and for cheap).

However much I understand this market segmentation, it obviously irritates me because I fall exactly into that midrange CPU segment. I don't want the expensive (and generally hot) high end CPUs, but I also want more than just the 2-core i3 level of performance. Since Intel is not about to give up free money, this is where I wish that they had more competition in the form of AMD doing better at making attractive midrange CPUs (with ECC).

(I think that Intel having more widespread ECC support in CPUs and chipsets would lead to motherboard companies supporting it on their motherboards, but I could be wrong.)

tech/IntelCPUSegmentationIrritation written at 02:29:23

