Wandering Thoughts


Your exposure from retaining Let's Encrypt account keys

In a comment on my entry on how I think you have lots of Let's Encrypt accounts, Aristotle Pagaltzis asked a good question:

Taking this logic to its logical conclusion: as long as you can arrange to prove your control of a domain under some ACME challenge at any time, should you not immediately delete an account after obtaining a certificate through it?

(Granted – in practice, there is the small matter that deleting accounts appears unimplemented, as per your other entry…)

Let's take the last bit first: for security purposes, it's sufficient to destroy your account's private key. This leaves dangling registration data on Let's Encrypt's servers, but that's not your problem; with your private key destroyed, no one can use your authorized account to get any further certificates.

(If they can, either you or the entire world of cryptography have much bigger problems.)

For the broader issue: yes, in theory it's somewhat more secure to immediately destroy your private key the moment you have successfully obtained a certificate. However, there is a limit to how much security you get this way, because someone with unrestricted access to your machine can get their own authorization for it with an account of their own. If I have root access to your machine and you normally run a Let's Encrypt authorization process from it, I can just use my own client to do the same and get my own authorized account. I can then take the private key off the machine and later use it to get my own certificates for your machine.

(I can also reuse an account I already have and merely pass the authorization check, but in practice I might as well get a new account to go with it.)

The real exposure for existing authorized accounts is when it's easier to get at the account's private key than it is to get unrestricted access to the machine itself. If you keep the key on the machine and only accessible to root, well, I won't say you have no additional exposure at all, but in practice your exposure is probably fairly low; there are a lot of reasonably sensitive secrets that are protected this way and we don't consider it a problem (machine SSH host keys, for example). So in my opinion your real exposure starts going up when you transport the account key off the machine, for example to reuse the same account on multiple machines or over machine reinstalls.

As a compromise you might want to destroy account keys every so often, say once a year or every six months. This limits your long-term exposure to quietly compromised keys while not filling up Let's Encrypt's database with too many accounts.

As a corollary to this and the available Let's Encrypt challenge methods, someone who has compromised your DNS infrastructure can obtain their own Let's Encrypt authorizations (for any account) for any arbitrary host in your domain. If they issue a certificate for it immediately you can detect this through certificate transparency monitoring, but if they sit on their authorization for a while I don't think you can tell. As far as I know, LE provides no way to report on accounts that are authorized for things in your domain (or any domain), so you can't monitor this in advance of certificates being issued.
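The certificate transparency monitoring half of this can at least be sketched. Here's a hedged Python example of the filtering step; the JSON field names (`name_value`, `not_after`) and timestamp format follow what crt.sh appears to return from a query like `https://crt.sh/?q=%25.example.com&output=json`, but that URL and those details are assumptions, and the actual fetching is left out:

```python
from datetime import datetime

# Hypothetical sketch: filter certificate-transparency log entries (in the
# JSON shape crt.sh seems to return) down to unexpired certificates that
# cover names in your domain. This catches issued certificates; as noted
# above, it can't see dormant ACME authorizations.

def unexpired_certs_for_domain(entries, domain, now=None):
    now = now or datetime.utcnow()
    hits = []
    for e in entries:
        names = e.get("name_value", "").split("\n")
        if any(n == domain or n.endswith("." + domain) for n in names):
            # Timestamp format assumed from crt.sh output.
            expiry = datetime.strptime(e["not_after"], "%Y-%m-%dT%H:%M:%S")
            if expiry > now:
                hits.append((names, expiry))
    return hits
```

This is only half a monitoring system (you'd want to run it periodically and diff the results), but it shows the shape of the check.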

For some organizations, compromising your DNS infrastructure is about as difficult as getting general root access (this is roughly the case for us). However, for people who use outside DNS providers, such a compromise may only require gaining access to one of your authorized API keys for their services. And if you have some system that allows people to add arbitrary TXT records to your DNS with relatively little access control, congratulations, you now have a pretty big exposure there.

sysadmin/LetsEncryptAccountExposure written at 01:22:08; Add Comment


An odd and persistent year old phish spammer

We have a number of more or less internal mailing lists for things like mailing all of the technical staff. They have at least somewhat unusual names and don't appear in things like email directories or most users' address books. Back almost a year ago (21st April 2016), one of them got a phish spam:

From codewizard@approject.com [...]
Received: from [] (helo=approject.com) [...]
From: "Capital One 360" <codewizard@approject.com>
Subject: Your Capital one 360 Account Urgent Login Reminder


(With an attached PDF.)

Slightly over a month later, the same address got another one:

From codewizard@approject.com [...]
Received: from [] (helo=approject.com) [...]
From: "USAA SECURITY" <codewizard@approject.com>
Subject: Your Account Log-on Reminder

A week later it got a third one, with the same MAIL FROM (and EHLO), but from a different IP address yet again. Then a fourth two weeks later.

At this point I'd had enough, so I threw the MAIL FROM of codewizard@approject.com into the per-address server side email blocks for this particular address. You can probably guess what has happened periodically ever since then:

2017-03-23 18:11:31 H=(approject.com) [] F=<codewizard@approject.com> rejected RCPT <redacted>: blocked by personal senders blacklist.

(As I write this, that IP address is on the Spamhaus CSS.)
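For illustration, the logic of a per-address sender block like the one behind that rejection can be sketched in a few lines of Python. The recipient address and the data format here are made up; a real MTA such as Exim implements this with its own ACL machinery:

```python
# Minimal sketch of a per-recipient, server-side sender blacklist in the
# spirit of the Exim rejection logged above. The recipient address is a
# hypothetical stand-in for the real (redacted) mailing list address.

PERSONAL_SENDER_BLOCKS = {
    "staff-list@example.com": {"codewizard@approject.com"},
}

def check_rcpt(mail_from, rcpt_to):
    """Return a rejection message if this envelope sender is blocked for
    this recipient, or None to accept."""
    if mail_from.lower() in PERSONAL_SENDER_BLOCKS.get(rcpt_to, set()):
        return "blocked by personal senders blacklist."
    return None
```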

It's clear that whatever is doing this spamming is widely dispersed, very persistent, and is using a basically unique address list that it has a death grip on (this internal mailing list of ours hasn't started getting other sorts of spam, just this one phish spammer). Maybe this is wandering malware that is now operating more or less autonomously (like some do), or maybe this is someone running a long-term campaign who cannot be bothered to disguise the distinctive signatures here (those being the envelope sender and the EHLO).

(This isn't the first time I've seen spammer persistence illustrated, but I think it's the first time it's clearly a single spammer or spam agent instead of address lists being shared and reshared endlessly.)

PS: Since various aspects of this phish spam have apparently mutated over time, it's probably not autonomous malware in action but instead someone running a long-term campaign. I don't know why they're so fixated on using this very distinctive MAIL FROM, but it's certainly handy so please don't change, whoever you are.

spam/PersistentPhishSpammer written at 22:25:02; Add Comment


ARM servers had better just work if vendors want to sell very many

A few years ago I wrote about the cost challenge facing hypothetical future ARM servers here; to attract our interest, they'd have to be cheaper or better than x86 servers in some way that we cared about. At the time I made what turns out to be a big assumption: I assumed that ARM servers would be like x86 servers in that they would all just work with Linux. Courtesy of Pete Zaitcev's Standards for ARM computers and Linaro and the follow-on Standards for ARM computers in 2017, I've now learned that this was a pretty optimistic assumption. The state of play in 2017 is that LWN can write an article called Making distributions Just Work on ARM servers that describes not current reality but an aspirational future that may perhaps arrive some day.

Well, you know, no wonder no one is trying to actually sell real ARM servers. They're useless to a lot of people right now. Certainly they'd be useless to us, because we don't want to buy servers with a bespoke bootloader that probably only works with one specific Linux distribution (the one the vendor has qualified) and may not work in the future. A basic prerequisite for us being interested in ARM servers is that they be as useful for generic Linux as x86 servers are (modulo which distributions have ARM versions at all). If we have to buy one sort of servers to run Ubuntu but another sort to run CentOS, well, no. We'll buy x86 servers instead because they're generic and we can even run OpenBSD on them.

There are undoubtedly people who work at a scale with a server density where things like the power advantages of ARM might be attractive enough to overcome this. These people might even be willing to fund their own bootloader and distribution work. But I think that there are a lot of people who are in our situation; we wouldn't mind extra-cheap servers to run Linux, but we aren't all that interested in buying servers that might as well be emblazoned 'Ubuntu 16.04 only' or 'CentOS only' or the like.

I guess this means I can tune out all talk of ARM servers for Linux for the next few years. If the BIOS-level standards for ARM servers for Linux are only being created now, it'll be at least that long until there's real hardware implementing workable versions of them that isn't on the bleeding edge. I wouldn't be surprised if it takes half a decade before we get ARM servers that are basically plug and play with your choice of a variety of Linux distributions.

(I don't blame ARM or anyone for this situation, even though it sort of boggles me. Sure, it's not a great one, but the mere fact that it exists means that ARM vendors haven't particularly cared about the server market so far (and may still not). It's hard to blame people for not catering to a market that they don't care about, especially when we might not care about it either when the dust settles.)

tech/ArmServersHaveToJustWork written at 23:14:50; Add Comment


Setting the root login's 'full name' to identify the machine that sent email

Yesterday I wrote about making sure you can identify what machine sent you a status email, and in the comments Sotiris Tsimbonis shared a brilliant yet simple solution to this problem:

We change the gecos info for this purpose.

chfn -f "$HOSTNAME root" root

Take it from me; this is beautiful genius (so much so that both we and another group here immediately adopted it). It's so simple yet still extremely effective, because almost everything that sends email does so using programs like mail that will fill out the From: header using the login's GECOS full name from /etc/passwd. You get email that looks like:

From: root@<your-domain> (hebikera root)

This does exactly what we want by immediately showing the machine that the email is from. In fact many mail clients these days will show you only the 'real name' from the From: header by default, not the actual email address (I'm old-fashioned, so I see the traditional full From: header).

This likely works with any mail-sending program that doesn't require completely filled out email headers. It definitely works in the Postfix sendmail cover program for 'sendmail -t' (as well as the CentOS 6 and 7 mailx, which supplies the standard mail command).

(As an obvious corollary, you can also use this trick for any other machine-specific accounts that send email; just give them an appropriate GECOS 'full name' as well.)
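The mechanics of why this works can be shown in a few lines of Python. This is a sketch, not anything mail(1) actually runs; the example.com domain is a stand-in, and the only real claim is that the 'full name' comes out of the GECOS field of /etc/passwd:

```python
import pwd
from email.utils import formataddr

# Sketch: build the kind of From: header that mail-sending programs
# generate, taking the 'real name' from the login's GECOS field. After
# 'chfn -f "$HOSTNAME root" root', this yields something like
# 'hebikera root <root@example.com>'.

def from_header_for(login, domain="example.com"):
    # The full name is the first comma-separated GECOS subfield.
    gecos_name = pwd.getpwnam(login).pw_gecos.split(",")[0]
    return formataddr((gecos_name, "%s@%s" % (login, domain)))
```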

There are two perhaps obvious cautions here. First, if you ever rename machines you have to remember to re-chfn the root login and any other such logins to have the correct hostname in them. It's probably worth creating an officially documented procedure for renaming machines, since there are other things you'll want to update as well (you might even script it). Second, if you have some sort of password synchronization system, you need it to leave root's GECOS full name alone (although it can update root's password). Fortunately ours already does this.

sysadmin/IdentifyMachineEmailByRootName written at 23:49:10; Add Comment

Making sure you can identify what machine sent you a status email

I wrote before about making sure that system email works, so that machines can do important things like tell you that their RAID array has lost redundancy and you should do something about that. In a comment on that entry, -dsr- brought up an important point, which is that you want to be able to easily tell which machine sent you email.

In an ideal world, everything on every machine that sends out email reports would put the machine's hostname in, say, the Subject: header. This would give you reports like:

Subject: SMART error (FailedOpenDevice) detected on host: urd

In the real world you also get helpful emails like this:

Subject: Health

Device: /dev/sdn [SAT], FAILED SMART self-check. BACK UP DATA NOW!

The only way for us to tell which machine this came from was to look at either the Received: headers or the Message-ID, which is annoying.

There are at least two ways to achieve this. The first approach is what -dsr- said in the comment, which is to make every machine send its email to a unique alias on your system. This unfortunately has at least two limitations. The first is that it somewhat clashes with a true 'null client' setup, where your machines dump absolutely all of their email on the server. A straightforward null client does no local rewriting of email at all, so to get this you need a smarter local mailer (and then you may need per-machine setup, hopefully automated). The second limitation is that there's no guarantee that all of the machine's email will be sent to root (and thus be subject to simple rewriting). It's at least likely, but machines have been known to send status email to all sorts of addresses.

(I'm going to assume that you can arrange for the unique destination alias to be visible in the To: header.)

You can somewhat get around this by doing some of the rewriting on your central mail handler machine (assuming that you can tell the machine email apart from regular user email, which you probably want to do anyways). This needs a relatively sophisticated configuration, but it probably can be done in something like Exim (which has quite powerful rewrite rules).

However, if you're going to do this sort of magic in your central mail handler machine, you might as well do somewhat different magic and alter the Subject: header of such email to include the host name. For instance, you might just add a general rule to your mailer so that all email from root that's going to root will have its Subject: altered to add the sending machine's hostname, eg 'Subject: [$HOSTNAME] ....'. Your central mail handler already knows what machine it received the email from (the information went into the Received header, for example). You could be more selective, for instance if you know that certain machines are problem sources (like the CentOS 7 machine that generated my second example) while others use software that already puts the hostname in (such as the Ubuntu machine that generated my first example).
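To make the rewrite concrete, here's a hedged Python sketch of the Subject: transformation itself. A real deployment would do this inside the MTA (for example with Exim filter or rewrite rules), not in a separate script, and the hostname argument is whatever your central mailer recorded at SMTP time:

```python
from email import message_from_string

# Sketch: tag machine-generated mail with the sending host's name, turning
# 'Subject: Health' into 'Subject: [urd] Health'. Idempotent, so mail that
# already carries the tag isn't tagged twice.

def tag_subject(raw_msg, hostname):
    msg = message_from_string(raw_msg)
    subj = msg.get("Subject", "")
    if not subj.startswith("[%s]" % hostname):
        del msg["Subject"]
        msg["Subject"] = "[%s] %s" % (hostname, subj)
    return msg.as_string()
```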

I'm actually more attracted to the second approach than the first one. Sure, it's a big hammer and a bit crude, but it creates the easy to see marker of the source machine that I want (and it's a change we only have to make to one central machine). I'd feel differently if we routinely got status emails from various machines that we just filed away (in which case the alias-based approach would give us easy per-machine filing), but in practice our machines only email us occasionally and it's always going to be something that goes to our inboxes and probably needs to be dealt with.

sysadmin/IdentifyingStatusEmailSource written at 01:11:32; Add Comment


Modern Linux kernel memory allocation rules for higher-order page requests

Back in 2012 I wrote an entry on why our Ubuntu 10.04 server had a page allocation failure, despite apparently having a bunch of memory free. The answer boiled down to the NFS code wanting to allocate a higher-order request of 64 Kb of (physically contiguous) memory and the kernel having some rather complicated and confusing rules for when this was permitted when memory was reasonably fragmented and low(-ish).

That was four and a half years ago, back in the days of kernel 3.5. Four years is a long time for the kernel. Today the kernel people are working on 4.11 and, unsurprisingly, things have changed around a bit in this area of code. The function involved is still called __zone_watermark_ok() in mm/page_alloc.c, but it is much simpler today. As far as I can tell from the code, the new general approach is nicely described by the function's current comment:

Return true if free base pages are above 'mark'. For high-order checks it will return true of the order-0 watermark is reached and there is at least one free page of a suitable size. Checking now avoids taking the zone lock to check in the allocation paths if no pages are free.

The 'order-0' watermark is the overall lowmem watermark (which I believe is low: from my old entry). This bounds all requests for obvious reasons; as the code says in a comment, if a request for a single page is not something that can go ahead, requests for more than one page certainly can't. Requests for order-0 pages merely have to pass this watermark; if they do, they get a page.

Requests for higher-order pages have to pass an obvious additional check, which is that there has to be a chunk of at least the required order that's still free. If you ask for a 64 Kb contiguous chunk, your request can't be satisfied unless there's at least one chunk of size 64 Kb or bigger left, but it's satisfied if there's even a single such chunk. Unlike in the past, as far as I can tell requests for higher-order pages can now consume all of those pages, possibly leaving only fragmented order-0 4 Kb pages free in the zone. There is no longer any attempt to have a (different) low water mark for higher-order allocations.
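As a toy model, the logic as I've described it can be sketched in Python. This is my paraphrase of __zone_watermark_ok(), not the kernel code; the free_by_order dict is a stand-in for the kernel's per-order free-area lists, and the real function has more going on (highatomic reserves, CMA, and so on):

```python
# Toy sketch of the post-4.4 watermark check described above: an order-0
# request just needs the zone above the watermark; a higher-order request
# additionally needs at least one free chunk of sufficient order.

def zone_watermark_ok(free_pages, watermark, order, free_by_order):
    # The order-0 watermark bounds everything: if a single page can't be
    # had, neither can a contiguous run of them.
    if free_pages <= watermark:
        return False
    if order == 0:
        return True
    # Any single free chunk of order >= the requested order will do.
    top = max(free_by_order, default=order)
    return any(free_by_order.get(o, 0) > 0 for o in range(order, top + 1))
```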

This change happened in late 2015, in commit 97a16fc82a; as far as I can tell it comes after kernel 4.3 and before kernel 4.4-rc1. I believe it's one commit in a series by Mel Gorman that reworks various aspects of kernel memory management in this area. His commit message has an interesting discussion of the history of high-order watermarks and why they're apparently not necessary any more.

(Certainly I'm happy to have this odd kernel memory allocation failure mode eliminated.)

Sidebar: Which distributions have this change

Ubuntu 16.04 LTS uses kernel '4.4.0' (plus many Ubuntu patches); it has this change, although with some Ubuntu modifications from the stock 4.4.0 code. Ubuntu 14.04 LTS has kernel 3.13.0 so it shouldn't have this change.

CentOS 7 is using a kernel labeled '3.10.0'. Unsurprisingly, it does not have this change and so should have the old behavior, although Red Hat has been known to patch their kernels so much that I can't be completely sure that they haven't done something here.

Debian Stable has kernel 3.16.39, and thus should also be using the old code and the old behavior. Debian Testing ('stretch') has kernel 4.9.13, so it should have this change and so the next Debian stable release will include it.

linux/ModernPageAllocRules written at 21:54:42; Add Comment

My theory on why Go's gofmt has wound up being accepted

In Three Months of Go (from a Haskeller's perspective) (via), Michael Walker makes the following observation in passing:

I do find it a little strange that gofmt has been completely accepted, whereas Python’s significant whitespace (which is there for exactly the same reason: enforcing readable code) has been much more contentious across the programming community.

As it happens, I have a theory about this: I think it's important that gofmt only has social force. By this I mean that you can write Go code in whatever style and indentation you want, and the Go compiler will accept it (in some styles you'll have to use more semicolons than in others). This is not the case in Python, where the language itself flatly insists that you use whitespace in roughly the correct way. In Go, the only thing 'forcing' you to put your code through gofmt is the social expectations of the Go community. This is a powerful force (especially when people learning Go also learn 'run your code through gofmt'), but it is a soft force as compared to the hard force of Python's language specification, and so I think people are more accepting of it. Many of the grumpy reactions to Python's indentation rules seem to be not because the formatting it imposes is bad but because people reflexively object to being forced to do it.

(This also means that Go looks more conventional as a programming language; it has explicit block delimiters, for example. I think that people often react to languages that look weird and unconventional.)

There is an important practical side effect of this that is worth noting, which is that your pre-gofmt code can be completely sloppy. You can just slap some code into the file with terrible indentation or no indentation at all, and gofmt will fix it all up for you. This is not the case in Python; because whitespace is part of the grammar, your Python code must have good indentation from the start and cannot be fixed later. This makes it easier to write Go code (and to write it in a wide variety of editors that don't necessarily have smart indentation support and so on).

The combination of these two gives the soft force of gofmt a great deal of power over the long term. It's quite convenient to be able to scribble sloppily formatted code down and then have gofmt make it all nice for you, but if you do this you must go along with gofmt's style choices even if you disagree with some of them. You can hold out and stick to your own style, but you're doing things the hard way as well as the socially disapproved way, and in my personal experience sooner or later it's not worth fighting Go's city hall any more. The lazy way wins out and gofmt notches up another quiet victory.

(It probably also matters that a number of editors have convenient gofmt integration. I wouldn't use it as a fixup tool as much as I do if I had to drop the file from my editor, run gofmt by hand, and then reload the now-changed file. And if it was less of a fixup tool, there would be less soft pressure of 'this is just the easiest way to fix up code formatting so it looks nice'; I'd be more likely to format my Go code 'correctly' in my editor to start with.)

programming/GoWhyGofmtAccepted written at 01:01:12; Add Comment


Using Firefox's userContent.css for website-specific fixes

In a comment on my entry on using pup to fix a Twitter issue, Chris Povirk suggested using Firefox's userContent.css feature to fix Twitter by forcing the relevant bit of Twitter's HTML to actually display. This is actually a pretty good idea, except for one problem; if you write normal CSS there, it will apply to all sites. Anywhere there is some bit of HTML that matches your CSS selector, Firefox will apply your userContent.css CSS fixup. In the case of this Twitter issue, this is probably reasonably safe because it's not likely that anyone else is going to use a CSS class of 'twitter-timeline-link', but in other cases it's not so safe and in general it makes me twitchy.

Luckily, it turns out that there is a way around this (as I found out when I did some research for this entry). In Firefox it's possible to write CSS that's restricted to a single site or even a single URL (among some other options), using a special Mozilla extension to CSS (it's apparently been proposed as a standard feature but delayed repeatedly). In fact if you use Stylish or some other extensions you've already seen this used, because Stylish relies on this Mozilla CSS feature to do its site-specific rules.

How you do this is with a @-moz-document CSS rule (see here for Mozilla's full list of their CSS extensions). The Stylish userstyles wiki has some examples of what you can do (and also), which goes all the way to regular expression matches (and also). If we want to restrict something to a domain, for example twitter.com, the matching operation we want is domain(twitter.com) or perhaps url-prefix(https://twitter.com/).

So in this particular case, the overall userContent.css I want is:

@-moz-document domain(twitter.com) {
  .twitter-timeline-link {
    display: inline !important;
  }
}
(I'm willing to have this affect any Twitter subdomains as well as twitter.com itself.)

This appears to work on Twitter, and I'm prepared to believe that it doesn't affect any other website without bothering to try to construct a test case (partly because it certainly seems to work for things like Stylish). I don't know if using just userContent.css is going to have the memory leak I see with Stylish, but I guess I'm going to find out, since I've now put this Twitter fix in my brand new userContent.css file.

An extension like Stylish appears to have only a few advantages over just modifying userContent.css, but one of them is a large one; Stylish can add and change its CSS modifications on the fly, whereas with userContent.css you appear to have to quit and restart your Firefox session to pick up changes. For me this is an acceptable tradeoff to avoid memory leaks, because in practice I modified my overrides only rarely even when I was using Stylish.

web/FirefoxPerSiteUserCSS written at 01:53:29; Add Comment


Part of why Python 3.5's await and async have some odd usage restrictions

Python 3.5 added a new system for coroutines and asynchronous programming, based around new async and await keywords (which have the technical details written up at length in PEP 492). Roughly speaking, in terms of coroutines implemented with yield from, await replaces 'yield from' (and is more powerful). So what's async for? Well, it marks a function that can use await. If you use await outside an async function, you'll get a syntax error. Functions marked async have some odd restrictions, too, such as that you can't use yield or yield from in them.

When I described doing coroutines with yield from here, I noted that it was potentially error prone because in order to make everything work you had to have an unbroken chain of yield from from top to bottom. Break the chain or use yield instead of yield from, and things wouldn't work. And because both yield from and yield are used for regular generators as well as coroutines, it's possible to slip up in various ways. Well, when you introduce new syntax you can fix issues like that, and that's part of why async and await have their odd rules.

A function marked async is a (native) coroutine. await can only be applied to coroutines, which means that you can't accidentally treat a generator like a coroutine the way you can with yield from. Simplifying slightly, coroutines can only be invoked through await; you can't call one or use them as a generator, for example as 'for something in coroutine(...):'. As part of not being generators, coroutines can't use 'yield' or 'yield from'.

(And there's only await, so you avoid the whole 'yield' versus 'yield from' confusion.)

In other words, coroutines can only be invoked from coroutines and they must be invoked using the exact mechanism that makes coroutines work (and that mechanism isn't and can't be used for or by anything else). The entire system is designed so that you're more or less forced to create that unbroken chain of awaits that makes it all go. Although Python itself won't error out at import time if you try to call an async function without await (it just won't work at runtime), there are probably Python static checkers that look for this. And in general it's an easy rule to keep track of; if it's async, you have to await it, and this status is marked right there in the function definition.

(Unfortunately it's not in the type of the function, which means that you can't tell by just importing the module interactively and then doing 'type(mod.func)'.)
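A minimal example of that unbroken chain of awaits, using the modern asyncio.run() for brevity (it arrived in Python 3.7; in 3.5 you drive the event loop by hand):

```python
import asyncio

# inner() and outer() are both native coroutines. Calling inner() alone
# only creates a coroutine object; it's await that actually runs it and
# gets the value out.

async def inner():
    return 42

async def outer():
    # Must use await here; 'yield from inner()' would be a SyntaxError
    # inside an async function.
    value = await inner()
    return value + 1

result = asyncio.run(outer())
```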

Sidebar: The other reason you can only use await in async functions

Before Python 3.5, the following was completely valid code:

def somefunc(a1, b2):
   await = interval(a1, 10)
   otherfunc(b2, await)

In other words, await was not a reserved keyword and so could be legally used as the name of a local variable, or for that matter a function argument or a global.

Had Python 3.5 made await a keyword in all contexts, all such code would immediately have broken. That's not acceptable for a minor release, so Python needed some sort of workaround. So it's not that you can't use await outside of functions marked async; it's that it's not a keyword outside of async functions. Since it's not a keyword, writing something like 'await func(arg)' is a syntax error, just as 'abcdef func(arg)' would be.

The same is true of async, by the way:

def somefunc(a, b, async = False):
  if b == 10:
     async = True

Thus why it's a syntax error to use 'async for' or 'async with' outside of an async function; outside of such functions async isn't even a keyword so 'async for' is treated the same as 'abcdef for'.

(I'm sure this makes Python's parser that much more fun.)

python/AsyncAwaitRestrictionsWhy written at 01:11:54; Add Comment


Overcommitting virtual memory is perfectly sensible

Every so often people get quite grumpy about the fact that some systems allow you to allocate more virtual memory than the system can possibly provide if you actually try to use it all. Some of these people go so far as to say that this overcommitting is only being allowed because of all of those sloppy, lazy programmers who can't be bothered to properly manage memory, and so ask for (far) more memory than they'll actually use.

I'm afraid that these people are living in a dream world. I don't mean that in a pragmatic sense; I mean that in the sense that they are ignoring the fact that we pervasively trade memory for speed all through computing, from programming practices all the way down to CPUs. Over-use and over-allocation of memory is pervasive, and once you put enough of that together what you get is virtual memory that is allocated but never used. It's impossible to be comprehensive because once you start looking wasted space is everywhere, but here are some examples.

Essentially every mainstream memory allocator rounds up memory allocation sizes or otherwise wastes space. Slab allocation leaves space unused at the end of pages, for example, and can round the size you ask for up to a commonly used size to avoid having too many slabs. Most user-level allocators request space from the operating system in relatively large chunks in order to amortize the cost of expensive operating system calls, rather than go back to the OS every time they need another page's worth of memory, and very few of them do anything like release a single free page's worth of space back to the OS, again because of the overhead.

(There used to be memory allocators that essentially never returned space to the operating system. These days it's more common to give free space back eventually if enough of it accumulates in one spot, but on Unix that's because how we get memory from the operating system has changed in ways that make it more convenient to free it up again.)

It's extremely common for data structures like hash tables to not be sized minimally and to be resized in significant size jumps. We accept that a hash table normally needs a certain amount of empty space in order to keep performing well for insertions, and when entries are deleted from a large hash we often delay resizing the hash table down, even though we're once again wasting memory. Sometimes this is explicitly coded as part of the application; at other times it is hidden in standard language runtimes or generic support libraries.
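You can watch this trade happen from Python itself. On CPython (this is an implementation detail, so other interpreters may differ), lists over-allocate on append, so the reported allocation size jumps in steps and then stays flat while the spare capacity is consumed:

```python
import sys

# Append 100 items and record the allocated size after each append.
# CPython grows the list's backing array in chunks, trading memory for
# amortized O(1) appends, so most appends don't change the size at all.

lst, sizes = [], []
for _ in range(100):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

# Count appends where the allocated size didn't move.
flat_steps = sum(1 for a, b in zip(sizes, sizes[1:]) if a == b)
```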

Speaking of things in standard language runtimes, delayed garbage collection is of course a great way to have wasted memory and to have more memory allocated from the operating system than you actually need. Yet garbage collection is pervasively seen as a valid programming technique in large part because we've accepted a tradeoff of higher memory 'use' in exchange for faster and more reliable program development, although it can also be faster under some circumstances.

And finally there's object alignment in memory, which generally goes right down to the CPU to some degree. Yes, it's (only) bytes, but again we've shown that we're willing to trade memory for speed, and there are ripple effects as everything grows just a little bit bigger and fits just a little bit less well into memory pages and so on.

There are environments where people very carefully watch every little scrap of space and waste as little of it as possible, but they are not mainstream ones. In mainstream computing, trading memory for speed is a common and widely accepted thing; the only question is generally how much memory is worth trading for how much speed.

Sidebar: Memory waste and APIs

On Unix, one of the tradeoffs that has been made from the very start is that there are some simple, general, and flexible (and powerful) APIs that strictly speaking may commit the kernel to supplying massive amounts of memory. I am talking here primarily of fork() and especially the fork()/exec() model for starting new processes. I maintain that fork() is a good API even though it very strongly pushes you towards a non-strict handling of memory overcommit. In this it illustrates that there can be a tradeoff between memory and power even at the API level.

tech/OvercommittingMemoryIsSensible written at 01:27:38; Add Comment
