Wandering Thoughts archives

2016-07-29

My surprise problem with considering a new PC: actually building it

Earlier this week I had a real scare with my home machine, where I woke up to find it shut off and staying that way (amidst a distinct odor of burnt electronics). Fortunately this turned out not to be a dead power supply or motherboard but instead a near miss where a power connector had shorted out dramatically; once I got that dealt with, the machine powered on and hasn't had problems since. Still, it got me thinking.

Unlike many people, I don't have a collection of laptops, secondary machines, and older hardware that I can press into service in an emergency; my current home machine is pretty much it. And it's coming up on five years old. On the one hand, I already decided I didn't really want to replace it just now (and also); while I had some upgrade thoughts, they're much more modest. On the other hand, all of a sudden I would like to have a real, viable alternative if my home machine suffers another hardware failure, and buying a new current machine no longer feels quite so crazy in light of this.

So I've been thinking a bit about getting a new PC, which has opened up the surprising issue of where I'd get it from. I've never been someone to buy stock pre-built machines (whether from big vendors like Dell or just the white box builds from small stores), but at the same time I've never built a machine myself; all of my previous machines have been assembled from a parts list by local PC stores. Local PC stores which seem to have now all evaporated, rather to my surprise.

(There used to be a whole collection of little PC stores around the university that sold parts and put machines together. Over the past few years they seem to have all quietly closed up shop, or at least relocated to somewhere else. I suspect that one reason is that probably a lot fewer students are buying desktops these days.)

One logical solution is to take a deep breath and just assemble the machine myself. I know (or at least read) plenty of people who do this and don't particularly have problems; in fact I'm probably unusual in being into computers yet never having done this rite of passage myself. I've also heard that modern PCs are really fairly easy for the hobbyist to assemble (especially if you stay away from things like liquid cooling). However, I don't really like dealing with hardware all that much, plus you don't get to restore hardware from backups if you screw it up. Spending a few hours nervously screwing things together is not really my idea of fun.

(And having someone else sell me a preassembled machine means that they're on the hook for dealing with any DOA parts, however unlikely that may be with modern hardware.)

There are probably still places around Toronto that do built-to-order PCs like this. But 'around Toronto' is a big area, plus another advantage of dealing with stores around the university was that we could tap local expertise to find out who did a good job of it and who you kind of wanted to avoid.

If I was in the US, another option would be to order a prebuilt machine from a company that specializes in Linux hardware and has something with suitable specifications. I'm not particularly attached to having fine control over the parts list; I just want a good quality machine that will run Linux well and has enough drive bays. I'm not sure there's anyone doing this in Canada, though, and I certainly don't want to ship across the border. (Just shipping within Canada is enough of a hassle.)

Although part of me wants to take the plunge into assembling my own machine from parts, what I'm probably going to do to start with is ask around the university to see if people have places they like for this sort of thing. My impression is that custom built PCs are much less popular than they used to be (my co-workers just got Dell desktops in our most recent sysadmin hardware refresh, for example), but I'm sure that people still buy some. If I'm lucky, there's still a good local store that does this and I can move on to thinking about what collection of hardware I'd want.

(Of course thinking about a new machine makes me irritated about ECC, which I'll probably have to live without.)

PCBuildingProblem written at 01:00:13

2016-07-24

My view on people who are assholes on the Internet

A long time ago, I hung around Usenet in various places. One of the things that certain Usenet groups featured was what we would today call trolls; people who were deliberately nasty and obnoxious. Sometimes they were nasty in their own newsgroups; sometimes they took gleeful joy in going out to troll other newsgroups full of innocents. Back in those days there were also sometimes gatherings of Usenet people so you could get to meet and know your fellow posters. One of the consistent themes that came out of these meetups was reports of 'oh, you know that guy? he's actually really nice and quiet in person, nothing like his persona on the net'. And in general, one of the things that some of these people said when they were called on their behavior was that they were just playing an asshole on Usenet; they weren't a real asshole, honest.

Back in those days I was younger and more foolish, and so I often at least partially bought into these excuses and reports. These days I have changed my views. Here, let me summarize them:

Even if you're only playing an asshole on the net, you're still an asshole.

It's simple. 'Playing an asshole on the net' requires being an asshole to people on the net, which is 'being an asshole' even if you're selective about it. Being a selective asshole, someone who's nasty to some people and nice to others, doesn't somehow magically make you not an asshole, although it may make you more pleasant for some people to deal with (and means that they can close their eyes to your behavior in other venues). It's certainly nicer to be an asshole only some of the time than all of the time, but it's even better if you're not an asshole at all.

This is not a new idea, of course. It's long been said that the true measure of someone's character is how they deal with people like waitresses and cashiers; if they're nasty to those people, they've got a streak of nastiness inside that may come out in other times and places. The Internet just provides another venue for that sort of thing.

In general, it's long since past time that we stopped pretending that people on the Internet aren't real people. What happens on the net is real to the people that it happens to, and nasty words hurt even if one can mostly brush off a certain amount of nasty words from strangers.

(See also, which is relevant to shoving nastiness in front of people on the grounds that they were in 'public'.)

InternetAssholes written at 00:03:35

2016-07-17

DNS resolution cannot be segmented (and what I mean by that)

Many protocols involve some sort of namespace for resources. For example, in DNS this is names to be resolved and in HTTP, this is URLs (and distinct hosts). One of the questions you can ask about such protocols is this:

When a request enters a particular part of the namespace, can handling it ever require the server to go back outside that part of the namespace?

If the answer is 'no, handling the request can never escape', let's say that the protocol can be segmented. You can divide the namespace up into segments, have different segments handled by different servers, and each server only ever deals with its own area; it will never have to reach over to part of the namespace that's really handled by another server.

General DNS resolution for clients cannot be segmented this way, even if you only consider the answers that have to be returned to clients and ignore NS records and associated issues. The culprit is CNAME records, which both jump to arbitrary bits of the DNS namespace and force that information to be returned to clients. In a way, CNAME records act similarly to symlinks in Unix filesystems. The overall Unix filesystem is normally segmented (for example at mount points), but symlinks escape that; they mean that looking at /a/b/c/d can actually wind up in /x/y/z.

(NS records can force outside lookups but they don't have to be returned to clients, so you can sort of pretend that their information doesn't exist.)

Contrasting DNS with HTTP is interesting here. HTTP has redirects, which are its equivalent of CNAMEs and symlinks, but it still can be segmented because it explicitly pushes responsibility for handling the jump between segments all the way back to the original client. It's as if resolving DNS servers just returned the CNAME and left it up to client libraries to issue a new DNS request for information on the CNAME's destination.

(HTTP servers can opt to handle some redirects internally, but even then there are HTTP redirects which must be handled by the client. Clients don't ever get to slack on this, which means that servers can count on clients supporting redirects. Well, usually.)
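
To make this contrast concrete, here's a minimal Python sketch (the host names and the path are just stand-ins): the stub resolver hands back the final address with any CNAME chasing already done for us, while a low-level HTTP client hands a redirect straight back to us and leaves the follow-up request as our problem.

    import http.client
    import socket

    # DNS: the resolver chases any CNAMEs itself; all we see is the
    # final answer.
    print(socket.gethostbyname("www.example.org"))

    # HTTP: http.client deliberately does not follow redirects, so the
    # 301/302 comes back to us and we must issue a new request to the
    # Location target ourselves.
    conn = http.client.HTTPConnection("example.org", 80)
    conn.request("GET", "/some-old-page")
    resp = conn.getresponse()
    if resp.status in (301, 302, 307, 308):
        print("client must re-request:", resp.getheader("Location"))
    conn.close()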

I think this protocol design decision makes sense for DNS, especially at the time that DNS was created, but I'm not going to try to justify it here.

DNSResolutionIsNotSegmented written at 01:03:58

2016-06-18

It's easier to shrink RAID disk volumes than to reshape them

Once your storage system is using more than a single disk to create a pool of storage, there are a number of operations that you can want to do in order to restructure that pool of storage. Two of them are shrinking and reshaping. It's common for volume managers and modern filesystems like btrfs to be able to shrink a storage pool by removing a disk (or a set of mirrored disks), although not all modern filesystems support doing this. It's also becoming increasingly common for RAID (sub)systems to support reshaping RAID pools to do things like change from RAID-5 to RAID-6 (or vice versa); modern filesystems may also implement this sort of reshaping if they support RAID levels that can use it. Often shrinking and reshaping are lumped together as 'yeah, we support reorganizing storage in general'.

In thinking about this whole area lately, I've realized that shrinking is fundamentally easier to do than reshaping because of what it involves at a mechanical level. When you shrink a pool of storage, you do so by moving data to a new place; you move it from disk A, which you are getting rid of, to free space on other disks. When all the data has been moved off of disk A, you're done. By contrast, reshaping is almost always an in-place operation. You don't copy all the data to an entirely different set of disks, then copy it back in a different arrangement; instead you must very carefully shuffle it around in place, keeping exacting records of what has and hasn't been shuffled so you know how to refer to it.

For obvious reasons, filesystems et al already have plenty of code for allocating, writing, and freeing blocks. To implement shrinking, 'all' you need is an allocation policy that says 'never allocate on this entity' plus something that walks over the entire storage tree, finds anything allocated on the to-be-removed disk, triggers a re-allocation and re-write, and then updates bits of the tree appropriately. The tree walker is not trivial, but because all of this mimics what the filesystem is already doing you have natural answers for many questions about things like concurrent access by ordinary system activity, handling crashes and interruptions, and so on. Fundamentally, the whole thing is always in a normal and consistent state; it just has less and less of your data on the to-be-removed disk over time.
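
As a toy illustration of this shrink walk (with a deliberately simplified in-memory 'extent map'; real filesystems are vastly more involved), removal is nothing more than re-allocation plus re-writing under an allocation policy that excludes the outgoing disk, and the pool stays in a normal, consistent state the whole time:

    import collections

    Extent = collections.namedtuple("Extent", ["device", "data"])

    def shrink_out(extents, removed_dev, other_devs):
        # Walk the (toy) storage tree; anything on the outgoing disk is
        # re-allocated elsewhere and re-written, just like a normal write.
        for i, ext in enumerate(extents):
            if ext.device == removed_dev:
                # allocation policy: never allocate on the departing disk
                target = next(d for d in other_devs if d != removed_dev)
                extents[i] = Extent(target, ext.data)

    pool = [Extent("diskA", b"x"), Extent("diskB", b"y"), Extent("diskA", b"z")]
    shrink_out(pool, "diskA", ["diskB", "diskC"])
    assert all(e.device != "diskA" for e in pool)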

This is not true for reshaping. Very few storage systems do anything like a RAID reshaping during normal steady state operation. This means you need a whole new set of code, you're going to have to be very careful to manage things like crash resistance, and a pool of storage that's in the middle of a reshaping looks very different from how it does in normal operation (which means that you can't just abandon a reshaping in mid-progress in the way you can abandon a shrink).

(This is a pretty obvious thing if you think about it, but I hadn't really considered it before now.)

PS: Not all 'shrinking' is actually shrinking in the form I'm considering here. Removing one disk from a RAID-5 or RAID-6 pool of storage is really a RAID reshape.

(It's theoretically possible to design a modern filesystem where RAID reshapes proceed like shrinking. I don't think anyone has done so, although maybe this is how btrfs works.)

VolumeShrinkingVsReshaping written at 02:52:27

2016-06-06

My views of Windows 10 (from the outside)

This sort of starts with my tweet:

It's funny; I have a great deal of anger for what Microsoft has done with Windows 10, even though none of it affects me directly.

Since I was asked, I'm going to change my mind and write enough here to explain myself.

Based on information casually available to an outsider to the Windows ecosystem, Microsoft has done two things with Windows 10. First, they have significantly to drastically increased the amount of privacy-invasive 'telemetry' that Windows 10 installs send to Microsoft, and they have also added all sorts of advertising to it. The normal versions of Windows 10 will pitch ads at you in the Start menu, on lock screens, in included Windows applications like Solitaire, and so on.

Second, as everyone has heard by now, Microsoft has been aggressively pushing the upgrade to Windows 10 on people (or more accurately Windows machines). At this point it seems to be almost impossible to escape the upgrade; certainly it requires so many contortions that many people will be upgraded even if they don't want to be. Stories abound about important PCs in various places basically being hijacked by these forced upgrades.

All by themselves, either of these things would be bad and obnoxious; no one wants ads, invasive telemetry, or forced upgrades. Together they ascend to an entirely new level of nastiness, as Microsoft is forcing you to upgrade to an intrusive, ad-laden new operating system (and they've made it clear that the amount of ads will be increasing over time). The whole thing also comes at what could politely be called a bad time for both ads and privacy intrusion; people are becoming more and more sensitized and angry about both, as we see with the popularity of adblockers and so on.

In my view, what Microsoft has done is to reveal that as long as you use a Microsoft operating system, your computer really belongs to Microsoft instead of you. By forcing this upgrade to an OS with very different behavior for advertising and privacy intrusion, Microsoft has now demonstrated that they are willing to drastically change the terms on which they let you use your computer, as they see fit. Your computer and OS do not exist to benefit you; they exist to benefit Microsoft. If they are not doing enough for Microsoft, Microsoft will change things until they do, and you do not get a vote in the matter.

(Microsoft could try to sell more telemetry as better for you, but that is absolutely impossible with ads. Ads universally make your experience worse. By including and then increasing ads in Windows 10, Microsoft is clearly prioritizing themselves over you in the operating system.)

In my view, by doing this Microsoft has shown that they are not particularly different from the big OEMs who have for years been loading down Windows laptops and desktops with pre-installed crapware. Dell, HP, Lenovo, et al have all been more than willing to ruin the experience for people buying their hardware in order to make some additional money from other channels; now Microsoft has joined the crowd. As a result, Microsoft is just as un-trustworthy as the big OEMs are.

(More fundamentally, Microsoft is showing that they do not care about people's experience of using their operating systems, or at least that they don't consider it a priority. Microsoft will happily make your time using Windows 10 less pleasant in order to deliver some ads. And as you know, when you are clearly not the customer, you are the product. It is especially offensive to be the product when you are paying for the privilege, but apparently that is life in Microsoft's world.)

I very much hope that this winds up causing Microsoft massive problems down the road. There certainly should be consequences to changing your product from a premium thing that was the best solution to a downmarket option used by people who don't have the money to avoid the annoyances it inflicts on them. However, I cynically doubt that it will, and it may be that Microsoft Windows has already become the downmarket product that Windows 10 positions it as.

In the mean time the whole situation makes me angry every time I consider it, especially when I think of the various relatives and people I know who will have no choice but to use Windows 10 and be subjected to all of this. If Microsoft goes down in flames someday, this move of theirs has made sure that I will applaud the fires.

Sidebar: The danger of intrusive telemetry

The ever more intrusive (default) telemetry makes me especially angry because if there is one thing we have learned over the past three, five, or ten years, it is that collecting and retaining data is inherently dangerous. Once that data exists it becomes a magnet for people who want a look at it, whether that is with subpoenas in civil lawsuits, warrants in criminal cases, or NSLs from three letter agencies. Today, the only safe thing to do with data is not collect it at all or at the very least, totally minimize your collection. That Microsoft has chosen to do otherwise basically amounts to them shrugging their shoulders over the fundamental privacy of people using their operating system.

(Now we know how much Microsoft really cares about the privacy of people using their systems, as opposed to things that cause inconvenience or bad PR for Microsoft.)

Windows10MyViews written at 23:19:58

2016-05-31

Understanding the modern view of security

David Magda wrote a good and interesting question in a comment on my entry on the browser security dilemma:

I'm not sure why they can't have an about:config item called something like "DoNotBlameFirefox" (akin to Sendmail's idea).

There is a direct answer to this question (and I sort of wrote it in my comment), but the larger answer is that there has been a broad change in the consensus view of (computer) security. Browsers are a microcosm of this shift and also make a great illustration of it.

In the beginning, the view of security was that your job was to create a system that could be operated securely (often but not always it was secure by default) and give it to people. Where the system ran into problems or operating issues, it would tell people and give them options for what to do next. In the beginning, the diagnostics when something went wrong were terrible (which is a serious problem), but after a while people worked on making them better, clearer, and more understandable by normal people. If people chose to override the security precautions or operate the systems in insecure ways, well, that was their decision and their problem; you trusted people to know what they were doing and your hands were clean if they didn't. Let us call this model the 'Security 1' model.

(PGP is another poster child for the Security 1 model. It's certainly possible to use PGP securely, but it's also famously easy to screw it up in dozens of ways such that you're either insecure or you leak way more information than you intend to.)

The Security 1 model is completely consistent and logical and sound, and it can create solid security. However, like the 'Safety-I' model of safety, it has a serious problem: it not infrequently doesn't actually yield security in real world operation when it is challenged with real security failures. Even when provided with systems that are secure by default, people will often opt to operate them in insecure ways for reasons that make perfect sense to the people on the spot but which are catastrophic for security. Browser TLS security warnings have been ground zero for illustrating this; browser developers have experimentally determined that there is basically no level of strong warnings that will dissuade enough people from going forward to connect to what they think is eg Facebook. There are all sorts of reasons for this, including the vast prevalence of false positives in security alerts and the barrage of warning messages that we've trained people to click through because they're just in the way in the end.

The security failures of the resulting total system of 'human plus computer system' are in one sense not the fault of the designers of the computer system, any more than it is your fault if you provide people with a saw and careful instructions to use it only on wood and they occasionally saw their own limbs off despite your instructions, warnings, stubbornly attached limb guards, and so on. At the same time, the security failures are an entirely predictable failure of the total system. This has resulted in a major shift in thinking about security, which I will call 'Security 2'.

In Security 2 thinking, it is not good enough to have a secure system if people will wind up operating it insecurely. What matters and the goal that designers must focus on is making the total system operate securely, even in adverse conditions; another way to put this is that the security goal has become protecting people in the real world. As a result, a Security 2 focused designer shouldn't allow security overrides to exist if they know those overrides will wind up being (mis)used in a way that defeats the overall security of the system. It doesn't matter if the misuse is user error on the part of the people using the security system; the result is still an insecure total system and people getting owned and compromised, and the designer has failed.

Security 2 systems are designed not necessarily so much to be easy to use as to be hard or impossible to screw up in such a way that you get owned (although often this means making them easy to use too). For example, automatic, always-on end-to-end encryption of messages in an instant messaging system is a Security 2 feature; end-to-end encryption that is optional and must be selected or turned on by hand is a Security 1 feature.

Part of the browser shift to a Security 2 mindset has been to increasingly disallow any and all ways to override core security precautions, including being willing to listen to websites over users when it comes to TLS failures. This is pretty much what I'd expect from a modern Security 2 design, given what we know about actual user behavior.

(The Security 2 mindset raises serious issues when it intersects with user control over their own devices and software, because it more or less inherently involves removing some of that control. For example, I cannot tell modern versions of Firefox to do my bidding over some TLS failures without rebuilding them from source with increasing amounts of hackery applied.)

UnderstandingModernSecurity written at 23:03:58

2016-05-29

What does 'success' mean for a research operating system?

Sometimes people talk about how successful (or not successful) an operating system has been, when that operating system was created as a research project instead of a product. One of the issues here is that there are several different things that people can mean by a research OS being a success. In particular, I think that there are at least four levels of it:

  • The OS actually works and thus serves as a proof of concept for the underlying ideas that motivated this particular research OS variation. What 'works' means may vary somewhat, since research projects rarely reach production status; generally you get some demos running acceptably fast.

    Having your research OS actually work is about the baseline definition of success. It means that your ideas don't conflict with each other, can be made to work acceptably, and don't require big compromises to be implemented.

  • The OS works well enough and is attractive enough that people in your research group can and do build things on it and actively use it. If it's a general purpose OS, people voluntarily and productively use it for everyday activity; if it's a specialized real time or whatever OS, people voluntarily build their own projects on top of it and have them work.

    A research OS that has reached this sort of success is more than just a technology demonstration and proving ground. It can do real things.

  • At least some of your OS's ideas are attractive enough that they get implemented in other OSes or at least clearly influence the development of other OSes. This is especially so if your ideas propagate to production OSes in some form or other (often in a somewhat modified and less pure form, because that's just how things go).

    (As anyone who's familiar with academic research knows, a lot of research is basically not particularly influential. Being influential means you've achieved more success than usual.)

  • Some form of your research OS winds up being used by outside people to do real work; it becomes a 'success' in the sense of 'it is out in the real world doing things'. Sometimes this is your OS relatively straight, sometimes it's a heavily adapted version of your work, and I'm sure that there have been cases where companies took the ideas and redid the implementation.

Most research OSes reach the first level of success, or at least most that you ever hear about (the research community rarely publishes negative results, among other issues). Or at least they reach the appearance of it; there may be all sorts of warts under the surface in practice in terms of performance, reliability, and so on. On the other hand some research OSes are genuine attempts to achieve genuinely usable, reliable, and performant results in order to demonstrate that their ideas are not merely possible but are actively practical.

It's quite rare for a research OS to reach the fourth level of success of making it into the real world. There are not many 'real world' OSes in the first place and there are very large practical obstacles in the way. To put it one way, there is a lot of non-research work involved in making something a product (even a free one).

(In general purpose OSes, I think only two research OSes have made a truly successful jump into the real world from the 1970s onwards, although it's probably been tried with a few more. I don't know enough about the real time and embedded computing worlds to have an idea there.)

SuccessForResearchOSes written at 01:15:23

2016-05-14

IPv6 is the future of the Internet

I say, have said, and will say a lot of negative things about IPv6 deployment and usability. I'm on record as believing that large scale IPv6 usage will cause lots of problems in the field, with all sorts of weird failures and broken software (and some software that is not broken as such but is IPv4 only), and that in practice lots of people will be very slow to update to IPv6 and there will be plenty of IPv4 only places for, oh, the next decade or more.

But let me say something explicitly: despite all that, I believe that IPv6 is the inevitable future of the Internet. IPv6 solves real problems, those problems are getting more acute over time, the deployment momentum is there, and sooner or later people will upgrade. I don't have any idea of how soon this will happen ('not soon' is probably still a good bet), but over time it's clear that more and more traffic on the Internet will be IPv6, despite all of the warts and pain involved. The transition will be slow, but at this point I believe it's long since become inevitable.

(Whether different design and deployment decisions could have made it happen faster is an academically interesting question but probably not one that can really be answered today, although I have my doubts.)

This doesn't mean that I'm suddenly going to go all in on moving to IPv6. I still have all my old cautions and reluctance about that. I continue to think that the shift will be a bumpy road and I'm not eager to rush into it. But I do think that I should probably be working more on it than I currently am. I would like not to be on the trailing edge, and sooner or later there are going to be IPv6 only services that I want to use.

(IPv6 only websites and other services are probably inevitable but I don't know how soon we can expect them. Anything popular going IPv6 only will probably be a sign of the trailing edge of the transition, but I wouldn't be surprised to see a certain sort of tech-oriented website go IPv6 only earlier than that as a way of making a point.)

As a result, I now feel that I should be working to move my software and my environment towards using IPv6, or at least being something that I can make IPv6 enabled. In part this means looking at programs and systems I'm using that are IPv4 only and considering what to do about them. Hopefully it will also mean making a conscious effort not to write IPv4 only code in the future, even if that code is easier.

(I would say 'old programs', but I have recently written something that's sort of implicitly IPv4 only because it contains embedded assumptions about eg doing DNS blocklist lookups.)
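
The difference between IPv4 only code and version-agnostic code is often only a few lines. As a minimal sketch of the 'don't assume IPv4' approach in Python, you let getaddrinfo() hand back both AAAA and A results and try them in order instead of calling IPv4 only interfaces:

    import socket

    def connect_tcp(host, port):
        # getaddrinfo returns both IPv6 and IPv4 results (in the
        # resolver's preferred order); try each one until one works.
        last_err = OSError("no addresses found for %r" % (host,))
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            sock = socket.socket(family, socktype, proto)
            try:
                sock.connect(addr)
                return sock
            except OSError as err:
                sock.close()
                last_err = err
        raise last_err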

Probably I should attempt to embark on another program of learning about IPv6. I've tried that before, but it's proven to have the same issue for me as learning computer languages; without an actual concrete project, I just can't feel motivated about learning the intricacies of IPv6 DHCP and route discovery and this and that and the other. But probably I can look into DNS blocklists in the world of IPv6 and similar things; I do have a project that could use that knowledge.

IPv6IsTheFuture written at 00:20:40

2016-05-08

Issues in fair share scheduling of RAM via resident set sizes

Yesterday I talked about how fair share allocation of things needs a dynamic situation and how memory was not necessarily all that dynamic and flow-based. One possible approach to fair share allocation of memory is to do it on Resident Set Size. If you look at things from the right angle, RSS is sort of a flow in that the kernel and user programs already push it back and forth dynamically.

(Let's ignore all of the complications introduced on modern systems by memory sharing.)

While there has been some work on various fair share approaches to RSS, I think that one issue limiting the appeal here is that significantly constraining RSS often has significant undesirable side effects. Every program has a 'natural' RSS, which is the RSS at which it only infrequently or rarely has to ask for something that's been removed from its set of active memory. If you clamp a program's RSS below this value (and actually evict things from RAM), the program will start trying to page memory back in at a steadily increasing rate. Eventually you can clamp the program's RSS so low that it makes very little forward progress in between all of the page-ins of things it needs.

Up until very recently, all of this page-in activity had another serious effect: it ate up a lot of IO bandwidth to your disks. More exactly, it tended to eat up your very limited random IO capacity, since these sort of page-ins are often random IO. So if you pushed a program into having a small enough RSS, the resulting paging would kill the ability of pretty much all programs to get IO done. This wasn't technically swap death, but it might as well have been. To escape this, the kernel probably needs to limit not just the RSS but also the paging rate; a program that was paging heavily would wind up going to sleep more and more of the time in order to keep its load impact down.

(These days a SSD based system might have enough IO bandwidth and IOPS to not care about this.)
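
As a purely toy sketch of the 'limit the paging rate too' idea (the Proc fields here are invented stand-ins for whatever a real kernel would track per process, not any actual interface):

    class Proc:
        def __init__(self):
            self.page_ins = 0      # page-ins observed this interval
            self.sleep_for = 0.0   # enforced sleep imposed next interval

    def throttle_paging(proc, budget, interval=1.0):
        # If the RSS-clamped process paged in faster than its budget, make
        # it sleep for a proportionally larger slice of the next interval,
        # bounding its (mostly random) IO impact on everyone else.
        rate = proc.page_ins / interval
        if rate > budget:
            proc.sleep_for = min(interval, interval * (1.0 - budget / rate))
        else:
            proc.sleep_for = 0.0
        proc.page_ins = 0          # start counting the next interval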

All of this is doable but it's also complicated, and it doesn't get you the sort of more or less obviously right results that fair share CPU scheduling does. I suspect that this has made fair share RSS allocation much less attractive than simpler things like CPU scheduling.

FairShareRSSProblems written at 01:15:58

2016-05-07

'Fair share' scheduling pretty much requires a dynamic situation

When I was writing about fair share scheduling with systemd the other day, I rambled in passing about how I wished Linux had fair share memory allocation. Considering what fair share memory allocation would involve set off a cascade of actual thought, and so today I have what is probably an obvious observation.

In general what we mean by 'fair share' scheduling or allocation is something where your share of a resource is not statically assigned but is instead dynamically determined based on how much other people also want. Rather than saying that you get, say, 25% of the network bandwidth, we say that you get 1/Nth of it where N is how many consumers want network bandwidth. Fair share scheduling is attractive because it's 'fair' (no one gets to over-consume a resource), it doesn't require setting hard caps or allocations in advance, and it responds to usage on the fly.

But given this, fair share scheduling really needs to be about something dynamic, something that can easily be adjusted on the fly from moment to moment and where current decisions are in no way permanent. Put another way, fair share scheduling wants to be about dividing up flows; the flow of CPU time, the flow of disk bandwidth, the flow of network bandwidth, and so on. Flows are easy to adjust; you (the fair share allocator) just give the consumers more or less this time around. If more consumers show up, the part of the flow that everyone gets becomes smaller; if consumers go away, the remaining ones get a larger share of the flow. The dynamic nature of the resource (or of the use of the resource) means that you can always easily reconsider and change how much of it the consumers get.
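
To make the 'dividing up a flow' point concrete, here's a trivial sketch: shares of a flow-like resource are simply recomputed from scratch every interval, so consumers arriving or leaving immediately changes what everyone gets and no earlier decision is permanent.

    def fair_shares(total, consumers):
        # each interval, every active consumer simply gets 1/Nth of the flow
        if not consumers:
            return {}
        return {c: total / len(consumers) for c in consumers}

    print(fair_shares(100, ["a", "b"]))       # {'a': 50.0, 'b': 50.0}
    print(fair_shares(100, ["a", "b", "c"]))  # everyone drops to ~33.3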

If you don't have something that's dynamic like this, well, I don't think that fair share scheduling or allocation is going to be very practical. If adjusting current allocations is hard or ineffective (or even just slow), you can't respond very well to consumers coming and going and thus the 'fair share' of the resource changing.

The bad news here is pretty simple: memory is not very much of a flow. Nor is, say, disk space. With relatively little dynamic flow nature to allocations of these things, they don't strike me as things where fair share scheduling is going to be very successful.

FairShareAllocationAndFlows written at 01:27:40

