Wandering Thoughts archives

2018-05-20

'Minimal version selection' accepts that semantic versioning is fallible

Go has been quietly wrestling with package versioning for a long time. Recently, Russ Cox brought forward a proposal for it; one of the novel things about his proposal is what he calls 'minimal version selection' (MVS), which I believe has been somewhat controversial.

In package management and versioning, the problem of version selection is deciding which versions of your package's dependencies you'll use. If your package depends on another package A, and you say your minimum version of A is 1.1.0, and package A is available in 1.0.0, 1.1.0, 1.1.5, 1.2.0, and 2.0.0, version selection is picking one of those versions. Most package systems will pick the highest version available within some set of semantic versioning constraints; generally this means either 1.1.5 or 1.2.0 (but not 2.0.0, because the major version change is assumed to mean API incompatibilities exist). In MVS, you short-circuit all of this by picking the minimum version allowed; here, you would pick 1.1.0.
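(To make the difference concrete, here's a little toy sketch in Go of the single-dependency case above. This is purely illustrative; it's not the real vgo/modules code or the full MVS algorithm over a dependency graph, and all of the names in it are made up for this example.)

    // Toy illustration of version selection; not the real Go modules code,
    // and all of the names here are invented for this example.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // version is a parsed semantic version: major, minor, patch.
    type version [3]int

    func parse(s string) version {
        var v version
        for i, part := range strings.SplitN(s, ".", 3) {
            v[i], _ = strconv.Atoi(part)
        }
        return v
    }

    // less reports whether a is a lower version than b.
    func less(a, b version) bool {
        for i := range a {
            if a[i] != b[i] {
                return a[i] < b[i]
            }
        }
        return false
    }

    // maximalSelect is the usual 'trust semver' approach: take the highest
    // available version that is at least minimum and shares its major version.
    func maximalSelect(available []string, minimum string) string {
        m := parse(minimum)
        best := minimum
        for _, s := range available {
            v := parse(s)
            if v[0] == m[0] && !less(v, m) && less(parse(best), v) {
                best = s
            }
        }
        return best
    }

    // minimalSelect is the MVS answer: use the minimum version you asked
    // for, provided that it's actually available.
    func minimalSelect(available []string, minimum string) string {
        for _, s := range available {
            if s == minimum {
                return s
            }
        }
        return "" // not available; a real tool would report an error
    }

    func main() {
        available := []string{"1.0.0", "1.1.0", "1.1.5", "1.2.0", "2.0.0"}
        fmt.Println(maximalSelect(available, "1.1.0")) // prints 1.2.0
        fmt.Println(minimalSelect(available, "1.1.0")) // prints 1.1.0
    }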

People have had various reactions to MVS, but as a grumpy sysadmin my reaction is positive, for a simple reason. As I see it, MVS is a tacit acceptance that semantic versioning is not perfect and fails often enough that we can't blindly rely on it. Why do I say this? Well, that's straightforward. The original version number (our minimum requirement) is the best information we have about what version the package will definitely work with. Any scheme that advances the version number is relying on that new version to be sufficiently compatible with the original version that it can be substituted for it; in other words, it's counting on people to have completely reliably followed semantic versioning.

The reality of life is that this doesn't happen all of the time. Sometimes mistakes are made; sometimes people have a different understanding of what semantic versioning means, because semantic versioning is ultimately a social thing, not a technical one. In an environment where semver is not infallible (i.e., in the real world), MVS is our best option for reliably selecting package versions with the highest likelihood of working.

(Some package management systems arrange to also record one or more 'known to work' package version sets. I happen to think that MVS is more straightforward than such two-sided schemes for various reasons, including practical experience with some Rust stuff.)

I understand that MVS is not very aesthetic. People really want semver to work and to be able to transparently take advantage of it working (and I agree that it would be great if it did work). But as a grumpy sysadmin, I have seen a non-zero amount of semver not working in these situations, and I would rather have things that I can build reliably even if they are not using all of the latest sexy bits.

programming/FallibleSemverAndMVS written at 22:30:49

Modern CPU power usage varies unpredictably based on what you're doing

I have both an AMD machine and an Intel machine, both of them using comparable CPUs that are rated at 95 watts TDP (although that's misleading), and I gathered apples-to-apples power consumption numbers for them. In the process I discovered a number of anomalies in relative power usage between the two CPUs. As a result I've wound up with the obvious realization that modern CPUs have complicated and unpredictable power usage (in addition to all of the other complicated things about them).

In the old days, it was possible to have a relatively straightforward view of how CPU usage related to power draw, where all you really needed to care about was how many CPUs were in use and maybe whether it was integer or floating point code. Not only is that clearly no longer the case, but the factors that change power usage vary from CPU model to CPU model. My power consumption numbers show one CPU to CPU anomaly right away, where an infinite loop in two different shells has one shell using more power on a Ryzen 1800X and the other shell using more power on an i7-8700K. The two shells are running the same code on both CPUs and each shell's code is likely to be broadly similar to the other's, but the CPUs are responding to it quite differently, especially when the code is running on all of the CPUs.

Beyond this anomaly, this simple 'infinite shell loop' power measurement also showed a different (and higher) power usage than a simple integer loop in Go. I can make up theories for why, but it's clear that even if you restrict yourself to integer code, a simple artificial chunk of code may not have anywhere near the same power usage as more complex real code. The factors influencing this are unlikely to be simple, and they also clearly vary from CPU to CPU. 'Measure your real code' has always been good advice, but it clearly matters more than ever today if you care about power usage.
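(The 'simple integer loop in Go' here is nothing sophisticated; something along the lines of the following sketch, which spins one goroutine per CPU doing pure integer work, captures the flavor of it, although it's not necessarily the exact code behind the measurements.)

    // A trivial all-core integer busy loop of the sort used as a simple
    // CPU load; an illustrative sketch, not the exact code measured.
    package main

    import "runtime"

    // spin does pure integer work forever, touching essentially no memory.
    func spin() {
        var n uint64
        for {
            n++
        }
    }

    func main() {
        // Start one spinning goroutine per CPU so every core is kept busy.
        for i := 0; i < runtime.NumCPU(); i++ {
            go spin()
        }
        select {} // block forever; stop the load with Ctrl-C
    }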

(The corollary of 'measure your real code' is probably that you have to measure real usage too; otherwise you may be running into something like my Bash versus rc effect. This may not be entirely easy, to put it one way.)

It's not news these days that floating point operations and especially the various SIMD instructions such as AVX and AVX-512 use more power than basic integer operations; that's why people reach for mprime as a heavy-duty CPU stress test, instead of just running integer code. MPrime's stress test itself is a series of different tests, and it will probably not surprise you to hear that which specific tests seemed to use the most power varied between my AMD Ryzen 1800X machine and my Intel i7-8700K machine. I don't know enough about MPrime's operation to know if the specific tests differ in what CPU operations they use or only in how much memory they use and how they stride through memory.

(One of the interesting differences was that on my i7-8700K, the test series that was said to use the most power seemed to use less power than the 'maximum heat and FPU stress' tests. But it's hard to say too much about this, since power usage could swing drastically from sub-test to sub-test; I saw swings of 20 to 30 watts, which does make reporting a single 'mprime power consumption' number a bit misleading.)

Trying to monitor the specific power usage of MPrime sub-tests is about where I decided both that I'd run out of patience and that the specific details were unlikely to be interesting. It's clear that what uses more or less power varies significantly between the Ryzen 1800X system and the i7-8700K system, and really that's all I need to know. I suspect that it basically varies with every CPU micro-architecture, although I wouldn't be surprised if each company's CPUs broadly resemble one another (on the basis that their micro-architectures and design priorities are probably often similar).

PS: Since I was measuring system power usage, it's possible that some of this comes from the BIOS deciding to vary CPU and system fan speeds, with faster fan speeds causing more power consumption. But I suspect that fan speed differences don't account for all of the power draw difference.

tech/VaryingCPUPowerDraws written at 01:13:11

