Wandering Thoughts archives

2019-04-22

Go 2 Generics: The usefulness of requiring minimal contracts

I was recently reading Ole Bulbuk's Why Go Contracts Are A Bad Idea In The Light Of A Changing Go Community (via). One of the things it suggests is that people will write generics contracts in a very brute-force fashion, copying and pasting as much of their existing function bodies into the contract's body as possible (the article's author provides some examples of how people might do this). As much as the idea makes me cringe, I have to admit that I can see how and why it might happen; as Ole Bulbuk notes, it's the easiest way for a pragmatic programmer to work. However, I believe that it's possible to avoid this, and to do so in a way that is beneficial to Go and Go programmers in general. To do so, we will need both a carrot and a stick.

The carrot is a program similar to gofmt which rewrites contracts into the accepted canonical minimal form; possibly it should even be part of what 'gofmt -s' does in a Go 2 with generics. Since contracts are so flexible and thus so variable, I feel that rewriting them into a canonical form is generally useful for much the same reasons that gofmt is useful. You don't have to use the canonical form of a contract, but contracts in canonical form will likely be easier to read (if only because everyone will be familiar with it) and easier to compare with each other. Such rewriting is a bit more extreme than what gofmt does, since we are going from syntax to semantics and then back to a canonical syntax for those semantics, but I believe it's likely to be possible.
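To make this concrete, here is a sketch in the syntax of the first draft design (contracts were never implemented, so none of this compiles with any released Go toolchain; the contract and function names are my own illustration). The brute-force contract pastes in the function body; the canonical minimal form keeps only the one statement that actually restricts the type:

```
// Brute-force: the function body copied wholesale into the contract.
contract stringer(x T) {
	var sb strings.Builder
	sb.WriteString(x.String())
}

// Canonical minimal form: the only real restriction on T is that it
// has a String() string method, so that is all the contract says.
contract stringer(x T) {
	var s string = x.String()
}

// A generic function using the contract, in the draft syntax.
func Stringify(type T stringer)(s []T) string {
	var sb strings.Builder
	for _, v := range s {
		sb.WriteString(v.String())
	}
	return sb.String()
}
```

Both contracts accept exactly the same set of types; the hypothetical contract minimizer would rewrite the first into the second.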

(I think it would be a significant danger sign for contracts if this is not possible or if the community strongly disagrees about what the canonical form for a particular type restriction should be. If we cannot write and accept a gofmt for contracts, something is wrong.)

The stick is that Go 2 should make it a compile time error to include statements in a contract that are not syntactically necessary and that do not add any additional restriction to what types the contract will accept. If you throw in restrict-nothing statements that are copied from a function body and insist that they stay, your contract does not compile. If you want your contract to compile, you run the contract minimizer program and it fixes the problem for you by taking them out. I feel that this is in the same spirit as requiring all imports to be used (and then providing goimports). In general, future people, including your future self, should not have to wonder if some statement in a contract was intended to create some type restriction but accidentally didn't, and you didn't notice because your current implementation of the generic code didn't actually require it. Things in contracts should either be meaningful or not present at all.
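A restrict-nothing statement might look like this, again in the draft syntax and as my own illustration:

```
contract viaString(x T) {
	var s string = x.String() // restricts T: must have a String() string method
	for i := 0; i < 3; i++ {  // copied from some function body; this loop is
		_ = s                  // legal for every T, so it restricts nothing and
	}                         // under this rule would be a compile time error
}
```

The loop never touches x in a way that depends on T, so deleting it leaves the contract accepting exactly the same types.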

To be clear here, this is not the same as a contract element that is not used in the current implementation. Those always should be legal, because you always should be able to write a contract that is more strict and more limited than you actually need today. Such a more restrictive contract is like a limited Go interface; it preserves your flexibility to change things later. This is purely about an element of the contract that does not add some extra constraint on the types that the contract accepts.

(You can pretty much always relax the restrictions of an existing contract without breaking API compatibility, because the new looser version will still accept all of the types it used to. Tightening the restrictions is not necessarily API compatible, because the new, more restricted contract may not accept some existing types that people are currently using it with.)
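As a sketch of why loosening is safe (draft syntax, hypothetical contract name):

```
// v1: the stricter contract requires both a String method and ==.
contract keyish(x T) {
	var s string = x.String()
	var _ bool = x == x
}

// v2: the relaxed contract drops the == requirement. Every type that
// satisfied v1 still satisfies v2, so all existing users keep working.
// Going from v2 back to v1 is the breaking direction: types without
// == that people adopted under v2 would stop compiling.
contract keyish(x T) {
	var s string = x.String()
}
```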

PS: I believe that there should be a gofmt for contracts even if their eventual form is less clever than the first draft proposal, unless the eventual form of contracts is so restricted that there is already only one way to express any particular type restriction.

programming/Go2RequireMinimalContracts written at 22:13:08

You might as well get an x86 CPU now, despite Meltdown and its friends

A year or so ago I wrote an entry about how Meltdown and Spectre had made it a bad time to get a new x86 CPU, because current CPUs would suffer from expensive mitigations for them and future ones wouldn't. Then I went and bought a new home CPU and machine anyway, and as time has passed I've become more and more convinced that I made the right decision. Now I don't think that people should delay getting new x86 CPUs (or any CPUs), at least not unless you're prepared to wait quite a long time.

Put simply, speculative execution attacks have turned out to be worse than I, at least, expected back in the days when Meltdown and Spectre were new. New attacks and attack variations keep getting published and it's not clear that people have any idea how to effectively re-design CPUs to close even the current issues, never mind new ones that researchers keep coming up with. That mythical future CPU that will mitigate most everything with significantly less performance penalty is probably years in the future at this point. I'd expect it to take at least one CPU design cycle after people seem to have stopped discovering new speculative execution attacks, and it might be longer than that (it may take CPU designers some time to work out good mitigations, for example).

So yes, any current x86 CPU you buy will pay a performance penalty to deal with speculative execution problems (assuming that you don't turn the mitigations partially or completely off). But so will future ones, although they'll probably pay a lower penalty. Effectively, new CPUs with improved hardware-based mitigations against speculative execution are now one more source of the modest but steady progress in CPU performance. Like a number of other sources of performance improvements (such as additional special SIMD instructions), the improvements will matter a lot to some people and not very much to others. For desktop and general use, they'll probably be useful but not critical.

(It's even possible that future CPUs will see effective decreases in some aspects of performance. For example, Intel dropped HyperThreading in recent generations of i7 CPUs at the same time as they increased the core count. I don't believe Intel has explicitly linked this to speculative execution issues, but certainly HT makes some of them worse, so dropping HT is an easy mitigation that can also be used to drive sales of higher end CPUs in Intel's usual fashion.)

PS: I'm not even going to guess at the benefits and risks of turning various mitigations off in various cases, especially for desktop use, because it depends on so many factors. Right now I'm going with the Linux and Fedora defaults, because that's the easiest way and I have fast enough CPUs and light enough usage that it hopefully doesn't matter a lot to me (but of course I haven't measured that).
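(For what it's worth, reasonably modern Linux kernels report their view of the current mitigations through sysfs; the exact set of files varies by kernel version, so this is just how to look, not what you'll see:)

```shell
# Each file names one speculative execution issue and reports this
# CPU and kernel's mitigation status, e.g. "Mitigation: PTI".
# '|| true' keeps the exit status clean on kernels without the files.
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null || true
```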

tech/MeltdownMightAsWellBuy written at 00:17:23

