Wandering Thoughts

2021-10-26

Go 1.18 will embed source version information into binaries

Since Go 1.13, Go has embedded information about the modules used to build a program into its binary, and has reported it if you used 'go version -m ...' (release notes). This information has historically included a variety of things, such as the path and module that the program was built from, an indication of any module replacements being used, and a marker that the program was built from a source tree instead of with 'go install ...@...'. However, it hasn't included any information about the state of the source tree when you built from source.

In Go 1.18, this is changing due to issue 37475 and issue 35667. To quote from the draft release notes:

The go command now embeds version control information in binaries including the currently checked-out revision and a flag indicating whether edited or untracked files are present. Version control information is embedded if the go command is invoked in a directory within a Git or Mercurial repository, and the main package and its containing main module are in the same repository. This information may be omitted using the flag -buildvcs=false.

Additionally, the go command embeds information about the build including build and tool tags (set with -tags), compiler, assembler, and linker flags (like -gcflags), whether cgo was enabled, and if it was, the values of the cgo environment variables (like CGO_CFLAGS). This information may be omitted using the flag -buildinfo=false. [...]

(I don't know if the module and other information that will be embedded also includes information about the Go workspace you're in, if you're in one.)

At the moment this information is recorded in Go binaries in a form that can be read by Go 1.17's 'go version -m', and probably this also works for earlier versions (as I believe much of this information is actually in the binary as text).
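It looks like programs will also be able to read this information about themselves at run time through runtime/debug.ReadBuildInfo(); as I understand it, in Go 1.18 the new details show up as key/value pairs in a Settings field on the returned BuildInfo. Here is a minimal sketch assuming that API (the exact keys are whatever 'go version -m' shows as 'build' lines):

package main

import (
  "fmt"
  "runtime/debug"
)

func main() {
  // ReadBuildInfo returns the build information embedded in the
  // running binary; it only succeeds for module-mode builds.
  bi, ok := debug.ReadBuildInfo()
  if !ok {
    fmt.Println("no embedded build information")
    return
  }
  fmt.Println("module path:", bi.Path)
  // The new build and VCS details are expected to appear here as
  // key/value pairs, matching the 'build' lines from 'go version -m'.
  for _, s := range bi.Settings {
    fmt.Printf("build %s=%s\n", s.Key, s.Value)
  }
}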

The current information is pretty verbose by default since cgo is generally enabled, and this adds five lines of output, even if the code being compiled didn't actually use it and you have no special environment variables set. I suspect that this won't get reduced before release; the Go team seems to like being comprehensive here.

If nothing else, having this information embedded into binaries will make it harder to lose track of what exactly you're running. Now you have a better chance of figuring out the provenance of some old Go binary that's been running in the corner for who knows how long. People who distribute binaries widely and do funny things when building them may now find that more of those funny things are somewhat visible, though. Well, until they turn these features off with those documented flags.

(As threatened, Go 1.18 will only support building programs in module mode, so this information only coming from 'go version -m' doesn't matter any more.)

Sidebar: What this extra information looks like today

Here is the additional output added by this, as reported by 'go version -m' from the current development version:

       build   compiler        gc
       build   tags    goexperiment.regabiwrappers,goexperiment.regabireflect,goexperiment.regabiargs
       build   CGO_ENABLED     true
       build   CGO_CPPFLAGS
       build   CGO_CFLAGS
       build   CGO_CXXFLAGS
       build   CGO_LDFLAGS
       build   gitrevision     4f8a1b5f197fc69bc1252b32b5a8ed670ff557b6
       build   gituncommitted  false

I wouldn't be surprised if some of the current build tags go away in the released version of Go 1.18, as some experiments are promoted out of that state.

Documentation on the current experiments is in the source code in internal/goexperiment/flags.go, and can be gotten for your particular version of Go with 'go doc goexperiment.Flags'. I'm not sure how you can find out the default set of experiments enabled in your version of Go, but in the source code this seems to be in the 'baseline' variable set in the ParseGOEXPERIMENT() function in internal/buildcfg/exp.go. The current Go 1.18 build tags reported above appear to be this baseline state on amd64.

GoVersionOfYourSource written at 00:09:30; Add Comment

2021-10-20

In the beginning, there was no way to expand C's stack size

There are undoubtedly many reasons that C doesn't have any language or API concept of your stack size. But I think that one factor in this is that in C's original world, that of Unix on the PDP-11, running out of stack size was essentially fatal no matter what you did. The fundamental problem was that there was no way to actually expand the space available for your stack.

Normal programs on the PDP-11 had a 16-bit address space, giving them at most 64 KB of address space for all of their data, including the stack. Unix on the PDP-11 adopted a simple memory model for this address space; your data went at the bottom, followed by the start of the heap, your stack went at the top, and then your heap and your stack grew toward each other without limit (V7 Unix didn't have any kernel limits on your stack size). If you ran out of stack space, it meant that between your stack and your heap you'd used all 64 KB of available address space. You couldn't reallocate a bigger stack or move memory around or anything; you were plain out of address space. The only way of solving this was to have a smaller heap or a smaller stack.

C on V7 Unix on the PDP-11 was a relatively minimal language used for a minimal operating system on a comparatively slow and limited machine. A careful, MIT-approach system might still have had stack size probes on function entry so that it could give better diagnostics in the unlikely event that you ran out of stack space. C on V7 did not, and this propagated into the C culture as part of C's role as a 'portable assembly', one that did very little more than what you explicitly programmed yourself.

(In V7, you did get an error return if you ran out of heap space by trying to grow the heap into the stack (cf). This was possible because growing the heap required an explicit system call, while the stack grew implicitly through usage.)

Sidebar: The general cultural issue

I am not Hillel Wayne so I haven't researched this, but my impression is that very few 1960s and 1970s languages of any sort had any explicit API for the stack and stack sizes, and this was especially the case for languages intended for low level programming. Even today there are generally few languages with any sort of explicit API for things like 'how much stack space am I using', 'what is my stack space limit', and especially 'how much stack space will calling this thing use'. The stack is much more magic than the heap and most everyone continues to pretend that it will always just work.

(C was not the only low level language from the 1960s and 70s, merely the only one that thrived and is well known today. See, for example, BLISS.)

CStackOnceNoExpansion written at 22:22:48; Add Comment

2021-09-27

Stack size is invisible in C and the effects on "portability"

Somewhat recently I read Ariadne Conill's understanding thread stack sizes and how alpine is different (via), which is in part about how Alpine Linux has a very low default thread stack size, unlike many other environments, and how this can cause program crashes. As part of this, Conill says:

In general, it is my opinion that if your program is crashing on Alpine, it is because your program is dependent on behavior that is not guaranteed to actually exist, which means your program is not actually portable. When it comes to this kind of dependency, the typical issue has to deal with the thread stack size limit.

Conill also sort of calls out glibc-based Linux for having by far the largest default thread stack size at 8 MiB, and says:

[...] This leads to crashes in code which assumes a full 8MiB is available for each thread to use.

The practical problem with this view is that stack size is invisible in C, and especially it's not part of the portable C API and generally not part of either the platform API or ABI. Unlike malloc(), which can at least officially fail, the stack is magic; your code can neither guard against hitting its size limit nor check its limits in any portable way. Nor can you portably measure how much stack size you're using or determine how much stack size it may require to call library functions (this is part of how the C library API is under-specified).

If a limitation exists but its exact parameters are invisible to you, running into it (and crashing) doesn't make your program "not actually portable" in any pejorative sense; it makes it unfortunate. That your program doesn't run in some limited environments is perhaps not ideal, but it is not particularly your fault.

Also, since there is no (reasonable) way to test or mitigate stack size issues in C, all that both programmers and library implementers can reasonably do is operate by superstition and supposition. In light of this, glibc's decision to use a large default thread stack size is entirely reasonable; it's pretty much the safest choice, especially since glibc makes it the same as the usual default program stack size. Attempting to limit stack space usage without the tools to measure it is probably not as dangerous as trying to optimize your code without doing performance testing, but it's probably not going to yield really good results either.

Some people would like C programmers to be efficient (ie limited) in their use of stack space. Apart from anything else I might feel about this, I will say that it's important for people to be able to measure and monitor anything that you want them to be efficient with. If you want me to minimize my code's power usage but don't provide me with tools to measure that, you aren't likely to get much in practice (and what I do without measurement may make it worse).

(Technically speaking it's possible to assess and measure stack size usage of C code if you try hard enough. For example, you can have a great test suite and conduct binary searches to determine at what thread stack size your code starts to crash under test. Program analysis techniques may also be tempting, but remember that your platform C library probably doesn't have any specific stack usage promises.)

CStackSizeInvisible written at 21:50:10; Add Comment

2021-09-24

Go generics have a new "type sets" way of doing type constraints

Any form of generics needs some way to constrain what types can be used with your generic functions (or generic types with methods), so that you can do useful things with them. The Go team's initial version of their generics proposal famously had a complicated method for this called "contracts", which looked like function bodies with some expressions in them. I (and other people) thought that this was rather too clever. After a lot of feedback, the Go team's revised second and third proposals took a more boring approach; the final design that was proposed and accepted used a version of Go interfaces for this purpose.

Using standard Go interfaces for type constraints has one limitation: because they only define methods, a standard interface can't express important constraints like 'the type must allow me to use < on its values' (or, in general, any operator). In order to deal with this, the "type parameters" proposal that was accepted allowed an addition to standard interfaces. Quoting from the issue's summary:

  • Interface types used as type constraints can have a list of predeclared types; only type arguments that match one of those types satisfy the constraint.
  • Generic functions may only use operations permitted by their type constraints.

Recently this changed to a new, more general, and more complicated approach that goes by the name of "type sets" (see also, and also). The proposal contains a summary of the new state of affairs, which I will quote (from the overview):

  • Interface types used as type constraints can embed additional elements to restrict the set of type arguments that satisfy the constraint:
    • an arbitrary type T restricts to that type
    • an approximation element ~T restricts to all types whose underlying type is T
    • a union element T1 | T2 | ... restricts to any of the listed elements
  • Generic functions may only use operations supported by all the types permitted by the constraint.

Unlike before, these embedded types don't have to be predeclared ones and may be composite types such as maps or structs, although somewhat complicated rules apply.

Type sets are more general and less hard coded than the initial version, so I can see why the generics design has switched over to them. But they're also more complicated (and more verbose), and I worry that they contain a little trap that's ready to bite people in the foot. The problem is that I think you'll almost always want to use an approximation element, ~T, but the arbitrary type element T is the default. If you just list off some types, your generics are limited to exactly those types; you have to remember to add the '~' and then use the underlying type.

My personal view is that using type declarations for predeclared types is a great Go feature, because it leads to greater type safety. I may be using an int for something, but if it's a lexer token or the state of an SMTP connection or the like, I want to make it its own type to save me from mistakes, even if I never define any methods for it. However, if using my own types starts making it harder to use people's generics implementations (because they've forgotten that '~'), I'm being pushed away from it.
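To make the trap concrete, here's a minimal sketch (the constraint, function, and type names are my own, not anything from the proposal):

package main

import "fmt"

// Ordered permits the listed predeclared types and, because of the
// '~', any type whose underlying type is one of them.
type Ordered interface {
  ~int | ~int64 | ~float64 | ~string
}

func Min[T Ordered](a, b T) T {
  if a < b {
    return a
  }
  return b
}

// TokenID is my own named type with an underlying type of int.
type TokenID int

func main() {
  // This compiles because Ordered uses '~int'. If Ordered listed a
  // plain 'int' instead, this call would be rejected even though a
  // TokenID is just an int underneath.
  fmt.Println(Min(TokenID(3), TokenID(7)))
}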

Some of the mistakes of leaving out the '~' will be found early, and I think adding it wouldn't create API problems for existing users, so this may not be a big issue in practice. But I wish that the defaults were the other way around, so that you had to go out of your way to restrict generics to specifically those types with no derived types allowed.

(If you just list some types without using a union element you've most likely just created an unsatisfiable generic type with an empty type set. However you're likely to notice this right away, since presumably you're going to try to use your generics, if only in tests.)

GoGenericsTypeSets written at 00:27:07; Add Comment

2021-09-02

Go multi-module workspace mode, a forthcoming feature in Go 1.18

I watch commits to the Go development repository for various reasons, and sometimes I see interesting changes fly by. One recent one was the merge of a long-running branch called 'dev.cmdgo', which made me curious what this branch was for. It turns out it was for a new feature called multi-module workspaces, which has a proposal document and an open issue with discussions. Although nothing is sure until Go 1.18 is released, it's very likely that this feature will be in 1.18.

The abstract of the proposal document explains the basic idea fairly well:

This proposal describes a new workspace mode in the go command for editing multiple modules. The presence of a go.work file in the working directory or a containing directory will put the go command into workspace mode. The go.work file specifies a set of local modules that comprise a workspace. When invoked in workspace mode, the go command will always select these modules and a consistent set of dependencies.

At the moment, two important things can go in go.work files. The first is Go module replace directives, which apply to all modules in the workspace and override any go.mod replace directives already present. The second is a list of directories of Go modules; these are the modules that are part of the workspace. When a Go file in the workspace imports a package from another module in the workspace, it always comes from whatever is in the directory tree, regardless of what version is nominally being asked for in a go.mod.
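For a concrete picture, here's a sketch of what a small go.work file might look like, using the directive names the implementation currently appears to be settling on (they may differ slightly from the proposal document, and the directories and module path here are invented for illustration):

go 1.18

use (
  ./cmd/frontend
  ./cmd/backend
  ../shared/netlib
)

// A replace here applies to every module in the workspace and overrides
// any replace directives in the individual go.mod files.
replace example.com/some/dependency => ../some/dependency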

(As noted in the proposal, this allows you to create a GOPATH-like setup, among other things.)

If you're going to get much use out of your Go workspace, at least one module needs to be in the directory tree under your go.work, but as the proposal covers, you don't have to put all directories there. You can point to other modules anywhere in the filesystem, enabling various things. However, normally many or all of your listed directories will be in the directory tree with your go.work in it.

The proposal document covers some official usage cases for this feature (in the Rationale section), such as making changes to several separate modules at once. Previously this might require a cascade of replace directives in go.mod files, but now it can be done centrally, in one place, in a way that says what you mean. I can also already see other uses, such as creating a self-contained build area for your Go programs when you have different programs in different modules.

(With a single module you can vendor all your external dependencies, but with several programs in several separate modules you'd normally have to vendor repeatedly, and then worry about keeping them all consistent.)

(At some point people will write better articles about all of this, with practical examples, but since I found and read the proposal out of curiosity, I'm writing down this bare bones version now.)

GoWorkspacesComing written at 23:19:27; Add Comment

2021-08-31

Go doesn't have a stack the way that some other languages do

Go often looks like a relatively low level language, both through its syntactic similarity to C and because it has things like explicit pointers. However, in some ways this appearance can be deceptive and intuitions from languages like C can be incorrect (although not usually dangerous). One of those intuitions is about the role of the stack. This is because Go (the language) doesn't really have a stack (you can search the specification to see this; there's no mention of one). Go implementations will usually have a stack, but it's an implementation detail and some things don't behave like you might expect. In particular, in Go, local variables in a function aren't necessarily allocated on the stack.

Go certainly tries to allocate things on the stack when it can, including local variables, but this is because it's fast to allocate and deallocate stack space, not because of any semantic need. Local variables in Go just have to follow the scoping rules, at least as far as their actual use goes. Whether any particular local variable is actually allocated on the stack or is allocated on the heap depends on what your Go compiler can determine it needs to do in order to give you proper semantic behavior. This has changed over time, especially as the main Go compiler has gotten smarter and smarter about escape analysis, to the point where sometimes it has to be explicitly defeated.

(In theory there might be no semantic obstacle to stack allocating even a package level variable under suitable circumstances. In practice I think the Go compiler is unlikely to ever do that, partly because it makes things too confusing for debugging.)

This may go both ways. At least in theory, Go could choose to heap allocate sufficiently large local variables even if they don't escape, or do something extra special to allocate them so that they were outside of the stack but got released when their scope and lifetime was up. I don't know if the current Go compiler (well, compilers) have such a limit, or if they will allocate even gigantic things on a function's stack (which is growable and so theoretically has all the space it needs).

In general, taking the address of a local variable means that the local variable will have to be preserved as long as the pointer is possibly valid. Often this means that the local variable will be heap allocated (and so every call to the function will create a local variable with a different address). However, Go's escape analysis is sufficiently sophisticated that merely taking the address of a local variable doesn't force it to be heap allocated; it depends on what you do with it.

As a demonstration of this, consider the following two functions:

package main

import (
  "fmt"
  "unsafe"
)

func demo1(a int) {
  i := a
  ip := &i
  fmt.Println("demo1", a, "addr", uintptr(unsafe.Pointer(ip)))
}

func demo2(a int) *int {
  i := a
  ip := &i
  fmt.Println("demo2", a, "addr", uintptr(unsafe.Pointer(ip)))
  return ip
}
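A minimal harness to exercise both of them (this wrapper is my own addition, not part of the original example):

func main() {
  demo1(1)
  demo1(2)
  demo2(3)
  demo2(4)
}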

If you try these out, for example in the Go playground, you'll find that demo1() prints the same address on every call while demo2() currently prints a different one for consecutive calls even if you throw away its return value. However (and again currently), if you pass ip directly to fmt.Println(), even demo1() prints different addresses on consecutive calls.

(I believe that ip doesn't escape from the fmt.Println() call stack, but the Go compiler currently won't determine that and so considers ip to escape, which means i must be heap allocated and then later garbage collected. Go does see through some use of pointers in called functions, as shown by the efforts Go has to go through to defeat escape analysis.)
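(If you want to see what the compiler decided for your own code, I believe you can ask the gc toolchain to report its escape analysis with something like 'go build -gcflags=-m', which prints lines such as 'moved to heap: i' for variables that wind up heap allocated.)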

GoStackIsADetail written at 21:23:42; Add Comment

2021-08-29

In Go, pointers (mostly) don't go with slices in practice

When I wrote about why it matters that map values are unaddressable in Go, there was a set of Twitter replies from Sean Barrett:

Knowing none of the details & not being a go programmer, I would have guessed that map values aren't addressable because they're in a dynamically-sized hash table so they need to get relocated behind the user's back; getting the address of a value slot would break that.

But I'd also have assumed Go has dynamically-extensible arrays, and the same argument would apply in that case, so maybe not?

This sparked an article about how Go maps store their values and keys, so today I'm writing about the second part of Barrett's reply, about "dynamically-extensible arrays", because the situation here in Go is peculiar (especially from the perspective of a C or C++ programmer trying to extend their intuitions to Go). Put simply, Go has pointers and it has something like dynamically extensible arrays, but in practice you can't use pointers to slices or slice elements. Trying to combine the two is a recipe for pain, confusion, and weird problems.

On the surface, things look straightforward. The Go version of dynamically extensible arrays are slices. Slices and elements of slices are among Go's addressable values, so both of the following pointer creations are legal:

var s []int
s = append(s, 10, 20, 30)
// pointer to a slice element
//  and the slice
pe := &s[0]
ps := &s

At this moment you can dereference pe and ps and get the results you expect, including if you modify s[0] with eg 's[0] = 100'. Where things go off the rails is if you do anything else with the slice s, such as:

s = append(s, 50)
// return the slice from a function
return s

There are two problems. The first problem, possibly exposed by the append(), is that slice elements actually live in an anonymous backing array. Modifying the size of a slice (such as by appending another element to it) may create a new version of this anonymous backing array, and when the array is reallocated, any pointers to the old one aren't updated to point to the new one and so won't see any changes to it. So if you have the following code:

pe = &s[0]
s = append(s, 50)
s[0] = 100

The value of '*pe' may or may not now be 100, depending on whether the append() created a new version of the backing array.
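Here's a minimal runnable sketch of this (the initial capacity is deliberately 1 so that the append() is forced to allocate a new backing array):

package main

import "fmt"

func main() {
  s := make([]int, 1, 1) // length 1, capacity 1
  s[0] = 10
  pe := &s[0]
  s = append(s, 50) // must grow, so a new backing array is allocated
  s[0] = 100
  // pe still points into the old backing array, so this prints "10 100".
  fmt.Println(*pe, s[0])
}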

The second problem is that slices themselves are passed, returned, and copied by value, which doesn't quite do what you might think because slices are lightweight things. A slice is a length, a reference to the anonymous backing array, and a capacity. Copying the slice copies these three, but doesn't copy the anonymous backing array itself, which means that many slices can refer to the same anonymous backing array (and yes this can get confusing and create fun problems).
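To make that concrete, here's a sketch of the structure behind a slice value; the real definition is internal to the runtime, and the unsafe peeking is purely for illustration:

package main

import (
  "fmt"
  "unsafe"
)

// sliceHeader is a sketch of what a slice value is under the hood.
type sliceHeader struct {
  array unsafe.Pointer // the anonymous backing array
  len   int            // current length in elements
  cap   int            // capacity of the backing array
}

func main() {
  s := []int{10, 20, 30}
  // Reinterpret a real slice as our sketch of its header.
  hdr := (*sliceHeader)(unsafe.Pointer(&s))
  fmt.Println(hdr.len, hdr.cap) // prints "3 3"
}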

When you take a pointer to a slice, you get a pointer to the current version of this tuple of information for the slice. This pointer may or may not refer to a slice that anyone else is using; for instance:

ps := &s
s = append(s, 50)

At this point, '*ps' may or may not be the same thing as 's', and so it might or might not have the new '50' element at the end. The more time that passes between taking a pointer to a slice and the slice being further manipulated, the less likely it is that 'ps' points to anything useful. If the slice 's' is returned from a function the return is copy by value, and so ps definitely no longer points to the live slice that the caller is using, although ps might have the same length and refer to the same anonymous backing array.

Update: It's been pointed out that this isn't true in the limited example here. In Go, variables like s are storage locations, so although the append() may return a different slice value, this different value will overwrite the old one in s and the ps pointer will still point to the current version of the slice. However, this isn't the case if the append() happens in a different function (after you either return s or pass it to a function as an argument).

This leads to the situation mentioned on Twitter by Tom Cheng:

> regular GC keeps pointers to the old version alive if necessary.

Wait... what? so if i get a pointer into an array, then resize the array, then get a pointer to the same index, i'll have 2 valid pointers to 2 completely different objects??

(For 'array', read 'dynamically extensible array', so a slice. The answer is yes.)

It's possible to use pointers to slices or to slice elements in limited situations, if you're very careful with what you're doing with them (or know exactly what you're doing and why). But in general, pointers to slices and slice elements don't do what you want.

Honestly, this is a strange and peculiar situation, although Go programmers have acclimatized to it. To programmers from other languages, such as C or C++, the concept of pointers to dynamically extensible arrays seems like a perfectly decent idea that surely should exist and work in Go. Well, it exists, and it "works" in the sense that it yields results and doesn't crash your program, but it doesn't "work" in the sense of doing what you'd actually want.

GoSlicesVsPointers written at 00:42:58; Add Comment

2021-08-19

Configuration (and configuration files) is not and cannot be generic

In a comment on my entry on (not) using YAML for human-written configuration files, Wolfgang Richter asked what I thought of Dhall as a standard for configuration. I'm afraid that I have to rain on everyone's parade: I don't believe that any generic and general language can be used for configuration files for programs that need complex configuration (setting aside some problems with using languages, like reasoning about the result and the general lack of clarity).

Programs with complex configuration needs are almost always expressing some sort of logic about some domain (and sometimes more than one), whether this is conditional logic or description of transformations or something else. Both the logic of what can be expressed and done and the terms and elements of the domain are specific and custom to the program. This is what you see in firewall rules, whether OpenBSD PF or Linux's various generations of firewall systems, in Exim (in SMTP ACLs, routers, and other elements), in Prometheus for recording and alert rules, label rewriting, and more, and even in Apache once you start using its full capabilities.

Pretty much by definition, a generic, general language doesn't have either the specific logic and restrictions or the specific terms and elements of the program's domain. As such you cannot express this complex configuration directly in the language. Either you embed strings or data structures representing this logic in your general language (the YAML approach) or you use the language to verbosely fake the logic and terms with things like (apparent) subroutine calls (the configuring in a real language approach). In both cases the result lacks readability, clarity, and often error checking. You clearly get the most readable and clear configuration file from a (good) custom configuration language that lets you directly express the program's logic about its domain.

(With that said, it's quite possible to create custom configuration languages that aren't very good at this. Language design is a skill and configuration files are far too often designed for the program to consume more than for people to read.)

Programmers love generality, and to some degree don't want to do language design, so I understand the eternal appeal of some universal language for program configurations. But no universal language like Dhall can be a genuinely good configuration language for a program with complex configuration needs.

(People who feel that their general configuration language of choice can be this are invited to write a moderately complicated Apache virtual host configuration (with several sets of permissions and proxying for different URLs and sub-URLs, and don't forget Let's Encrypt's HTTP authentication) or a set of OpenBSD PF rules in their language, and see how it comes out. It would make for some interesting blog posts for anyone so inclined.)

ConfigurationIsNotGeneric written at 01:36:08; Add Comment

2021-08-13

Some of my views on using YAML for human-written configuration files

Over on Twitter, I said something:

Hot take: YAML isn't a configuration language or a configuration language format, it's a serialization format. Is de-serializing some data structures the best way to configure a program? Maybe not. (Probably not. Mostly not.)

Like programming languages, all configuration systems communicate with both the computer and other people. But most are designed only for the computer to consume, not to be clear when people read it. De-serializing your live data structures is an extreme example of this.

(I've said the second bit before and I'm sure I'll say it again. See also.)

There are some configurations that are simple enough that I think YAML works okay; I'd say that these are pretty much ones that have sections with 'key = value' settings (but there are simpler, more readable formats for this, like TOML). Once you go beyond that to having your configuration in more complicated data structures, you start to have issues. Of course you can de-serialize to initial data formats that are then further interpreted by your program to create your actual configuration, but then you have an additional problem:

What YAML does is provide a straightforward format for simple data. It's mostly used to deserialize into some data structures of yours. YAML is opaque and relatively hostile to any structure beyond that; you get to embed it in YAML strings and structural relationships.

There are plenty of programs with complex configuration needs. If you use YAML for a program like this, you get at least one of two bad results; either you're using YAML to transport strings that will really be interpreted much more deeply later by the program, or you have to attempt to program your program through YAML structural relationships between magic keys, like Prometheus label rewrite rules.

As a string transport mechanism, YAML does mean you don't have to write a file level parser (but you're still going to be parsing your strings). But you pay a high price for that, especially in typical environments with bad YAML error reporting and bad YAML passthrough of things like line numbers, and file level parsers are not particularly difficult to write. And in the name of avoiding writing a decent file level parser, you're sticking people who have to deal with your configuration file with problems like YAML's whitespace issues, YAML's general complexity, and the general issue that editing strings embedded in YAML is generally not a particularly great experience.

If you attempt to configure some things through structural relationships between (YAML) elements, congratulations, you've just created a custom configuration language that is really terrible and verbose, and probably has bad error reporting if people make mistakes (or no error reporting at all). People did this before in XML and it wasn't any better then.

Using a good custom designed configuration file format instead of trying to shove things through the narrow pipe of YAML means that you have one integrated syntax that can be designed to be more readable, more expressive, and much easier to write. It will probably be easier to provide good error messages about problems (both syntax and semantics), ones that point directly to the line and say specifically what the problem is.

PS: If you have a complex configuration, there's no way to get out of writing some sort of parser unless you go to the extreme of making people hand-write your AST in YAML elements. Either you have to parse those embedded strings (where much of the complexity is) or you have to interpret and validate the combination of YAML fields and structures, or both.

(Forcing people to hand-write ASTs for you is such a terrible idea that I hope no program actually does this.)

YAMLAndConfigurationFiles written at 17:22:11; Add Comment

Go keeps surprising me with its careful design and specification

When I started writing my entry on why it matters that map values are unaddressable in Go, I expected to end it with a remark to the effect that I didn't understand why the Go authors had put this restriction in the specification but they probably had good reasons. But by the time I finished writing the entry, I had realized the language semantics problem of allowing 'm["nosuchkey"]' to be addressable. Then later when I looked up how Go maps store their values (and keys) I saw how allowing you to take the address of a map value probably wouldn't do what you wanted in natural Go.

I've had this experience more than once, where I've been surprised by how quietly careful Go's design and specification is. There are various technical areas of the Go specification that have had what seemed like arcane restrictions or rules, but when I've thought more deeply about them I've come up with reasonably good reasons for the rules to exist.

(Sometimes these are small ones, like how arbitrary precision constants affect cross compilation. Even things like always requiring delimited if blocks have reasons.)

On the one hand, this shouldn't be surprising in general. The designers of Go were quite experienced, knew what they were doing, and spent a fair amount of time working on it. Given that, it's very likely that everything in the Go specification was carefully considered and has a solid reason behind it, even if it's not immediately obvious.

On the other hand, this is not necessarily the usual experience with languages, especially languages that haven't gone through a formal (and somewhat adversarial) specification process. Solid language specifications are genuinely hard to create and you don't see them very often.

PS: This isn't to say that Go's design and specification is flawless, even apart from features it simply doesn't have. I haven't gone looking for flaws, but they probably exist and people have probably written about them.

GoCarefulDesign written at 00:01:21; Add Comment

