Wandering Thoughts archives


Our never-used system for user-provided NFS accessible storage

Reading most of my entries here on Wandering Thoughts, you might get the impression that all of the projects we do are a good idea and successful. This is not in fact the case. Instead, it's selection bias in what I write about, partly because it's often not very interesting to write about things we decided we couldn't implement, or that we implemented and then they never went anywhere until we quietly decommissioned them. But today I have a good lead-in to talk about one particular quiet failure, especially since it also showcases a sysadmin approach to dealing with new problems by reducing them to previously solved ones.

We allow professors to purchase reliable NFS storage, most usually backed up, in the form of space on our fileservers. However, this space is significantly more expensive than just buying raw disks, even if you permanently forgo backups in exchange for a discount. People are perennially unhappy about this, for natural reasons, and every so often we try to do something about it. The general form our attempted solutions have taken is a model where you pay for the physical disks and we put them in a server that we operate for a bit of an extra fee. You buy however many disks you want, you specify the redundancy level you want given the disks, and your storage lasts as long as your disks do, or at least as long as their warranty does. One of the things that people have traditionally wanted to do with this user-provided storage is NFS export it to at least their own machines, which leaves us with the problem of operating an NFS server (or several) built on top of people's random disks.

In late 2014, we went through an iteration of seeing this need (again) and trying to come up with a design and architecture that worked for us, one that we felt that we could administer and operate with reasonable confidence. This was just after we were deploying our second generation of ZFS NFS fileservers, where OmniOS frontends did the NFS and ZFS but talked to their disks over iSCSI, with the physical disks living on Linux backends. In a triumph of brute force design, we decided that we would reduce our user-provided NFS server problem to the already solved problem of doing NFS fileservice with OmniOS and iSCSI backends.

Of course the user-provided NFS storage would not use the full scale setup of our OmniOS fileservers; instead it would use a much simpler brute force version. Each 'fileserver' would be an OmniOS frontend (running on a Dell 1U server instead of our regular OmniOS fileserver hardware) that was directly connected (with a single network cable) to a single Linux iSCSI backend that would hold all of the user-provided disks. This gave us a setup that looked and operated like the OmniOS NFS fileserver environment we already had confidence in, at relatively low hardware cost. By reducing things this way, we didn't have to worry about NFS service on Linux or putting lots of disks on OmniOS, and in theory everything would be great.

We definitely built a single OmniOS machine to be the initial NFS frontend. I'm not sure we ever built an iSCSI backend for it, because in practice we never went anywhere with actually selling this idea to professors and having them buy disks for it. Instead, a few years later (in 2016), we quietly decommissioned the single OmniOS frontend we'd built. The last lingering relic of this entire cycle of design, build, and decommissioning was a third iSCSI network we noticed recently.

(I believe the plan was that all NFS frontends and all iSCSI backends for this project would have used the same 'iscsi3' network, even though they weren't all networked together and so in some sense each pair should have had its own network. Probably we would have still used unique IP addresses, just in case.)

sysadmin/UserProvidedNFSStorageFail written at 23:39:12

In Go 1.18, generics are implemented through code specialization

The news of the time interval is that Go 1.18 Beta 1 is available, with generics. Barring something terrible being discovered about either the design or the current implementation of Go generics, this means that they will be part of Go 1.18 when it's released in a few months. Further, because we're already in the beta phase, it's highly unlikely that how generics are implemented will change substantially before Go 1.18.

(The Go developers generally won't want to do a major code change between beta and release, so if there turn out to be significant enough problems with the current implementation, it's more likely that generics will be pulled from Go 1.18 entirely, even at this late stage.)

One of the practical questions around generics in Go has been how they would be implemented. The generics design proposal was careful to not close out any options. To quote from its short section on implementation:

We believe that this design permits different implementation choices. Code may be compiled separately for each set of type arguments, or it may be compiled as though each type argument is handled similarly to an interface type with method calls, or there may be some combination of the two.

The first option listed, generating and compiling code separately for each set of type arguments, is generally called specialization or monomorphization. The second option would likely be a combination of how interfaces are implemented and how the Go runtime implements maps efficiently without generics, depending on what operations you were doing in the generic code.

With the release of Go 1.18 beta, we have a pretty firm answer. At least in the initial implementation in Go 1.18, generics in Go will be implemented with specialization. When you use generics in Go, the Go compiler effectively creates a non-generic version of the code that has the specific types you're using, and compiles that. If you use generic code with two or three or four different types (or sets of types), you get two or three or four different versions of the code.

Update: This is partially wrong. Generics aren't necessarily specialized in practice, and the implementation design was not based around specialization. The current implementation in Go 1.18 is described in the design proposal Generics implementation - GC Shape Stenciling; see the comments for more detail.

One way to see this for yourself is with the use of the godbolt.org compiler explorer, which has the latest Go development version as one of its compiler options. That lets you write generics examples like this silly example and see what they compile to. This example has a very simple, silly generic function and some uses of it:

func Compit[T comparable](a, b T) bool {
    return a == b
}

func CompInt(a, b int) bool {
    return Compit(a, b)
}

func CompFloat(a, b float64) bool {
    return Compit(a, b)
}

func CompString(a, b string) bool {
    return Compit(a, b)
}

These compile to three different sets of amd64 assembly code, each using the relevant CPU instructions (and for strings, runtime functions) to compare the different sorts of values. In addition, CompInt and CompFloat actually receive their arguments in different registers, because Go's amd64 register-based calling convention puts integer arguments in different registers than floating point ones. This is a common decision in calling conventions; many C calling conventions also behave this way, which can lead to some interesting effects.

In theory, if you use the same generic code with different declared types that have the same underlying type, the Go compiler may not need to generate a new function for each declared type. In other words, suppose that you had:

type Tank int

func CompTank(a, b Tank) bool {
    return Compit(a, b)
}

In theory CompInt and CompTank could be the same function, not merely two functions with the same machine code. In practice I don't know if the Go compiler will ever do this, and there are a number of situations where it can't do this because the declared type is in some way used by the generic code. For example:

func Report[T any](s ...T) {
    for _, v := range s {
        fmt.Printf("%T %#v\n", v, v)
    }
}

(Taken from the Go tip playground current example.)

If you invoke Report() on arguments of type Tank instead of just int, it had better be able to report their type correctly (and it does).

Update: In practice, I now believe these sorts of functions can be compiled in Go 1.18 to use common code, depending on what types are involved. See the implementation proposal and the comments for how this works.

PS: An interesting thing happens with inferred typing of generics and constants. Suppose that you have the following two lines:

Report(Tank(10), 20)
Report(30, Tank(40))

You cannot mix two different types in the same invocation of Report(), because it has only a single type for all of its arguments. So these two cases sort of look like they should be as invalid as 'Report(int(1), Tank(2))'. But untyped constants are special in inferred typing; if it makes type inference work, they take on the type of the typed argument instead of their default of, say, int. So in both lines, both arguments are of type Tank.
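We can verify this for ourselves with a small program (my own sketch, using a helper that returns the %T names rather than printing them):

```go
package main

import "fmt"

type Tank int

// typesOf returns the %T name of each argument. T is inferred from
// the typed arguments; untyped constants then take on type T.
func typesOf[T any](s ...T) []string {
	var out []string
	for _, v := range s {
		out = append(out, fmt.Sprintf("%T", v))
	}
	return out
}

func main() {
	fmt.Println(typesOf(Tank(10), 20)) // [main.Tank main.Tank]
	fmt.Println(typesOf(30, Tank(40))) // [main.Tank main.Tank]
}
```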

PPS: You can appear to mix types in arguments to Report() with the trick:

Report[any]("abc", 10)

As we know from the draft release notes, the new predeclared identifier 'any' is an alias for 'interface{}'. Our syntax here is forcing a type T of interface{}, which of course anything can be converted to automatically, and then the real types show through in the end.

programming/Go18GenericsSpecialized written at 00:19:18
