Wandering Thoughts

2021-07-23

Why it matters that map values are unaddressable in Go

A while ago, I wrote Addressable values in Go (and unaddressable ones too) as an attempt to get this tricky Go concept straight, since I hadn't fully understood it. To refresh, the Go specification's core description of this is in Address operators:

For an operand x of type T, the address operation &x generates a pointer of type *T to x. The operand must be addressable, that is, either a variable, pointer indirection, or slice indexing operation; or a field selector of an addressable struct operand; or an array indexing operation of an addressable array. As an exception to the addressability requirement, x may also be a (possibly parenthesized) composite literal. [...]

One of the things that is explicitly not addressable is a value in a map. As I mentioned in the original entry, the following is an error:

&m["key"]

On the surface this looks relatively unimportant. There aren't many situations where you might naturally explicitly take the address of a map value. But there turns out to be an important consequence of this, brought to my attention recently by this article.

One important thing in Go that addressability affects is Assignments:

Each left-hand side operand must be addressable, a map index expression, or (for = assignments only), the blank identifier. [...]

Suppose that you have map values that are structs with fields. Because map values are not addressable, and a field selector is only addressable when its struct operand is, you cannot directly assign values to the fields of map values. The following is an error:

m["key'].field = 10

This will give you the clear error of 'cannot assign to struct field m["key"].field in map'. To make this work, you must assign the map value to a temporary variable, modify the temporary, and put it back in the map:

t := m["key"]
t.field = 10
m["key"] = t

One reason I can think of for this restriction is that otherwise, Go might be required to silently materialize struct values in maps as a consequence of what looks like a simple field assignment. Consider:

m["nosuchkey"].field = 10

If this were to work, it would have to have the side effect of creating an entire m["nosuchkey"] value and setting it in the map for that key. Instead, Go refuses to allow it at compile time.

In the usual way of addressable values in Go, this all works if the map values are pointers to structs, and the syntax is exactly the same. This implies that in some cases you can convert a map's values from pointers to structs to the structs themselves without any code changes or errors, and in some cases you can't.

(However, with pointer map values the m["nosuchkey"].field case would be a runtime panic. When you deal with explicit pointers, Go makes you accept this possibility.)
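Here's a minimal compilable sketch of both cases (the struct type S and its field are my own placeholder names):

package main

import "fmt"

type S struct{ field int }

func main() {
	// With struct values, field assignment through a map index is a
	// compile-time error:
	//   ms := map[string]S{"key": {}}
	//   ms["key"].field = 10   // cannot assign to struct field ... in map

	// With pointer values, the same syntax compiles and works:
	mp := map[string]*S{"key": {}}
	mp["key"].field = 10
	fmt.Println(mp["key"].field) // 10

	// But a missing key yields a nil *S, so this would panic at run
	// time instead of failing to compile:
	//   mp["nosuchkey"].field = 10
}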

This also affects method calls (and method values) in some situations, because of this special case:

[...] If x is addressable and &x's method set contains m, x.m() is shorthand for (&x).m(): [...]

If you have a type T and there is a pointer receiver method *T.Mp(), you can normally call .Mp() even on a non-pointer value:

var v T
v.Mp()

However, this requires that the value be addressable. Since map values are not addressable, the following is an error (when the type of map values is T):

m["key"].Mp()

Currently, you get two errors for this (reported at the same location):

cannot call pointer method on m["key"]
cannot take the address of m["key"]

This is the same error message as we saw for function return values in my original entry, just about a different thing. As before, converting the map value type from T to *T makes the error go away, with the syntax staying exactly the same.
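Putting the pieces together, here is a compilable sketch of the pointer method case (using the hypothetical T and Mp from above):

package main

type T struct{ n int }

func (t *T) Mp() { t.n++ }

func main() {
	var v T
	v.Mp() // fine: v is addressable, so this is (&v).Mp()

	// With T map values, the method call is a compile-time error:
	//   mt := map[string]T{"key": {}}
	//   mt["key"].Mp() // cannot call pointer method on mt["key"]

	// With *T map values, no address needs to be taken:
	mp := map[string]*T{"key": {}}
	mp["key"].Mp()
}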

As with the field access case, Go not allowing this means that it doesn't have to consider what to do if you write:

m["nosuchkey"].Mp()

While there are various plausible options for what could happen here if Go accepted it, I think the one that most people would expect is that it would work the same as:

t := m["nosuchkey"]
t.Mp()
m["nosuchkey"] = t

Which is to say, Go would have to materialize a value and then add it to the map. As a subtle issue, the working version makes it clear when m["nosuchkey"] actually exists. This also makes it explicit that the method call isn't manipulating the value that is in the map.

(My original entry was sparked by a Dave Cheney pop quiz involving the type of a function return, so I was thinking more about function return values than other sorts of values.)

PS: I think this lack of map value addressability means that there's no way today in Go to directly modify a map value or its fields. Instead you must copy the map value into a temporary, manipulate the temporary, and then put it back in the map. This is probably a feature.

GoAddressableValuesII written at 23:33:18

2021-07-19

Making a Go program build with Go modules can be not a small change

In theory, at some point in the future Go will stop supporting the traditional GOPATH mode. When this happens, if you want to still build old Go programs that you have sitting around in checked out version control repositories, you will need to modularize them. Once upon a time, I thought that this would be as simple as going to the root of your copy of the repo, then running 'go mod init ...' and 'go mod tidy'. Unfortunately, life is not this simple and there can be at least two complications.

The first complication is moved and renamed repositories for modules, where the moved module has a go.mod that declares its new name. For example, what is now github.com/hexops/vecty was once github.com/gopherjs/vecty. In a non-modular Go build, you can still import it under the old path and it will work. However, the moment you attempt to modularize the program, 'go mod tidy' will complain and stop:

github.com/gopherjs/vecty: github.com/gopherjs/vecty@v0.6.0: parsing go.mod:
module declares its path as: github.com/hexops/vecty
        but was required as: github.com/gopherjs/vecty

In theory you may be able to get this to work with a go.mod replace directive. In practice my attempts to do this resulted in 'go mod tidy' errors about:

go: github.com/hexops/vecty@v0.6.0 used for two different module paths (github.com/gopherjs/vecty and github.com/hexops/vecty)

(You also need to get the version number or other version identifier of the moved repository.)
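For reference, the sort of go.mod replace directive I mean looks like this (and again, this is the approach that still failed for me with the 'two different module paths' error):

replace github.com/gopherjs/vecty => github.com/hexops/vecty v0.6.0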

The general fix is to edit every import of packages from the module to use the new location. Then you can run 'go mod tidy' without it complaining.

The second complication is modules that have moved to versions above v1, possibly very far past v1; for example, github.com/google/go-github is up to v37, and was only modularized at v18 (it doesn't even have a tagged v1). A GOPATH build of the program you're trying to modularize will use whatever version of the repository you have checked out, which may well be the current one, and the code will import it without a version suffix (as 'github.com/google/go-github'). When you run 'go mod tidy', Go will attempt to find the most recent tag (or version of the repository) that doesn't have a go.mod file, and specify that version in your go.mod with a '+incompatible' suffix. Depending on how far Go had to rewind, this may be a version of the package that is far older than the program expects.

(If a go.mod existed for a v1 version, I suspect that 'go mod tidy' will pick that in this case. But I haven't tried to test it, partly for lack of a suitable module to test against. With github.com/google/go-github, I get 'v17.0.0+incompatible', the last tagged version before it was modularized.)
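In go.mod terms, what you wind up with looks like this (this is the line my go-github test produced):

require github.com/google/go-github v17.0.0+incompatible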

Again the fix is to edit the program's source code to change every import of the package to use the proper versioned package. Instead of importing, say, 'github.com/google/go-github/github', you would import 'github.com/google/go-github/v37/github'.
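In Go source, the change looks like this (using go-github as the example):

import (
	// Old, unversioned GOPATH-era import path:
	//   "github.com/google/go-github/github"
	// New import path, pinned to the major version the code expects:
	"github.com/google/go-github/v37/github"
)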

Although I haven't tested it extensively, it appears that go-imports-rename can be used to make both sorts of changes. I successfully used it to automatically modify my test third party repository.

(There may be other tools to do this package import renaming, but this is the one I could find.)

The unfortunate part of all of this is that it requires you to make changes to files that will be under version control in the repo. If the upstream updates things in the future, this will probably make your life more complicated.

(In some cases, 'go mod tidy' may insist that you clean up imports in code that's in sub-packages in the repository that aren't actually imported and used in the program itself.)

GoModularizationTwoGotchas written at 23:32:20

2021-06-24

Go 1.17 is deprecating the traditional use of 'go get'

A change recently landed in the development version of Go, and will be in Go 1.17, with the title of doc/go1.17: note deprecation of 'go get' for installing commands. The actual updated documentation (from the current draft release notes) says:

go get prints a deprecation warning when installing commands outside the main module (without the -d flag). go install cmd@version should be used instead to install a command at a specific version, using a suffix like @latest or @v1.2.3. In Go 1.18, the -d flag will always be enabled, and go get will only be used to change dependencies in go.mod.

I have some views on this. The first one is that this is a pretty abrupt deprecation schedule. Go will be printing a warning for only one release, which means six months or so. People who don't update to every Go version (or whose operating system's packaged Go doesn't keep up) will go from 'go get' installing programs normally to having it fail completely.

The bigger thing is that this particular deprecation is probably going to mean that fewer people use utilities and other programs written in Go. There are quite a lot of open source programs written in Go out there that currently say to install them with 'go get ...' in their READMEs and other resources. Starting with Go 1.18, that's going to fail with some sort of error message. Some of these programs are actively still being developed and maintained, and perhaps will update their documentation to talk about 'go install'. Others are effectively inactive and will not update, so in the future anyone following their instructions will fail and will probably give up, despite the program still being perfectly good.
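Concretely, a README's installation instructions would have to change roughly like this (the program path here is a placeholder):

# Works today, warns in Go 1.17, and will fail in Go 1.18:
go get example.com/some/cmd

# The modern replacement:
go install example.com/some/cmd@latest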

(There's probably also general documentation out on the Internet that talks about using 'go get' to install Go programs, and that's also going to be inaccurate. Hopefully none of them turn up high on search engine results for how to install Go programs.)

On a philosophical level, I'm unhappy with Go deciding that it's going to rapidly discard something that was the way to do things for years and then probably the dominant way for several more years. Years of use build up a lot of momentum; the Go developers want that momentum to shift in months, in what is now a large community. People don't shift that fast, and I think the result is likely to be confusing and annoying people for quite some time.

(This is also in some sense a pointless shift. The Go developers could just as well have said that plain 'go get' would become an alias for 'go install ...@latest', instead of breaking it entirely. Yes, their approach is in some sense more correct; it's also less humane. It's robot logic (also).)

GoAndDeprecatingGoGet written at 00:17:53

2021-06-13

Rust 1.x seems to not always be backward compatible in practice

For a long time now, I've needed (or at least wanted) to simultaneously put a stress load on a test NFS fileserver and a CPU and memory stress load on another machine. Back in the old days on Linux, I would have done this by compiling the Linux kernel repeatedly, but in today's modern age the Linux kernel isn't big enough any more; it simply builds too fast. I eventually settled on repeatedly building Firefox from source, partly because it's something I'm already familiar with. Specifically, on one of our Ubuntu 18.04 LTS machines I wound up building the then current release version of Firefox with a then current Rust version installed into a custom location with Rustup. Everything worked fine for a long time, and then one day I made the mistake of absently deciding to run 'rustup update' on my custom Rust location. My Firefox builds immediately blew up with a series of Cargo errors and then later Rust errors.

I was unable to fix this and restore my Firefox build environment to a working state. In particular, it appears that older versions of Firefox (beyond some point I didn't try to determine precisely) can't be built with modern versions of Rust and Cargo (which is tightly coupled to Rust). You can to some extent patch Cargo.toml files so that Cargo will accept them, but the Rust errors were beyond my ability to deal with. My eventual solution was to obtain the current Firefox release and the current Rust release, and run Firefox's own 'build a custom virtual environment with all of your build dependencies' script. Someday this too will break, and in the meantime any performance measurements from before this transition aren't comparable with those after it, since I'm building different things with a different toolchain.

All of this was very startling to me, especially the Rust compile time errors. I'm used to Go, where there is a strong commitment to keeping old Go code building properly even as Go moves forward (but unfortunately less commitment to how the Go equivalent of Cargo works). The Rust developers are evidently moving more aggressively than that to develop their language, and are willing to invalidate older code if they feel strongly enough about it. When I am in a bad mood, I feel that this means 'Rust 1.x' is not really '1.x' as I would consider it. Either Rust has not really reached 1.0 yet, or there has been a whole series of '2.0', '3.0' and so on Rust releases that are not actually labeled as such.

Update: Much of this is a Firefox problem, where Mozilla has apparently specifically set an environment variable to enable unstable and changing Rust features in the stable Rust compiler. The effect is to tie specific Firefox versions to a narrow range of stable Rust compiler versions that implement the specific unstable features that Firefox's code expects.

I'm sure that Rust people have a whole series of good explanations for why all of this was necessary and why it's a good thing that I can no longer build older versions of Firefox (for instance, some of the Rust errors may be for things that were unsafe but not previously detected). All I can say is that as an outsider, the resulting experience is not a particularly good one.

(Rust is not the only language and build environment that can suffer from this; C compilers can give you issues if you tell them to abort on warnings.)

(This elaborates on some Tweets of mine at the time.)

Rust1BackwardIncompatibility written at 00:04:46

2021-05-30

Go 1.17 will still support the old GOPATH mode (as well as modules)

It's no secret that the Go developers want to get rid of GOPATH based development (also known as non-modular mode). At some point in the future, Go modules will be our only option, as covered in their blog entry Eleven Years of Go. At the time, and even today, the GOPATH wiki page said about the timing of this:

  • Go 1.17 (August 2021) will remove the GO111MODULE setting and GOPATH [development] mode entirely, using module mode always.

Back in December when I wrote about this, I said in an aside that the timeline might be too aggressive and GOPATH mode might not be removed in Go 1.17. That seems to be what's happened, because the current development tree still supports GOPATH mode and Go 1.17 almost certainly will too, given where we are in Go's development and release cycle.

According to the Go Release Cycle, the general plan of Go progress is that after Go 1.16 was released, development happened for three months (February, March, and April), then the release freeze started May 1st. At this point, from what I can tell we're well into the Go 1.17 freeze (there have been commits to the Go tree that refer to it being in the freeze, for example). I think it would be pretty unusual to land a big (conceptual) change like the removal of GOPATH mode now, and even more unusual after the beta of 1.17 is made in a few days at the start of June.

(I was going to say that the actual implementation could be done by hardwiring GO111MODULE to 'on' and ignoring the environment variable, which could make it a small change in terms of the amount of code, but after looking at the existing code a little bit I'm not sure it's that simple.)

My current Go issue searches haven't turned up any sort of tracking issue for removing GOPATH mode, so I can't tell if the Go team didn't have time to get to this before the freeze or if they decided to wait longer to give modules more time to percolate through the ecosystem of existing packages. Whichever they picked, I'm glad that they made this choice; I still build a variety of other people's Go programs that haven't been modularized yet, so the removal of GOPATH mode would have directly affected me.

PS: Presumably the GOPATH wiki page will be updated at some point, either to set the future date to Go 1.18 (February 2022) or just to leave it indeterminate.

Go117StillGopathMode written at 00:26:42

2021-05-24

Rust is a wave of the future

Recently I tweeted:

Once again I am being tempted to try out Rust, despite a relatively high confidence that I'm going to not like it. It's the wave of the future, though. Sooner or later I'm going to have to read and hack on Rust code.

More than five years ago, I wrote an entry on my feelings about using Rust myself, which I have since summarized to people as 'Rust cares about things that I don't any more'. I understand that Rust has much better ergonomics than it did in 2015 (when Rust 1.0 was just out), and so some of my other issues might be better now.

But that's not why Rust is a wave of the future (in the manner of tweets, I said 'the wave'). Rust is a wave of the future because a lot of people are fond of it and they are writing more and more things in Rust, and some of these things are things that matter to plenty of people. There's Rust in your Python cryptography. There's Rust in Curl (sort of) (also). There's Rust in your librsvg. There's a lot of Rust in your Firefox. There are a growing number of command line tools written in Rust, including the excellent ripgrep. Someday there will probably be Rust in the Linux kernel. All of this is only growing with time, especially in the open source world.

All of this means that dealing with Rust is going to become increasingly necessary for me and a lot of people. We may not write it, but we need to be able to deal with programs that use it and are written in it. Our systems increasingly need a Rust build environment, I recently explored Rustup, and sooner or later I'm going to have to read and perhaps change Rust code in order to understand some problem we're having and perhaps fix it. Learning how to write some Rust myself is one way of getting the experience and knowledge necessary to do that well.

By the way, this isn't some nefarious conspiracy to force Rust on everyone. Some of it is that Rust really does solve problems that people have, but as I mentioned, a larger part of it is that lots of people genuinely like writing in Rust. When people like a programming language, things get written in that language and some of them become widely used or popular (or both). We saw this with Go in certain areas, and we're seeing this with Rust.

(There's an interesting question, one that I don't know the answer to, about why we haven't seen this for various other languages. Of course we have seen this for some; Javascript is large now and there are general tools written in it, and Java was large once upon a time. Some of it is likely that Rust can be airlifted into C and C++ programs relatively easily, but that doesn't explain programs written from scratch.)

RustInOurFuture written at 23:43:02

2021-05-14

The Bourne shell and Bash aren't the right languages for larger programs

In my recent entry on DKMS, I said some negative things about it being an almost 4,000 line long Bash script. In comments, a couple of people questioned this; for instance, Arnaud Gomes asked:

I seem to recall a post of yours a few years ago about the main difference between the shell and python being that a shell program is basically just glue between external commands. Isn't it what DKMS is?

(Arnaud Gomes is probably thinking of this entry on the gulf between shells and scripting language.)

There are three overlapping problems that almost always manifest in large shell scripts. The first problem is that shell scripts are more or less constrained to be all in a single file. 4,000 lines in a single file is hard to keep track of in any language; people do much better when they can chunk up complexity into smaller units.

The second is that the Bourne shell's oddities, limitations, and outsourced language elements throw unnecessary obstacles in the way of expressing your program's logic. DKMS may run a lot of external programs, but as you can see from the manpage, it contains a lot of features and has a lot of complex logic to decide what to do to what. Pretty much any large shell script is going to contain a lot of logic, because there are very few situations where you spend hundreds or thousands of lines just running other programs and not doing much yourself. If you write this logic in shell scripts, you must express it within the inherent limitations of the shell, and the result is not all that easy to follow, which makes it hard to maintain and expand over time.

(These days the Bourne shell does have arithmetic, at least. But figuring out how to use various random Unix programs to efficiently express and test parts of your logic is still a Turing tarpit.)

The third problem is that the Bourne shell lacks important language features that normally act to make coding errors less likely and contain and manage code complexity. The lack of these makes it harder and more error prone to express what you're doing, harder to keep track of what your code does, and contributes to making your logic harder to follow. Using Bash instead of plain Bourne shell fixes only some of these. One small and typical problem area is that the Bourne shell doesn't have named function arguments; this creates problems when reading the script and enables errors when writing it. A large problem area is that the shell has very limited data types, especially for function arguments. Plain Bourne shell has only strings (and the special list of arguments). Bash adds indexed arrays and 'associative arrays' (maps in Go, dicts in Python), but they can't be nested and passing them as function arguments is at best somewhat unnatural, which strongly limits their usefulness.

(Lacking data structures does a number of bad things, but one of them is that it makes it harder for shell scripts to gather data and keep track of things.)

PS: If you've never looked at a large shell script that's trying its best, it's worthwhile to read (or skim) part of the dkms script. It may be eye-opening about what doing large scale shell script programming forces you into.

BourneBadForLargeScripts written at 00:11:57

2021-05-12

The Bourne shell lets you set variables in if expressions

Today I was writing some Bourne shell code where I wanted to run a command, gather its output (if any), and see whether or not it succeeded. My standard form for this is:

res="$(... whatever ...)"
if [ "$?" -eq 0 ]; then ...
  ...
fi

(I could potentially use a pipeline to process the command's output, but there can be reasons to capture the output. Here I deliberately don't want to process any output the command may produce if it reports that it failed.)

When I ran my script through shellcheck (as I always do these days), it reported SC2181, which is, to quote:

SC2181: Check exit code directly with e.g. 'if mycmd;', not indirectly with $?.

This isn't the first time I've seen SC2181, and as always, I rolled my eyes at it; it seemed obviously wrong, because of course you can't merge these two lines together. But this time I went off to the Shellcheck repository to report it as an issue, and before doing so I did a search, and that was when I discovered that Shellcheck was not wrong.

To my surprise, the Bourne shell allows you to perform command substitutions and capture the output in variables in if expressions. You really can write my two lines in a single one as:

if res="$(...)"; then
  ...
fi

You can even set variables to ordinary values if you want to, but 'var=$value' normally (or always) has an exit status of 0, so it's not very useful to put it in an if expression.

Despite this transformation being possible, I opted not to do it in my case because my command substitution was for a rather long command. In my opinion, it's just as wrong for readability to write:

if res="$( ... giant long command line ...)" ; then

as it would be to write a similar thing in C:

if (res = ptr->someLongFunction(with, many, arguments, that, sprawl)) {

In both cases things are much more readable and clear if you put them on two lines instead of trying to jam everything onto one line. The Bourne shell version should be written the way I initially wrote it, and the C should be:

res = ptr->someLongFunction(with, many, arguments, that, sprawl);
if (res) {

Just because you can jam the two things together on one line doesn't mean that you should. Sometimes it's clearer on one line and sometimes it isn't.

(Of course ideally you wouldn't have so many arguments to a C function call. Unfortunately long command lines can be unavoidable in shell scripts, because some commands have very verbose options.)

(This elaborates on a tweet of mine, and also on the Fediverse.)

PS: This isn't the first Shellcheck warning that I routinely ignore.

BourneIfCanSetVars written at 23:04:21

2021-04-21

Go 1.17 will allow converting a slice to an array pointer (some of the time)

In Go, slices contain a reference to their backing array, whether this is an array that exists somewhere as a distinct variable of its own or simply an anonymous array that was allocated to support the slice (that it can be either can sometimes make slices seem like two data structures in one). The presence of this backing array can lead to interesting memory leaks or just surprising changes to your slices, but in Go through 1.16 you can't get direct access to the backing array without using the reflect and unsafe packages. However, it's possible to do this safely in some cases, and there's been a long standing Go enhancement request for it in issue #395 (which dates from 2009, shortly after Go was announced and well before Go 1.0 was released).

In Go 1.17 this will be possible, due to a series of changes starting with commit 1c268431f4, which updates the specification. The specification's description of this is straightforward:

Converting a slice to an array pointer yields a pointer to the underlying array of the slice. If the length of the slice is less than the length of the array, a run-time panic occurs.

It's harmless for the slice to be longer than the length of the array (although, as usual, the conversion will keep the entire backing array of the slice alive). A longer slice only means that your array pointer won't cover all of the original slice's backing array.

The specification has some examples (all comments are from the specification):

s := make([]byte, 2, 4)
s0 := (*[0]byte)(s)      // s0 != nil
s2 := (*[2]byte)(s)      // &s2[0] == &s[0]
s4 := (*[4]byte)(s)      // panics: len([4]byte) > len(s)

var t []string
t0 := (*[0]string)(t)    // t0 == nil
t1 := (*[1]string)(t)    // panics: len([1]string) > len(t)

The discussion around s2 shows that this conversion doesn't (and can't) allocate a new array, making it guaranteed to be efficient. The cases of s0 and t0 are interesting; converting a non-empty slice to a pointer to a 0-length array must give you a valid (non-nil) pointer, even though you can't do anything with it, but converting a nil slice gives you a nil pointer.

At the moment there's no way to do this conversion with a check to see if it would panic, the way you can with type assertions. If you think you might have a too-short slice, you need to use an if.
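For example, a guarded version of the conversion might look like this (a sketch; headPtr is my own name for it):

// headPtr returns a pointer to the first four bytes of s as an
// array, or false if s is too short for the conversion to succeed.
func headPtr(s []byte) (*[4]byte, bool) {
	if len(s) < 4 {
		return nil, false
	}
	return (*[4]byte)(s), true
}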

The reflect package has also been updated to support conversions from slices to array pointers, in commit 760d3b2a16. If you're working with reflect for this, you should read the caution from the commit:

Note that this removes an invariant:

v.Type().ConvertibleTo(t) might return true, yet v.Convert(t) might panic nevertheless.

This is a fairly unavoidable consequence of the decision to add the first-ever conversion that can panic.

ConvertibleTo describes a relationship between types, but whether the conversion panics now depends on the value, not just the type.

(I believe that you can always use reflect.ArrayOf() to create a type with the right number of elements, and then reflect.PtrTo() to create the necessary 'pointer to array of size X' type.)
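Here is a small sketch of the reflect version (my own illustration, not code from the Go tree):

package main

import (
	"fmt"
	"reflect"
)

func main() {
	s := []byte{1, 2, 3}
	v := reflect.ValueOf(s)
	// Build the *[2]byte type dynamically, then convert. With a
	// too-short slice, Convert would panic here even though
	// ConvertibleTo reported true.
	at := reflect.PtrTo(reflect.ArrayOf(2, v.Type().Elem()))
	p := v.Convert(at).Interface().(*[2]byte)
	fmt.Println(&p[0] == &s[0]) // true: they share the backing array
}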

GoConvertSliceToArray written at 23:46:14

2021-04-07

Rust's rustup tool is surprisingly nice and well behaved

I don't code in Rust (it's not my thing), but I do periodically look into other people's Rust code, most recently Julia Evans' dnspeep. When I had reason to poke into dnspeep, I decided it was time I got a LSP-enabled GNU Emacs Rust environment. Following Robert Krahn's guide, one part of this is rust-analyzer, which you can either download as a binary or build yourself; building it is the option I chose. On my Fedora 33 desktops, this was a simple git clone and a 'cargo xtask install --server'. On work's Ubuntu 18.04 servers, it turned out that the stock Ubuntu 18.04 versions of Rust and Cargo were too old to build rust-analyzer.

(Our login servers are still running Ubuntu 18.04 because of still ongoing world and local events.)

When I poked around in my work Ubuntu account I discovered that at some point I'd gotten a copy of rustup, the Rust toolchain installer (this is foreshadowing for another entry). As a system administrator I have a reflexive aversion to black box tools that promise to magically install something on my system, partly because I don't want my system changed (also), and this is what the rustup website presents rustup as. However, some more reading told me that rustup was much better behaved than that, and this is how it proved to be in practice. I had apparently never initialized a rustup setup in my $HOME and it took me some time to figure out what to do, but once I did, rustup was straightforward and confined itself (more or less) to $HOME/.cargo.

(As I found out later, the starting command I probably wanted was 'rustup default stable'. I went through a more elaborate chain of things like 'rustup toolchain install stable' and then figuring out how to make it the default.)
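For the record, the commands involved are roughly (as far as I can reconstruct it):

# The one-step version: install (if needed) and select the stable
# toolchain as the default.
rustup default stable

# The two-step version I more or less stumbled through:
rustup toolchain install stable
rustup default stable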

That rustup doesn't insist on mangling the system shouldn't surprise me by now, because this is relatively normal behavior for other modern language package managers like Python's pip, NPM, and Go. But it's still nice and appreciated. As a sysadmin and a Unix user, I appreciate things that will install and operate on a per-user basis, especially if they confine all of their activities to a single directory hierarchy in $HOME.

One thing about rustup is that all of the binaries it manages in ~/.cargo/bin are hardlinks to itself, and so their installation time never changes (presumably unless you update rustup itself). This was a bit disconcerting when I was playing around with toolchain settings and so on; I would change things in rustup, look in .cargo/bin, and nothing would seem to have changed. Behind the scenes rustup was changing stuff in $HOME/.rustup, but that was less visible to me.

I'm not sure how I got my rustup binary in the first place. I think I probably got it through the "other installation methods" chapter, undoubtedly with at least --no-modify-path. It's a pity that there's no straightforward way I can find to bootstrap rustup with cargo, because various Linuxes supply Cargo and Rust but not necessarily rustup.

(This elaborates on a tweet of mine and will hopefully get this pleasant side of rustup to stick in my mind for any future needs. Because Ubuntu 18.04 is so long in the tooth by now, I also had to shave some other yaks to get a modern Rust LSP environment set up in GNU Emacs.)

RustupFairlyNice written at 21:51:55
