Wandering Thoughts

2020-05-13

Getting my head around what things aren't comparable in Go

It started with Dave Cheney's Ensmallening Go binaries by prohibiting comparisons (and earlier tweets I saw about this), which talks about a new trick for making Go binaries smaller by getting the Go compiler to not emit some per-type internally generated support functions that are used to compare compound types like structs. This is done by deliberately making your struct type incomparable, by including an incomparable field. All of this made me realize that I didn't actually know what things are incomparable in Go.

In the language specification, this is discussed in the section on comparison operators. The specification first runs down a large list of things that are comparable, and how, and then also tells us what was left out:

Slice, map, and function values are not comparable. However, as a special case, a slice, map, or function value may be compared to the predeclared identifier nil. [...]

(This is genuinely helpful. Certain sorts of minimalistic specifications would have left this out, leaving us to cross-reference the total set of types against the list of comparable types to work out what's incomparable.)

It also has an important earlier note about struct values:

  • Struct values are comparable if all their fields are comparable. Two struct values are equal if their corresponding non-blank fields are equal.

Note that this implicitly differentiates between how comparability is determined and how equality is checked. In structs, a blank field may affect whether the struct is comparable at all, but if it is comparable, the field is skipped when actually doing the equality check. This makes sense since one use of blank fields in structs is to create padding and help with alignment, as shown in Struct types.
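
For instance, a struct can use a blank field of a comparable type as padding without giving up comparability (a sketch of my own, not from the specification):

type padded struct {
  a byte
  _ [7]byte // blank padding; its type must still be comparable, but it's skipped in equality checks
  b int64
}

Two padded values are equal if their a and b fields are equal; the blank field is never looked at.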

The next important thing (which is not quite spelled out explicitly in the specification) is that comparability is an abstract idea that's based purely on field types, not on what fields actually exist in memory. Consider the following struct:

type t struct {
  _ [0][]byte
  a int64
}

A blank zero-size array at the start of a struct occupies no memory and in a sense doesn't exist in the actual concrete struct in memory (if placed elsewhere in the struct it may have effects on alignment and total size in current Go, although I haven't looked for what the specification says about that). You could imagine a world where such nonexistent fields didn't affect comparability; all that mattered would be whether the actual fields present in memory were comparable. However, Go doesn't behave this way. Although the blank, zero-sized array of slices doesn't exist in any concrete terms, the fact that it's present as a non-comparable field in the struct is enough for Go to declare the entire struct incomparable.
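
To illustrate (a sketch of mine; the exact compiler message varies by Go version):

var x, y t
_ = x == y // compile error: struct containing [0][]byte cannot be compared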

As a side note, since you can't take the address of functions, there's no way to manufacture a comparable value when starting from a function. If you have a function field in a struct and you want to see which one of a number of possible implementations a particular instance of the struct is using, you're out of luck. All you can do is compare your function fields against nil to see whether they've been set to some implementation or if you should use some sort of default behavior.
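
A minimal sketch of that nil-check pattern (all of the names here are hypothetical):

type handler struct {
  process func(string) string // nil means 'use the default behavior'
}

func (h *handler) run(s string) string {
  if h.process != nil {
    return h.process(s) // some implementation was set
  }
  return s // default behavior
}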

(Since you can compare pointers and you can take the address of slice and map variables, you can manufacture comparable values for them. But it's generally not very useful outside of very special cases.)
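
For example (my own sketch; note that 'm1 == m2' itself would not compile):

m1 := map[string]int{}
m2 := m1
fmt.Println(&m1 == &m2) // false: this compares the addresses of the variables, not the maps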

GoUncomparableThings written at 23:29:19

2020-05-04

The Go compiler has real improvements in new versions (and why)

When I wrote that I think you should generally be using the latest version of Go, I said that one reason was that new versions of Go usually include improvements that speed up your code (implicitly in meaningful ways), not just better things in the standard library. This might raise a few eyebrows, because while it's routine for new releases of C compilers and so on to tout better performance and more optimizations, these rarely result in clearly visible improvements. As it happens, Go is not like that. New major versions of Go (eg 1.13 and 1.14) often provide real and clearly visible improvements for Go programs, so that they run faster, use less memory, and soon will take up somewhat less space on disk.

My impression is that there are two big reasons that this happens in Go but doesn't usually happen in many other languages; they are that Go is still a relatively young language and it has a complex runtime (one that does both concurrency and garbage collection). Generally, Go started out with straightforward implementations of pretty much everything (both in the runtime and in the compiler), and it has been steadily improving them since. Sometimes this is simply in small improvements (especially in code generation, which sees a steady stream of small optimizations) and sometimes this is in much larger rewrites, such as the one that added asynchronous preemption of goroutines in Go 1.14 or the currently ongoing work on a better linker. Go's handling of memory allocation and garbage collection has especially seen a steady stream of improvements, sometimes major ones, such as the set covered in Getting to Go: The Journey of Go's Garbage Collector.

(And back in 2015, there was the rewrite of the compiler to have a new SSA backend (also), which unlocked significant opportunities for additional optimizations since then.)

Generally, other languages have had some combination of having a lot longer to mature and extract all of the straightforward optimizations from their compilers, having a simpler runtime environment that doesn't need as much development effort, or having a lot of very smart people working on them. Java, Javascript, and the Microsoft .NET languages all have complex runtimes, but they also have a lot of resources poured into their implementations, which means that they often improve at a faster rate than Go does (and they all pretty much started earlier). C and C++ compilers generally have simpler runtime environments that need less work and have also had a lot longer to optimize their code generation. What C compilers can already do is pretty spooky, so it's not terribly surprising that the improvements now are mostly small incremental ones. It will likely be a long time before Go gets to that level, if it ever does (since there is a tradeoff between how fast you can compile and how much optimization you do, and Go values fast compile times).

GoRealImprovementsWhy written at 00:19:24

2020-04-27

I think you should generally be using the latest version of Go

At first I was going to write an entry about what versions of Go are packaged with various Linux distribution versions, FreeBSD, and so on, to go with my recent entry on Go on OpenBSD and the Go wiki's OpenBSD page, which directly lists the information for OpenBSD ports (OpenBSD 6.6 has Go 1.13.1). But the more I think about it, the less I think this should matter to you if you do a fair amount of work in Go or at least make significant use of Go programs that you regularly update and build from source yourself (even with 'go get'), as opposed to downloading pre-built binaries for things like Prometheus. Instead I think you should generally be using the latest released version of Go, even if you have to build it yourself, and mostly ignoring the version packaged by your Unix distribution.

(Building Go from source is not that hard, especially on well supported platforms.)

The obvious current reason to use a current version of Go is that Go is in the middle of a big switch to modules. Relatively recent versions of Go are needed to work well with modules, and right now module support is improving in every new version. If you're using Go modules (and you probably should be), you want an up to date Go for the best support. But eventually everyone's packaged Go versions will be good enough for decent module support and this reason will fade away.

The first general reason to use a current version of Go is that new releases genuinely improve performance and standard library features. If you use Go significantly, you probably care about both of these, although you may not want to immediately jump on new patterns of Go code that are enabled by new standard library features. And Go minor releases fix bugs (sometimes including security issues), so you probably want to be on the latest minor release of your Go major release.

(Mention of minor releases brings up support. Currently, Go officially supports only the two most recent major releases, per Go's Release Policy. Bugs and security issues in either the Go compiler itself or the standard library probably won't be fixed for unsupported versions. Unfortunately, security issues do happen.)

The second general reason is that other Go packages and modules that you may want to use may themselves use (and depend on) new features in the standard library. Go's standard library documentation marks which Go version every new thing was introduced in, but it's up to people to both pay attention and decide if they care. Especially, a third party package that's under active development may not preserve much compatibility with versions of Go that are no longer officially supported.

(If you care about your own code building on old Go versions, my view is that you should be actively checking this, for example by routinely building it with the oldest Go version that you care about. It's all too easy to absently start using a new standard library feature that's more recent than you want.)
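
One way to routinely do such a check is with the golang.org/dl wrapper commands (the specific version here is only an example):

go get golang.org/dl/go1.13
go1.13 download
go1.13 build ./...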

One implication of this is that the officially packaged version of Go shipped with long lived Linuxes, such as Ubuntu LTS, is likely going to get less and less useful to you over time. The most recent Go version available standard in Ubuntu 18.04 LTS is Go 1.10, for example; it's thoroughly out of support (and doesn't support modules), and probably any number of things no longer build with it.

GoVersionsMyView written at 22:47:42

2020-04-17

Some bits of grep usage are where I disagree with Shellcheck

I have come to very much like shellcheck and now automatically use it on any shell script I'm writing or revising (at least any scripts for work; personal ones I'm sometimes lazier with). Although some of its issues are a little bit picky, using it is good for me so I go along with them. Well, with one exception, because I cannot go along with Shellcheck's views on using egrep and fgrep.

If you've used Shellcheck on scripts, especially older ones, you have probably run into SC2196 and SC2197:

egrep is non-standard and deprecated. Use grep -E instead.
fgrep is non-standard and deprecated. Use grep -F instead.

Sorry, nope. Both egrep and fgrep are long standing programs and names for these versions of grep, and they will be present on any sane Unix until the end of time (and I don't mean 2038, the end of the 32-bit Unix epoch). I have a visceral and probably irrational reaction to the idea of using the POSIXism of 'grep -E' and 'grep -F' for them, and I have no intention of switching my scripts away from using fgrep and egrep.

(And grep already has enough flags to keep track of, including important ones for searching text files in GNU Grep.)

Fortunately modern versions of Shellcheck (including the one in Ubuntu 18.04) can disable these warnings for your entire script, as covered in the wiki's page on ignoring errors. Just put a line in the comment at the start of your script to turn them off:

#!/bin/sh
[...]
# Disable warnings about use of egrep and fgrep
# shellcheck disable=SC2196,SC2197

(I don't want to use a $HOME/.shellcheckrc for this, because I want our scripts to Shellcheck cleanly regardless of who runs Shellcheck.)

ShellcheckAndGrep written at 00:48:05

2020-03-22

Avoiding the 'dangling else' language problem with mandatory block markers

There is a famous parsing ambiguity in many programming languages known as the dangling else, where it's ambiguous which 'if' statement an 'else' is associated with. In C, for example, you can write:

if (a)
   if (b)
      res = 10;
   else
      res = 20;

Which 'if' the 'else' is associated with is somewhat ambiguous; is it the first or the second? (C provides an answer, as any language that allows this must.)

It's recently struck me that one way to avoid this in language design is to require some form of explicit block markers for the statements inside an if (and an else). In C terms, to disallow bare statements after if and else and always require '{' and '}'.

Go takes this approach with explicit block markers. In Go, an if requires a delimited block, not a bare statement, and else is similarly unambiguous. A lot of things in Go require blocks, in fact. Python also does this through its implicit blocks that are created through indentation; which if an else is associated with is both visually and syntactically unambiguous.

(Python allows writing simple 'if' clauses on one line, but not ones that involve nested ifs, and then you have to put the else on the next line.)
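
Here's the C example translated into Go (picking one of the two possible readings; the mandatory braces force the choice to be explicit):

if a {
    if b {
        res = 10
    } else {
        res = 20
    }
}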

Another Unix language that avoids this ambiguity is the Bourne shell. Although we don't necessarily think about it all that often, the 'then' and 'fi' in Bourne shell if expressions are block markers, and they're mandatory:

if [ -x /usr/bin/something ]; then
  ...
fi

Although you can write this all on a single line, you can't leave out the markers. This is actually a somewhat tricky case, because 'else' is a peer with these two markers; however, the result isn't ambiguous:

if [ -x /something ]; then
  if [ -x /otherthing ]; then
    ...
  else
    ....
  fi
fi

Since we haven't seen a 'fi', the 'else' has to be associated with the second 'if' instead of the first.

DanglingElseAndBlocks written at 02:35:23

2020-03-19

Sorting out Go's 'for ... = range ..' and when it copies things

I recently read Some tricks and tips for using for range in GoLang, where it said, somewhat in passing:

[...] As explained earlier, when the loop begins, it will copy the original array to a new one and loop through the elements, hence when appending elements to the original array, the copied array actually doesn't change.

My eyebrows went up because I'd forgotten this little bit of Go, and I promptly scuttled off to the official specification to read and understand the details. So here are some notes, because the issues behind this turn out to be more interesting than I expected.

Let's start with the basic form, which is 'for ... := range a { ... }'. The expression to the right of the range is called the range expression. The specification says (emphasis mine):

The range expression x is evaluated once before beginning the loop, with one exception: if at most one iteration variable is present and len(x) is constant, the range expression is not evaluated.

Obviously if the range expression is a function call, the function call must be made (once) and then the return value used in the range expression. However, in Go even evaluating an expression that's a single variable produces a copy of the value of that variable (in the abstract; in the concrete the compiler may optimize this out). So when you write 'for a, b := range c', Go (nominally) evaluates c and uses the resulting copy of c's current value.

(Among other consequences, this means that assigning a different value to c itself inside the loop doesn't change what the loop does; c's value is frozen at the start, when it's evaluated.)
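
A small illustration of this freezing (my own example):

c := []int{1, 2, 3}
for _, v := range c {
    c = []int{10, 20} // has no effect on the ongoing loop
    fmt.Println(v)    // prints 1, 2, 3
}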

As the additional bit of the specification explains, this doesn't happen if you use at most one iteration value and you're ranging over one of the small number of things where len(x) is a constant (the rules for this are somewhat legalistic). If you use two iteration variables, you always evaluate the range expression and make a copy, which is another reason for Go to prefer the single variable version (to go with nudging you to not copy actual values unless necessary).
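
For example, ranging over an array with a single iteration variable makes no copy at all, no matter how big the array is (again a sketch of mine):

var a [1 << 20]int
for i := range a { // len(a) is constant, so a is never evaluated (or copied)
    a[i] = i
}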

However, things get tricky if you use pointers. Here:

a := [5]int{1, 2, 3, 4, 5}
for _, v := range a {
    a[3] = 10
    fmt.Println("Pass 1:", v)
}
// reset our mutation
a[3] = 4
// loop via pointer:
b := &a
for _, v := range b {
    b[3] = 10
    fmt.Println("Pass 2:", v)
}

In the second loop, what gets copied when the range expression is evaluated is the pointer, not the array it points to (note that b is not a slice, it's a pointer to an array). Go's implicit dereferencing of pointers means that the code for the two loops looks exactly the same, although they behave differently (the first prints the original array values before the mutation in the loop, the second mutates 'a[3]' before printing it).

On the one hand, this may be confusing. On the other hand, this provides a way to effectively sidestep all sorts of range expression copying (if you don't want it); all you have to do is pointerize your range expression, and almost nothing will care about the difference. Fortunately often you don't care about the copying to begin with, because making copies of strings, slices, and maps doesn't require copying the underlying data. The only thing that you can range over that's expensive to copy is an actual array, and directly using actual arrays in Go is relatively rare (especially when using real arrays can cause interesting errors).

If you do a 'copying' range over anything other than a real array (which is copied) or a string (which is immutable), you can still mutate the values from what you're ranging over in your range loop in a way that future iterations of your range loop will or at least may see. Probably you don't want to do this.
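
For instance (my own small example):

s := []int{1, 2, 3}
for i, v := range s {
    if i == 0 {
        s[2] = 30 // a later iteration will see this
    }
    fmt.Println(v) // prints 1, 2, 30
}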

(This is the consequence of ranging over slices and maps not making a copy of the underlying data. Because your range copies the slice itself, shrinking or enlarging the original slice won't change the number of iterations. You can potentially change the number of iterations of a map inside of the loop, though.)

Probably I don't need to care about this range copying, at least from an efficiency perspective (I had better remember its other consequences). My Go code (and Go in general) only very rarely uses fixed size arrays, which are the only expensive thing to copy. Copying slices and maps is pretty close to free, and those are usually what I range over (apart from channels, which I consider a special case).

GoRangeCopying written at 00:55:56

2020-03-02

More or less what versions of Go support what OpenBSD releases (as of March 2020)

In light of recent entries, such as the situation with Go on OpenBSD, I've decided to write down what I can find out about what versions of Go theoretically work on what releases of OpenBSD. This is not actually tested for various reasons including the lack of suitable OpenBSD machines (for example, ones that are not in production use as important firewalls).

The Go people have a wiki page on OpenBSD that lists both the Go versions in OpenBSD's ports tree and the built from source versions that are supported on various OpenBSD releases. Additional information comes from the Go release notes for major versions; the release notes are potentially more accurate than the wiki, especially for old OpenBSD releases.

Going through both sources, I believe it breaks down this way for 32 and 64-bit x86:

  • OpenBSD 5.0 through 5.4 is supported by Go 1.1 and Go 1.2 only (the wiki seems to say it's supported by Go 1.0, but the Go 1.1 release notes specifically mention that OpenBSD support is new in it).

  • OpenBSD 5.5 requires at least Go 1.3 and last supports Go 1.6 or Go 1.7, depending on who you believe. The wiki says Go 1.7, but the Go 1.7 release notes say that it requires OpenBSD 5.6 or better because it uses the then new getentropy(2) system call.

  • OpenBSD 5.6 through 6.0 requires at least Go 1.4.1 due to at least issue #9102. You might be able to use Go 1.3 if you don't use net.Interface, but I'm not sure. OpenBSD 5.8 is last supported in Go 1.7, OpenBSD 5.9 is last supported in Go 1.8, and OpenBSD 6.0 is last supported in Go 1.10.

    OpenBSD 5.6 is the oldest version currently listed as having a version of Go in its ports tree.

  • OpenBSD 5.9 is the oldest release supported by Go 1.8, from the Go 1.8 release notes; this is apparently due to using new syscalls that OpenBSD was switching to.

  • OpenBSD 6.0 is the oldest release supported by Go 1.9, at least for cgo binaries, per the release notes. It's possible that non-cgo binaries from Go 1.9 would run on OpenBSD 5.9, but they won't run on anything before that due to those syscall changes in Go 1.8.

  • OpenBSD 6.1 requires at least Go 1.8 and is last supported by Go 1.10.

    Since Go 1.4 is the last version of Go that can be built without an existing Go compiler and it doesn't run on 6.1, building Go on OpenBSD 6.1 (and later) requires either getting an initial version of Go from the OpenBSD ports tree (if you can still find an operable ports archive for your old OpenBSD release) or bootstrapping from cross-compiled source.

  • OpenBSD 6.2 and 6.3 are listed as requiring at least Go 1.9 in the wiki, but I can't find anything in the release notes about why. The wiki claims that 6.2 and onward are still supported in Go 1.14 (the current Go version as I write this).

    OpenBSD 6.2 is also the oldest release supported by Go 1.11, per the release notes.

  • OpenBSD 6.4 (and later) requires at least Go 1.11, due to a new OpenBSD kernel thing covered in CL 121657.

OpenBSD 6.5 and 6.6 have no additional Go version restrictions; they're supported by 1.11 onward through Go 1.14 (currently). These are the two releases that are currently supported by OpenBSD (as mentioned in the depths of the FAQ). In a few months, OpenBSD will likely release OpenBSD 6.7, at which point 6.7 and 6.6 will become the two supported releases and Go will only officially support them.

Generally it looks like if you can stick with OpenBSD 6.2 or better, you're theoretically fine for Go support and can use the latest Go version (and thus build programs that require it, are module based, and so on). OpenBSD 6.2 dates from October of 2017, so many people will be covered by this. A future version of Go may stop working on older OpenBSD versions (the next release of Go will likely only officially support OpenBSD 6.7 and 6.6), but so far there seems to be nothing that would force this.

We're an unusual case in our range of OpenBSD releases. Currently we have machines running OpenBSD 5.1, 5.3, 5.4, 5.8, 6.2, 6.4, and 6.6. If we tried to use Go on all of them, we would have to restrict ourselves to Go features from Go 1.2 and also maintain multiple Go versions across the various range of machines (and multiple compiled binaries for Go programs).

GoWhatOpenBSDs-2020-03 written at 23:52:30

2020-03-01

The situation with Go on OpenBSD

Over in the fediverse, Pete Zaitcev had a reaction to my entry on OpenBSD versus Prometheus for us:

Ouch. @cks himself isn't making that claim, but it seems to me that anything written in Go is unusable on OpenBSD.

I don't think the situation is usually that bad. Our situation with Prometheus is basically a worst case scenario for Go on OpenBSD, and most people will have much better results, especially if you stick to supported OpenBSD versions.

If you stick to supported OpenBSD versions, upgrading your machines as older OpenBSD releases fall out of support (as the OpenBSD people want you to do), you should not have any problems with your own Go programs. The latest Go release will support the currently supported OpenBSD versions (as long as OpenBSD remains a supported platform for Go), and the Go 1.0 compatibility guarantee means that you can always rebuild your current Go programs with newer versions of Go. You might have problems with compiled binaries that you don't want to rebuild, but my understanding is that this is the case for OpenBSD in general; it doesn't guarantee a stable ABI even for C programs (cf). If you use OpenBSD, you have to be prepared to rebuild your code after OpenBSD upgrades regardless of what language it's written in.

(You may have to change the code too, since OpenBSD doesn't guarantee API compatibility across versions either. But the API is generally more stable than the ABI.)

If you have older, out of support OpenBSD versions and Go programs that you keep developing, you're fine as long as you make your Go code work with the oldest Go release that's required for your oldest OpenBSD. This mostly means avoiding new additions to the Go standard library, although sometimes performance improvements make new patterns in Go code more feasible and you'll have to not rely on them. You'll probably have to freeze and maintain your own binary copies of old Go releases for appropriate old OpenBSD versions. Go's currently ongoing switch from its old way of just fetching packages to modules may cause you some heartburn, but you can probably deal with this by vendoring everything.
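
The vendoring itself is mechanical with a modern Go; a sketch:

go mod vendor          # copies all module dependencies into ./vendor
go build -mod=vendor ./...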

(If you have Go programs that you don't keep developing, life is even easier because you can freeze your pre-built binaries for older OpenBSD versions as well. All you need to do is periodically build the latest Go for the latest OpenBSD and then build your programs with it.)

If you have older OpenBSD releases as well as current ones, and a Go program where you want to use the latest Go features (or where you want to keep up with dependencies and those dependencies do), you can still do this provided that you don't need to run new versions of your program on old OpenBSDs. If they're fine with the version they have, even if it's not as efficient or as feature-full as the latest one, then you can just freeze their pre-built binaries and only run the latest version of your program on the latest OpenBSDs with the latest Go.

And this gets us down to Prometheus, where we have the worst case scenario; you need to run the current version of your program on all OpenBSD versions and the current version uses features from very recent versions of Go, which only run on a few OpenBSD versions. This doesn't work, as discussed.

Sidebar: Cgo and OpenBSD

In general, avoiding cgo is likely to extend the range of OpenBSD versions that a Go program can run on, even a Go program that's built with the very latest version of Go. Although older versions of OpenBSD aren't officially supported by Go 1.14, it can cross-build a pure Go program that seems to work as far back as OpenBSD 5.8 (on 64-bit x86), although it fails on OpenBSD 5.4 with a 'bad system call' core dump. Using cgo gives you two problems; you can't cross-build, and you're likely to be much more restricted in what range of OpenBSD versions the resulting binaries will run on.

(Being able to cross-build from Linux makes it much easier to set up older Go versions to build binaries for old OpenBSD versions, since old Go versions will generally build fine even on new Linuxes.)
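
As a concrete sketch, a pure Go cross-build from Linux looks like this (explicitly disabling cgo to be safe):

CGO_ENABLED=0 GOOS=openbsd GOARCH=amd64 go build ./...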

GoOpenBSDSituation written at 22:34:00

2020-02-28

One reason for Go to prefer providing indexes in for ... range loops

I was recently reading Nick Cameron's Early Impressions of Go from a Rust Programmer (via). One of the things Cameron did not like about Go was, well, let me quote directly from the article:

  • for ... range returns a pair of index/value. Getting just the index is easy (just ignore the value) but getting just the value requires being explicit. This seems back-to-front to me since I need the value and not the index in most cases.

Implicitly, this is about ranging over slices and arrays, rather than strings (for which indexing is a bit different) or maps.

I can't know the reasons behind this decision of Go's, but I think that one good reason for this approach is to encourage people towards using the value in place by accessing it through an index into the array or slice. The potential problem with using the value directly from the range statement is that when you use the value from a for ... range over a slice or array, you're actually using a copy of it. When you write:

for i, v := range aslice {
    fmt.Printf("%p %p\n", &v, &aslice[i])
}

The two pointer values are not the same (and &v is the same pointer value on every iteration). The for loop variable v is a copy of aslice[i], not the same as it. Sometimes this copy will be basically free, for instance if you're ranging over a slice of something small. But if you're ranging over a slice of decent sized structures or something else that's big, the copy is not free at all and you may well want to use them in place in aslice.

(In some situations making a copy is not even correct, but you're not likely to encounter those with slices or arrays. For instance, you're probably not going to have an array of sync.Mutexes, or structures containing them. More likely you'd have an array of pointers to them, and copying pointers to mutexes (and structures containing them) is perfectly valid.)

By returning the index first (and making it easy to get the index without the value), Go quietly nudges you toward using the index to directly access the value, rather than forcing it to make a copy of the value for you. Go only has to materialize a copy of the value if you go out of your way to ask for it, and it hopes that you won't. At the very least, this nudge may get you to think about it.
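
A sketch of the pattern this nudges you toward (the record type and the function are my own illustration):

type record struct {
    data [4096]byte
}

// sumFirst adds up the first byte of each record, using each
// element in place instead of copying 4 KB per iteration.
func sumFirst(records []record) int {
    total := 0
    for i := range records {
        total += int(records[i].data[0])
    }
    return total
}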

(I also feel that it fits in with Go's general philosophy and approach. The natural starting point of a for ... range over an array or a slice is an index, so that's what you get to start with; making a copy of the value is an extra step that the compiler has to go out of its way to do, so it's not natural to have it the first thing produced.)

GoForRangeNudging written at 23:46:00

2020-02-12

Some git aliases that I use

As a system administrator, I primarily use git not to develop my own local changes but to keep track of what's going on in projects that we (or I) use or care about, and to pull their changes into local repositories. This has caused me to put together a set of git aliases that are probably somewhat different than what programmers wind up with. For much the same reason that I periodically inventory my Firefox addons, I'm writing down my common aliases here today.

All of these are presented in the form that they would be in the '[alias]' section of .gitconfig.

  • source = remote get-url origin

    I don't always remember the URL of the upstream of my local tracking repository for something, and often I wind up wanting to go there to do things like look at issues, releases, or whatever.

  • plog = log @{1}..

    My most common git operation is to pull changes from upstream and look at what they are. This alias uses the reflog to theoretically show me the log for what was just pulled, which should be from the last reflog position to now.

    (I'm not confident that this always does the right thing, so often I just cut and paste the commit IDs that are printed by 'git pull'. It's a little bit more work but I trust my understanding more.)

  • slog = log --pretty=slog

    Normally if I'm reading a repo's log at all, I read the full log. But there are some repos where this isn't really useful and some situations where I just want a quick overview, so I only look at the short log. This goes along with the following in .gitconfig to actually define the 'slog' format:

    [pretty]
        slog = format:* %s
    

  • pslog = log --pretty=slog @{1}..

    This is a combination of 'git plog' and 'git slog'; it shows me the log for the pull (theoretically) in short log form.

  • ffpull = pull --ff-only
    ffmerge = merge --ff-only

    These are two aliases I use if I don't entirely trust the upstream repo to not have rebased itself on me. The ffpull alias pulls with only a fast-forward operation allowed (the equivalent of setting 'git config pull.ff only', which I don't always remember to do), while ffmerge is what I use in worktrees.

(Probably I should set up a git alias or something that configures a newly cloned repo with all of the settings that I want. So far I think that's only 'pull.ff only', but there will probably be more in the future.)
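
Such an alias could start out as simple as this (the name is made up):

setup = config pull.ff only

Then running 'git setup' in a newly cloned repo would set 'pull.ff only' for it.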

I have other git aliases but in practice I mostly don't remember them (for instance, I have an 'idiff' alias for 'git diff --cached').

GitAliasesIUse written at 00:52:07
