Wandering Thoughts archives

2015-01-26

Some notes on keeping up with Go packages and commands

Bearing in mind that just go get'ing things is a bad way to remember what packages you're interested in, it can be useful to keep an eye on updates to Go packages and commands. My primary tool for this is Dmitri Shuralyov's Go-Package-Store (G-P-S for short), which shows you not only which things in $GOPATH/src have updates but also what those updates are. However, there are a few usage notes that I've accumulated.

The first and most important thing to know about Go-Package-Store, and something that I only realized recently myself (oh the embarrassment), is that Go-Package-Store does not rebuild packages or commands. All it does is download new versions (including fetching and updating their dependencies). You can see this in the commands it runs if you pay attention, since it specifically runs 'go get -u -d'. This decision is sensible and basically necessary, since many commands and (sub) packages aren't installed with 'go get <repo top level>', but it does mean that you're going to have to rebuild things yourself when you want updated binaries.
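To make the division of labor concrete, here is a minimal sketch (the import path is a made-up placeholder, not a real command):

    # what Go-Package-Store effectively runs for you: download only,
    # fetching the new version and its dependencies but building nothing
    go get -u -d github.com/example/somecmd

    # what you still have to run yourself afterwards to actually
    # rebuild and reinstall the binary into $GOPATH/bin
    go get github.com/example/somecmd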

So, the first thing this implies is that you need to keep track of the go get command that rebuilds each command in $GOPATH/bin that you care about; otherwise, sooner or later you'll be staring at a program in $GOPATH/bin and resorting to web searches to find out what repository it came from and how it's built. I suggest putting this information in a simple shell script that does a mass rebuild, with one 'go get' per line; when I want to rebuild just a specific command, I cut and paste its line.
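As an illustration, such a mass rebuild script can be as simple as the following sketch (the github.com/example paths are made-up placeholders; goimports is a real command that I'm just using as an example):

    #!/bin/sh
    # Rebuild all of the Go commands I care about, one 'go get' per line
    # so that individual lines can be cut and pasted by hand.
    go get github.com/example/user/somecmd
    go get github.com/example/other/anothertool
    go get golang.org/x/tools/cmd/goimports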

(Really keen people will turn the text file into a script so that you can do things like 'rebuild <command>' to run the right 'go get' to rebuild the given command.)
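(A hedged sketch of what such a 'rebuild' script might look like; the command names and import paths in its table are hypothetical:)

    #!/bin/sh
    # usage: rebuild CMD [CMD ...]
    # Maps a command name to the 'go get' that rebuilds it.
    for cmd in "$@"; do
        case "$cmd" in
        somecmd)   go get github.com/example/user/somecmd ;;
        goimports) go get golang.org/x/tools/cmd/goimports ;;
        *)         echo "rebuild: don't know how to rebuild $cmd" 1>&2 ;;
        esac
    done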

The next potentially tricky area is dependent packages, in several ways. The obvious thing is that having G-P-S update a dependent package doesn't in any way tell you that you should rebuild the command that uses it; in fact G-P-S doesn't particularly know what uses what package. The easy but brute force way to deal with this is just to rebuild all commands every so often (well, run 'go get -u' against them; I'm not sure how much Make-like dependency checking it does).

The next issue is package growth. What I've noticed over time is that using G-P-S winds up with me having extra packages that aren't needed by the commands (and packages) that I have installed. As a result I both pay attention to what packages G-P-S is presenting updates for and periodically look through $GOPATH/src for packages that make me go 'huh?'. Out of place packages get deleted instead of updated, on the grounds that if they're actual dependencies of something I care about they'll get re-fetched when I rebuild commands.
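One way to spot such stray packages (a sketch, assuming the commands you care about are the ones from your rebuild script; the import paths are again placeholders) is to ask 'go list' for the full dependency set of those commands and compare it with everything checked out under $GOPATH/src:

    # everything my commands need: the commands themselves plus their deps
    { go list github.com/example/user/somecmd golang.org/x/tools/cmd/goimports
      go list -f '{{join .Deps "\n"}}' github.com/example/user/somecmd \
          golang.org/x/tools/cmd/goimports
    } | sort -u >/tmp/wanted

    # everything currently sitting in $GOPATH/src
    (cd "$GOPATH/src" && go list ./... | sort -u) >/tmp/have

    # packages present but not needed by anything I care about
    comm -13 /tmp/wanted /tmp/have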

(I also delete $GOPATH/pkg/* every so often. One reason that all of this rebuilding doesn't bother me very much is that I track the development version of Go itself, so I actively want to periodically rebuild everything with the latest compiler. People with big code bases and stable compilers may not be so sanguine about routinely deleting compiled packages and so on.)

I think that an explicit 'go get -u' of commands and packages that you care about will reliably rebuild dependent packages that have been updated but not (re)built in the past by Go-Package-Store, but I admit that I sometimes resort to brute force (ie deleting $GOPATH/pkg/*) just to be sure. Go things build very fast and I'm not building big things, so my attitude is 'why not?'.
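Spelled out, the brute force version is just the following sketch (where 'rebuild-go-commands' stands in for the mass rebuild script mentioned earlier; the name is made up):

    # throw away every compiled package so that nothing stale survives
    rm -rf "$GOPATH"/pkg/*
    # then rebuild all commands from source with the mass rebuild script
    sh ~/bin/rebuild-go-commands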

Sidebar: Where I think the extra packages come from

This is only a theory. I haven't tested it directly; it's just the only cause I can think of.

Suppose you have a command that imports a sub-package from a repository. When you 'go get' the command, I believe that Go only fetches the further imported dependencies of the sub-package itself. Now, later on Go-Package-Store comes along, reports that the repository is out of date, and when you tell it to update things it does a 'go get' on the entire repository (not just the sub-package initially used by the command). This full-repo 'go get' presumably fetches either all dependencies used in the repository or all dependencies of the code in the top level of the repository (I'm not sure which), which may well add extra dependencies over what the sub-package needed.
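To illustrate the theory in command terms (this is purely illustrative; the repository layout is made up and I don't know exactly which form of 'go get' Go-Package-Store runs here):

    # initially: only the sub-package and its own dependencies get fetched
    go get github.com/example/bigrepo/cmd/sometool

    # later, an update of the whole repository pulls in dependencies of
    # everything in it, not just those of cmd/sometool
    go get -u -d github.com/example/bigrepo/...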

(The other possible cause is shifting dependencies in packages that I use directly, but some stray packages are so persistent in their periodic returns that I don't really believe that.)

GoPackagesKeepingUp written at 01:53:43

2015-01-16

Node.js is not for me (and why)

I've been aware of and occasionally poking at node.js for a fairly long time now, and periodically I've considered writing something in it; I also follow a number of people on Twitter who are deeply involved with and passionate about node.js and the whole non-browser Javascript community. But I've never actually done anything with node.js and more or less ever since I got on Twitter and started following those node enthusiasts I've been feeling increasingly like I never would. Recently all of this has coalesced and now I think I can write down why node is not for me.

(These days there is also io.js, which is a compatible fork split off from node.js for reasons both technical and political.)

Node is fast server-side JavaScript in an asynchronous, event-based environment that uses callbacks for most event handling; a highly vibrant community and package ecosystem has coalesced around it. It's probably the fastest dynamic language environment you can run on servers.

My disengagement with node is because none of those things appeal to me at all. While I accept that JavaScript is an okay language, it doesn't appeal to me and I have no urge to write code in it, however fast it might be on the server once everything has started. As for the rest, I think that asynchronous event-based programming that requires widespread use of callbacks is actively the wrong programming model for dealing with concurrency, as it forces more or less explicit complexity on the programmer instead of handling it for you. A model of concurrency like Go's channels and goroutines is much easier to write code for, at least for me, and is certainly less irritating (even though the channel model has limits).

(I also think that a model with explicit concurrency is going to scale to a multi-core environment much better. If you promise 'this is pure async, two things never happen at once', you're committed to a single thread of control, and that means only using a single core unless your language environment can determine that two chunks of code don't interact with each other and so can't tell whether they're running at the same time.)

As for the package availability, well, it's basically irrelevant given that the core doesn't appeal to me. You'd need a really amazingly compelling package to get me to adopt a programming environment I otherwise don't want to use.

Now that I've realized all of this I'm going to do my best to let go of any lingering semi-guilty feelings that I should pay attention to node and maybe play around with it and so on, just because it's such a big presence in the language ecosystem at the moment (and because people whose opinions I respect love it). The world is a big place and we don't have to all agree with each other, even about programming things.

PS: None of this means that node.js is bad. Lots of people like JavaScript (or at least have a neutral 'just another language' attitude about it), and I understand that there are programming models for node.js that somewhat tame the tangle of event callbacks and so on. As mentioned, it's just not for me.

NodeNotForMe written at 23:06:08

