2015-01-16
Node.js is not for me (and why)
I've been aware of node.js and occasionally poked at it for a fairly long time now, and periodically I've considered writing something in it; I also follow a number of people on Twitter who are deeply involved with and passionate about node.js and the whole non-browser JavaScript community. But I've never actually done anything with node.js, and more or less ever since I got on Twitter and started following those node enthusiasts I've been feeling increasingly like I never would. Recently all of this has coalesced, and now I think I can write down why node is not for me.
(These days there is also io.js, which is a compatible fork split off from node.js for reasons both technical and political.)
Node is fast server-side JavaScript in an asynchronous, event-based environment that uses callbacks for most event handling; a highly vibrant community and package ecosystem has coalesced around it. It's probably the fastest dynamic language environment you can run on servers.
My disengagement with node is because none of those appeal to me at all. While I accept that JavaScript is an okay language it doesn't appeal to me and I have no urge to write code in it, however fast it might be on the server once everything has started. As for the rest, I think that asynchronous event-based programming that requires widespread use of callbacks is actively the wrong programming model for dealing with concurrency, as it forces more or less explicit complexity on the programmer instead of handling it for you. A model of concurrency like Go's channels and coroutines is much easier to write code for, at least for me, and is certainly less irritating (even though the channel model has limits).
(I also think that a model with explicit concurrency is going to scale to a multi-core environment much better. If you promise 'this is pure async, two things never happen at once' you're now committed to a single thread of control model, and that means only using a single core unless your language environment can determine that two chunks of code don't interact with each other and so can't tell if they're running at the same time.)
As for the package availability, well, it's basically irrelevant given the lack of the appeal of the core. You'd need a really amazingly compelling package to get me to use a programming environment that doesn't appeal to me.
Now that I've realized all of this I'm going to do my best to let go of any lingering semi-guilty feelings that I should pay attention to node and maybe play around with it and so on, just because it's such a big presence in the language ecosystem at the moment (and because people whose opinions I respect love it). The world is a big place and we don't have to all agree with each other, even about programming things.
PS: None of this means that node.js is bad. Lots of people like JavaScript (or at least have a neutral 'just another language' attitude towards it), and I understand that there are programming models for node.js that somewhat tame the tangle of event callbacks and so on. As mentioned, it's just not for me.
Using systemd-run to limit something's RAM consumption on the fly
A year ago I wrote about using cgroups to limit something's RAM consumption, for limiting the resources that make'ing Firefox could use when I did it. At the time my approach with an explicitly configured cgroup and the direct use of cgexec was the only way to do it on my machines; although systemd has facilities to do this in general, my version could not do this for ad hoc user-run programs. Well, I've upgraded to Fedora 21 and that's now changed, so here's a quick guide to doing it the systemd way.
The core command is systemd-run, which we use to start a command with various limits set. The basic command is:
systemd-run --user --scope -p LIM1=VAL1 -p LIM2=VAL2 [...] CMD ARG [...]
The --user makes things run as ourselves with no special privileges, and is necessary to get things to run. The --scope basically means 'run this as a subcommand', although systemd considers it a named object while it's running. Systemd-run will make up a name for it (and report the name when it starts your command), or you can use --unit NAME to give it your own name.
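For example, to pick the name yourself you might run something like this (here 'confined-make' is just a name I made up; systemd should report it as confined-make.scope):
systemd-run --user --scope --unit confined-make make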
The limits you can set are covered in the systemd.resource-control manpage. Since systemd is just using cgroups, the limits you can set up are just the cgroup limits (and the documentation will tell you exactly what the mapping is, if you need it). Conveniently, systemd-run allows you to specify memory limits in GB (or MB), not just bytes. The specific limits I set up in the original entry give us a final command of:
systemd-run --user --scope -p MemoryLimit=3G -p CPUShares=512 -p BlockIOWeight=500 make
(Here I'm once again running make as my example command.)
You can inspect the parameters of your new scope with 'systemctl show --user <scope>', and change them on the fly with 'systemctl set-property --user <scope> LIM=VAL'. I'll leave potential uses of this up to your imagination. systemd-cgls can be used to show all of the scopes and find any particular one that's running this way (and show its processes).
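For example, suppose systemd-run reported your scope's name as run-1234.scope (the actual name will be different every time); then you could look at its settings and, say, raise its memory limit on the fly with:
systemctl show --user run-1234.scope
systemctl set-property --user run-1234.scope MemoryLimit=4G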
(It would be nice if systemd-cgtop gave you a nice rundown of what resources were getting used by your confined scope, but as far as I can tell it doesn't. Maybe I'm missing a magic trick here.)
Now, there's a subtle semantic difference between what we're doing here and what I did in the original entry. With cgexec, everything that ran in our confine cgroup shared the same limit even if they were started completely separately. With systemd-run, separately started commands have separate limits; if you start two makes in parallel, each of them can use 3 GB of RAM. I'm not sure yet how you fix this in the official systemd way, but I think it involves defining a slice and then attaching our scopes to it.
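As an untested sketch of what I think that would look like (all of the names here are made up): put a confine.slice unit file in ~/.config/systemd/user/ containing something like

[Slice]
MemoryLimit=3G
CPUShares=512
BlockIOWeight=500

and then start each command attached to it, possibly after a 'systemctl --user daemon-reload':
systemd-run --user --scope --slice=confine.slice make
I haven't verified that this actually gives all such scopes one shared limit.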
(On the other hand, this separation of limits for separate commands may be something you consider a feature.)
Sidebar: systemd-run versus cgexec et al
In Fedora 20 and Fedora 21, cgexec works okay for me but I found that systemd would periodically clear out my custom confine cgroup and I'd have to do 'systemctl restart cgconfig' to recreate it (generally anything that caused systemd to reload itself would do this, including yum package updates that poked systemd). Now that the Fedora 21 version of systemd-run supports -p, using it and doing things the systemd way is just easier.
(I wrap the entire invocation up in a script, of course.)