Some things that make shell scripts have performance issues

April 24, 2022

Yesterday I mentioned that one version of my shell script that probably shouldn't be a shell script had performance problems. Both versions of this script actually make a useful illustration of some things that make shell scripts slow and, along the way, some of the things you may have to do to make them fast.

One thing that does not make shell scripts slow is the basic Unix commands themselves that you use in shell scripts. Those Unix commands generally perform pretty well, and their processing speed is probably close to the fastest you could get if you wrote what they're doing in your language of choice. Your program is unlikely to improve on the sorting performance of sort, the text transformation performance of sed, and so on. And the shell itself generally performs internal things more than fast enough for most cases. Instead, what causes shell scripts problems is the cost of starting separate programs. Sed may transform text very fast and sort may sort data very fast, but starting sed or sort is comparatively expensive. The more times you start programs and the more programs you have to start for each thing you want to do, the slower your shell script will run.
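As a minimal sketch of this, compare starting sed once per line with starting it once for the whole stream. The transformation is identical either way; the first version just pays a process startup cost for every single line:

```shell
#!/bin/sh
# Slow: a new sed process is started for every line of input.
slow=$(seq 200 | while read -r n; do
    echo "$n" | sed 's/^/line /'
done)

# Fast: sed starts exactly once and all the lines stream through it.
fast=$(seq 200 | sed 's/^/line /')

# The output is identical; only the number of startups differs.
[ "$slow" = "$fast" ] && echo same    # prints "same"
```

On any real amount of input, the difference between 200 sed startups and one dominates whatever time sed itself spends transforming text.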

(Programming languages that do everything internally may do each individual thing slower, but they don't pay the costs of starting however many external programs that your shell script needs.)

This causes two performance issues for shell scripts. The first, obvious issue arises when you have to string together a whole sequence of Unix commands to get some result that would be straightforward in another programming language with better text manipulation, more ability to read files, and so on. In turn, this pushes you to write shell scripts that use convoluted means simply to keep down the number of programs being started. These convoluted means are faster than the straightforward approach but make your script less readable. Shells try to deal with this by making more commands built in and by adding things like (integer) arithmetic so that you don't have to run external programs for common operations.
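A small sketch of this, with a made-up path: older scripts often ran external programs like expr and basename for arithmetic and string surgery, while a modern POSIX shell can do both internally with no process startups at all:

```shell
#!/bin/sh
path=/var/log/app/report.txt

# External programs: each of these lines starts a separate process.
i=$(expr 1 + 1)
base=$(basename "$path" .txt)

# Shell builtins: the same results with no process startups.
j=$((1 + 1))
b=${path##*/}    # strip the directory part
b=${b%.txt}      # strip the suffix

echo "$i $j $base $b"    # prints "2 2 report report"
```

Parameter expansion is noticeably less readable than basename, which is exactly the readability trade-off being described here.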

The second issue is that if your shell script deals with multiple things (for example, multiple entries in a Linux cgroup hierarchy), it's increasingly expensive to process them one by one because you repeatedly pay the program startup cost. The less you can do on a per-item basis, the better your script will perform, especially as the number of items grows. This leads to restructuring your script to try to do as much 'stream' processing as possible, even if this results in a peculiar program structure and often peculiar intermediate steps; alternatively, you can rewrite things in a more awkward way that maximizes your use of shell builtins (where you don't pay a per-program cost). In a language without this per-item penalty, a program written in the natural style of processing an item at a time will still perform well.
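A sketch of the difference, using made-up files of numbers as stand-ins for real cgroup entries: the per-item version starts one cat per file, so its startup costs grow with the number of items, while the stream version hands every file to a single awk process:

```shell
#!/bin/sh
# Create some sample per-item files (stand-ins for cgroup entries).
d=$(mktemp -d)
echo 10 >"$d/a"
echo 32 >"$d/b"

# Per-item: one cat started per file; cost grows with the item count.
total=0
for f in "$d"/*; do
    total=$((total + $(cat "$f")))
done

# Stream: a single awk process reads every file and sums internally.
stotal=$(awk '{ t += $1 } END { print t }' "$d"/*)

echo "$total $stotal"    # prints "42 42"
rm -rf "$d"
```

With two files the difference is invisible; with thousands of cgroup entries, the per-item loop pays thousands of startups while the stream version still pays one.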

Related to this, you can easily write a shell script that appears to perform well enough in your test environment but has clear problems when run for real in environments with significantly more items. It's not necessarily obvious how many per-item programs are too many (or how many items your real environment will have), which makes this a hard issue to prevent in advance. Do you go out of your way to make your program do complex stream processing, possibly with no need in the end, or do you write the straightforward version now only to perhaps throw it away later? There's no good answer.

PS: One traditional way to deal with this in shell scripts is to lean on some helper program that can swallow as much of the work as possible into itself. Awk is one common option chosen for this.
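As a hypothetical illustration (the log file and its fields are made up), a count-by-field job that would otherwise string together grep, cut, sort, and uniq can be swallowed whole by one awk process:

```shell
#!/bin/sh
f=$(mktemp)
printf 'host1 ERROR disk\nhost2 ERROR disk\nhost1 OK net\n' >"$f"

# Four processes strung together:
grep ERROR "$f" | cut -d' ' -f3 | sort | uniq -c

# One awk process doing all of the above internally:
awk '/ERROR/ { count[$3]++ } END { for (k in count) print count[k], k }' "$f"

rm -f "$f"
```

Both report that 'disk' appeared twice in ERROR lines; the awk version pays one startup instead of four, and the gap widens as the job gets more elaborate.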

PPS: I don't like to admit it because I don't really like Perl, but this is one area where Perl feels like a pretty natural fit, partly because a lot of its basic operations are fairly close to the sort of manipulations you'd do in a shell script. My 'memdu' scripts might well look pretty much like their current state in Perl, just with better structure and performance, and I suspect that the transformation wouldn't be too hard if I hadn't forgotten all of the Perl I once knew.
