Status reporting commands should have script-oriented output too

November 2, 2015

There are a lot of status reporting programs out there on a typical system; they report on packages, on filesystems, on the status of ZFS pools or Linux software RAID or LVM, on boot environments, on all sorts of things. I've written before about these programs as tools or frontends, where I advocated for writing tools, but it's clear that battle is long since lost; almost no one writes programs that are tools instead of frontends. So today I have a more modest request: status reporting programs should have script oriented output as well as human oriented output.

The obvious reason is that this makes it easier for sysadmins to build scripts on top of your programs. Sysadmins do want to do this, especially these days when automation is increasingly important, and parsing your regular human-oriented output is both more difficult and more error-prone. Such script oriented output doesn't have to be very elaborate, either; it just has to be clear and easy to deal with in a script.
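As a concrete sketch of how little is needed, suppose a status command grew a script-oriented mode that emitted one 'key value' pair per line. The command name, flag, and fields below are all invented for illustration; the output is simulated with a here-document so the sketch is self-contained.

```shell
#!/bin/sh
# Simulate the output of a hypothetical 'poolstatus --parseable' mode:
# one record per line, first field is the key, second is the value.
poolstatus_parseable() {
    cat <<'EOF'
pool tank
state ONLINE
errors 0
EOF
}

# A script can then pull out a single field with plain awk, with no
# worries about column alignment or decorative human-oriented text:
state=$(poolstatus_parseable | awk '$1 == "state" { print $2 }')
echo "$state"
```

Nothing elaborate: fixed field order and one record per line is enough for awk, grep, and `read` to do the rest.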

But there's a less obvious reason to have script oriented output; it's much easier to make script oriented output stable (either de facto or explicitly documented as such). The thing about human oriented output is that it's quite prone to changing its format as additional information gets added and people rethink what the nicest presentation of information is. And it's hard to argue against better, more informative, more readable output (in fact I don't think one should). But changed output is death on things that try to parse that output; scripts really want and need stable output, and will often break if they're parsing your human oriented output and you change it. When you explicitly split human oriented output from script oriented output, you can provide both the stability that scripts need and the changes that improve what people see. This is a win for both parties.

(As a side effect it may make it easier to change the human oriented output, because there shouldn't be many worries about scripts consuming it too. Assuming that you worried about that in the first place.)

(This is the elaborated version of a tweet and the resulting conversation with Dan McDonald.)

Comments on this page:

By Anon at 2015-11-02 06:49:41:

Sounds like you're asking for something like support for FreeBSD's libucl.

By cks at 2015-11-02 11:03:31:

UCL seems to be solving a different problem, that of having a common way of configuring a lot of things. Here I'm talking about the output of status reporting commands, especially ones that report on dynamic things from a database, the live system state, and so on.

By cks at 2015-11-02 15:08:22:

That's close to what I'm looking for but not quite it, since JSON and XML are kind of hard to consume in shell scripts. Maybe they could add some sort of simple 'key: value' plaintext output as well, since the infrastructure is basically all there for it.
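A minimal sketch of that 'key: value' plaintext form, assuming invented command and field names; the point is that plain sed or `read` can consume it with no JSON or XML parser in sight:

```shell
#!/bin/sh
# Simulate a status command that emits a simple 'key: value' record,
# one field per line. The command and its fields are made up here.
status_output() {
    cat <<'EOF'
device: md0
state: active
disks: 4
EOF
}

# Extracting one field is a one-liner in any shell script:
state=$(status_output | sed -n 's/^state: //p')
echo "$state"
```

This is about as cheap as machine-readable output gets, which is why it's a shame when the infrastructure for it exists but only human-oriented text is offered.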

(Looking at their examples, the current 'text' output is clearly intended to be human readable instead of script-digestable.)

By Pat Sheen at 2015-11-03 05:05:27:

Totally agree, I've been looking for something like this for a long time. Understanding the philosophy of Unix as pipelines of filters and tools, it grates on me that the commonest of our tools fail to follow it. The result is every sysadmin having to develop tools to parse the human-oriented output into something machine readable, and especially when the information the tool has is so easily accessible. I've decided that the only way to fix this is to have the tools rewritten, and I so wish I were more adept at C so that I could contribute. My goal would be a machine readable output, perhaps TSV, in a fairly normalised (DB-wise) form, and with the ability to add timestamps and hostnames.
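The timestamps-and-hostnames part of what Pat Sheen describes can at least be bolted on from outside. Here is a small sketch; the `tagged` helper is invented, and it simply prefixes every TSV record it reads with hostname and epoch-timestamp columns:

```shell
#!/bin/sh
# Prefix each incoming TSV record with two extra columns: the local
# hostname and the current Unix timestamp. 'tagged' is an invented name.
tagged() {
    awk -v host="$(hostname)" -v now="$(date +%s)" -v OFS='\t' \
        '{ print host, now, $0 }'
}

# Any TSV-producing status command could be piped through it, e.g.:
printf 'tank\tONLINE\n' | tagged
```

With output in this shape, merging records collected from several systems is just concatenation plus sort.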

By Edward Berner at 2015-11-03 11:44:53:

I remember reading a related blog entry from someone at Sun. Hmmm. Ah, here it is:

Creating Shell-Friendly Parsable Output

By Ewen McNeill at 2015-11-03 16:26:32:

You might find jq useful -- it's a "sed for JSON" which allows you to filter out bits you want from JSON on the command line. (You can also play with it online :-) )
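To give a taste of the "sed for JSON" idea: jq reads JSON on stdin and applies a filter expression to it. The JSON document below is a made-up pool-status record; `-r` asks jq for the raw string rather than a quoted JSON value.

```shell
#!/bin/sh
# Pull a single field out of a JSON record with jq. The record here is
# invented; '.health' selects the "health" key at the top level.
echo '{"pool": "tank", "health": "ONLINE", "errors": 0}' | jq -r '.health'
```

Filters compose, so picking one element out of deeply nested JSON stays a one-liner, which is what makes JSON output tolerable from shell scripts in the first place.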

I'm also reminded of the difference between git's "plumbing" and "porcelain" commands. The plumbing ones are designed for machine parsing; the porcelain ones are designed for human consumption. Considering most of the "porcelain" ones can be implemented in terms of the plumbing ones, it's a useful design split. I too wish that more tools would adopt that design approach. And/or make their "web UIs" implemented in terms of presentation of backend "RPC" queries that can be fetched separately via a script.
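Git also illustrates the split within a single command. Somewhat confusingly, the stable machine-readable format of `git status` is selected with a flag named `--porcelain`; the human-oriented default wording has changed across git versions, while the porcelain format is documented as stable. A quick demonstration in a throwaway repository:

```shell
#!/bin/sh
# Create a scratch repository with one untracked file and compare git's
# two output modes for the same state.
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q .
touch newfile

git status              # human-oriented; free to change between versions
git status --porcelain  # stable one-record-per-line format: "?? newfile"
```

Scripts that parse the default `git status` text break on upgrades; scripts that parse `--porcelain` output don't, which is exactly the stability argument from the entry above.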
By dozzie at 2015-11-06 05:06:17:

Better yet, use App::RecordStream. jq looks like a poor cousin compared to that.

I developed a whole workflow around App::RecordStream (plus a makefile to reiterate all the steps when necessary) for sysadmin tasks, especially ones that require synchronizing a system's actual state with its intended state, or ones where I need to dig through data collected from several systems.
