(Probably) Why Bash imports functions from the environment

November 5, 2014

In the wake of the Shellshock issues, a lot of people started asking why Bash even had a feature to import functions from the environment. The obvious answer is to allow subshell Bashes to inherit functions from parent shells. Now, you can come up with some clever uses for this feature (eg to pass very complex options down from parents to children), but as it happens I have my own views about why this feature probably originally came to exist.

Let us rewind to a time very long ago, like 1989 (when this feature was introduced). In 1989, Unix computers were slow. Very slow. They were slow to read files, especially if you might be reading your files over the network from a congested NFS server, and they were slow to parse and process files once they were loaded. This was the era in which shells were importing more and more commands as builtins, because not having to load and execute programs for things like test could significantly speed up your life. A similar logic drove the use of shell functions instead of shell scripts; shell functions were already resident and didn't require the overhead of starting a new shell and so on and so forth.

So there you are, with your environment all set up in Bash and you want to start an interactive subshell (from inside your editor, as a new screen window, starting a new xterm, or any number of other ways). Bash supports a per-shell startup file in .bashrc, so you could define all your shell functions in it and be done. But if you did this, your new subshell would have to open and read and parse and process your .bashrc. Slowly. In fact every new subshell would have to do this, and on a slow system the idea of cutting out almost all of this overhead is very attractive (doing so really will make your new subshell start faster).

Bash already exports and imports plain environment variables, but those aren't all you might define in your .bashrc; you might also define shell functions. If a subshell could be passed shell functions from the environment, you could bypass that expensive read of .bashrc by pre-setting the entire environment in your initial shell and then just having your subshells inherit it all. On small, congested 1989 era hardware (and even for years afterwards) you could get a nice speed boost here.
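
As a concrete sketch of the mechanism with modern Bash syntax (the function name here is just an illustration):

    psgrep() { ps auxww | grep -- "$1"; }   # defined once, eg in your login shell
    export -f psgrep                        # push the definition into the environment
    bash -c 'psgrep sshd'                   # a freshly exec()'d bash picks it up
                                            # without ever reading .bashrc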

(This speed boost was especially important because Bash was already a fairly big and thus slow shell by 1989 standards.)

By the way, importing shell functions from the environment on startup is such a good idea that it was implemented at least twice: once in Bash and once in Tom Duff's rc shell for Plan 9.

(I don't know for sure which one was first but I suspect it was rc.)


Comments on this page:

By lotheac at 2014-11-06 03:21:35:

You don't mention that subshells, i.e. direct children of the parent shell, don't need to exec after fork. If the child process is a clone of the parent, it already has access to every function and variable defined in the parent, even if they aren't exported. I'm still having trouble understanding why this feature exists - I can't imagine the use case for passing functions to indirect children (started by your editor perhaps) being so common that it would warrant a performance enhancement like this.
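
For concreteness, a minimal sketch of the distinction (function name made up):

    f() { echo from-f; }
    ( f )        # fork-only subshell: sees f without any export
    bash -c 'f'  # new exec()'d bash: "f: command not found"
    export -f f
    bash -c 'f'  # now the child bash imports f from the environment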

By opk at 2014-11-06 03:41:43:

You might find this old post from Rob Pike interesting. This might imply that the v8 shell also exported functions: http://marc.info/?l=9fans&m=111558921626149

I'm fairly sure you're right about performance being the original driver behind exportable functions. Along with overriding commands for test harnesses, performance is the other use for exported functions I've seen cited in the wake of shellshock.

The ksh and zsh solution to the problem is autoloadable functions. So you have FPATH (or fpath in array form) as an analogous variable to PATH and files containing functions are only read when used. Even on moderate hardware, these make a difference to startup times given a good collection of functions. For something like the zsh completion functions, it is massive. Zsh also allows for preparsing into wordcode files. Exported functions don't save on parsing the functions as you somewhat imply in your post.
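
For instance, a minimal zsh sketch of the autoload mechanism (directory and function names are placeholders):

    # the file ~/.zfunc/myfn holds the body of the function
    fpath=(~/.zfunc $fpath)
    autoload -Uz myfn
    myfn    # the file is only read (and parsed) on this first call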

Indeed, the Research Unix v8 shell had this feature too (as well as a shellshock-style bug, which they fixed).

One nice use could be to use find . -exec myshellfn.
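
Roughly, since find can't exec a shell function directly, that needs a small bash -c wrapper plus the exported function:

    export -f myshellfn
    find . -type f -exec bash -c 'myshellfn "$1"' _ {} \;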

Another use could be to transport shell functions over ssh. Not sure everyone thinks that is a good idea, but I'd like it.
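
One way to do that without exported functions is to serialize the definition with declare -f and prepend it to the remote command (host and function names are placeholders, and the remote login shell has to understand the syntax):

    myshellfn() { hostname; uptime; }
    ssh somehost "$(declare -f myshellfn); myshellfn"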

By cks at 2014-11-06 12:20:24:

lotheac: I think I was unclear in my entry. By 'subshell' I didn't mean a directly fork()'d copy of the main process but a new exec()'d shell process that is a descendant of your main shell. In many environments there are a number of these, and they are created quite frequently (especially back in 1989, when people working in graphical Unix environments spent most of their time in shells in xterms instead of in a browser).

Beyond being a performance improvement, this also strikes me as natural and Unixy. If the shell can export things into the environment so that future descendant shells can import them, why restrict that to plain variables? Allow it to export and import everything instead.

(There are potential issues with this, but at least rc sidestepped them by having an explicit flag that said 'do not import functions from the environment'. Rc scripts almost always use this flag in their '#!' line. I suspect that the V8 shell had a similar option.)
