Shell scripts should not use absolute paths for programs

July 13, 2009

There is a certain habit in shell scripts of referring to uncommon programs by their absolute path; for example, if you need to run lsof, people will write '/usr/sbin/lsof ....' in their shell script. We do a certain amount of that here, and then recently one of our shell scripts started reporting:

netwatch: line 15: /usr/sbin/lsof: No such file or directory

You see, you shouldn't do this, because every so often Unix vendors change where they put commands (or, in multi-vendor environments, two vendors disagree about it). If you used hard-coded paths, your script just broke.

(In this case, Ubuntu 6.06 put lsof in /usr/sbin and Ubuntu 8.04 moved it to /usr/bin, probably on the sensible grounds that it's useful for ordinary users too.)

The right way to do this is to add the directory you expect the command to be in to your script's $PATH and then just invoke the command without the absolute path. If the command gets moved, well, hopefully it will be to somewhere else on $PATH (as it was in this case), and your script will happily keep working. Better yet, this way your script can work transparently across different Unix environments without having to be changed or conditionalized; just add all of the possible directories to your script's $PATH and be done with it.
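(As a concrete sketch, the top of such a script might look something like this; the particular directories and the lsof arguments are only illustrative, not what our netwatch script actually does.)

    #!/bin/sh
    # Add every directory that lsof has been known to live in; the order
    # only matters if more than one of them actually has an lsof.
    PATH="$PATH:/usr/bin:/usr/sbin:/sbin:/usr/local/bin"
    export PATH

    # Now we can run lsof without caring where this system put it.
    lsof -n -i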

(This does point out that the Bourne shell could do with a simple way of adding something to your $PATH if it isn't already there.)
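(You can fake this with a small helper function, something like the following sketch; the addpath name is just something I've made up here.)

    # Append a directory to $PATH only if it is not already there.
    addpath() {
        case ":$PATH:" in
            *:"$1":*) ;;                 # already present, nothing to do
            *)        PATH="$PATH:$1" ;;
        esac
    }

    addpath /usr/sbin
    addpath /usr/local/sbin
    export PATH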


Comments on this page:

From 75.152.155.177 at 2009-07-13 02:17:40:

What if two same-name scripts are on the path? The undesired one may get executed.

From 173.70.22.84 at 2009-07-13 06:40:53:

I've really got to side with the explicit path declarations. Yes, in a heterogeneous environment, the paths can change between machines, but if you're going to be running the same shell script in different environments, your scripts can be bullet-proofed, either by making executables into variables, then defining their location at the top of the script based on an OS check (recommended) or by maintaining multiple shell scripts for each OS build (not recommended).
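For example, the "executables as variables plus an OS check" approach might look something like this (the uname values and locations here are only illustrative):

    # Pick per-OS locations for the tools the script needs, once, up top.
    case "`uname -s`" in
        Linux)  LSOF=/usr/bin/lsof ;;
        SunOS)  LSOF=/usr/local/bin/lsof ;;
        *)      LSOF=/usr/sbin/lsof ;;
    esac

    # Everywhere else in the script, run "$LSOF" rather than a bare lsof.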

If you don't use absolute paths, you're at the mercy of your environmental variables being consistent across sessions. It doesn't take very many "ssh foomachine /home/user/script.sh" errors or cron jobs (which don't always include the same PATH variable you know and love) failing before you start to explicitly state your paths.

Heck, I wrote a script this weekend where I got so sick of not having my environmental variables in place that I just included /etc/profile. Oracle was involved, so hopefully I can be forgiven.

Matt Simmons
http://www.standalone-sysadmin.com

From 64.193.21.117 at 2009-07-13 09:39:28:

Rather than abandoning explicit paths, would it not be better to use:

LSOF=/usr/sbin/lsof; if [ -x "$LSOF" ]; then "$LSOF" ...; else echo "script failed: lsof not in the right place"; fi

(sorry if my syntax is off, it's Monday and I've been gone a week)

-Rick Buford

By cks at 2009-07-13 11:11:31:

If you don't use absolute paths, you're at the mercy of your environmental variables being consistent across sessions.

This is only true if your script does not set (or augment) $PATH itself. If your script always ensures that important directories are in its $PATH (either by adding them at the front or end, or just by setting $PATH completely), you are fine.

What if two same-name scripts are on the path? The undesired one may get executed.

If you have a Unix vendor that has both a /usr/bin/PROG and a /usr/sbin/PROG (or the like) and they're different, my opinion is that you have more problems than just broken scripts. Really, you need a new vendor, one that's sane.

(This is where I admit that Fedora and RHEL are sort of this way, which is why you want to have /sbin on the path before /usr/bin.)

As for checking for whether or not lsof is present before trying to run it: that doesn't make your script work, it just changes the error messages that cron (or whatever) will email you.

From 97.65.201.233 at 2009-07-13 12:27:23:

If you upgrade your distro or are moving a script to a new box, it's fairly safe/sane to assume that scripts are going to break, whether because a file has moved to a new path location or otherwise. For pure sanity purposes I write with hard-coded paths. That way I know I'm using exactly the program and version I intended to use rather than whatever might be hidden away somewhere in a PATH statement. It's not been entirely unusual for me to log in to a box and find that it happens to have the same program installed twice, different versions, one in /bin and one in /usr/local/bin for example.

I don't see a solid argument against hard-coded paths here, to be honest, so I'll stick with hard-coding paths and just deal with the breakages when they occur.

By Dan.Astoorian at 2009-07-13 14:34:11:

If your script always ensures that important directories are in its $PATH (either by adding them at the front or end, or just by setting $PATH completely), you are fine.

There are cases where this is problematic; e.g., a shell script which is a wrapper for a user application, and the user expects the application and its children to have the path that was there when it was fired up.

If you have a Unix vendor that has both a /usr/bin/PROG and a /usr/sbin/PROG (or the like) and they're different, my opinion is that you have more problems than just broken scripts. Really, you need a new vendor, one that's sane.

What about a vendor that has both a /usr/bin/ps and a /usr/ucb/ps?

Personally, I've had more scripts break because they got moved to a machine that had an incompatible version of a program (sometimes in /usr/local/bin/, perhaps because the native OS doesn't provide that program) than because the machine had a compatible version at a different path.

I've found that the case where the path is missing is usually a lot easier to troubleshoot than the case where the programmer (i.e., I) assumed that awk is always awk. At least when I change the path to one that does exist and the script still doesn't work, I'm considerably less surprised.

I've also seen scripts work for me but break when my users ran them, because the user had $HOME/bin at the beginning of $PATH and thus got a different version of some program that the script called; so if you're going to rely on things being in $PATH, you'd better keep a tight leash on what it contains.

--Dan

From 66.31.100.198 at 2009-07-13 15:33:33:

This is the stuff that scares me as a SysAdmin: the idea that someone's writing scripts that don't call specific binaries but instead rely on paths and what might be described as "chance".

I got burned once too many times and resort to hard-wiring all binaries at the top of the script. I use simple variables, as in the previous comments' "$LSOF", and reuse them throughout the script. This supports the "define once, use many" approach to programming and keeps me from screwing things up unnecessarily.

A friend suggested to me that if I did my job right, I should only have to define PATH within my script and that would assure all the paths are correct. I duct-taped his mouse for this and glued down the phone. But then I considered it as an alternative approach; I've just not hammered on the idea enough to find out what the soft spots might be.

Until then, I will continue to use my "old school" approach of defining everything.

From 82.95.233.55 at 2009-07-14 15:34:39:

If you're writing scripts (shell, perl, whatever) you should also be checking the return values of all the actions in the script and trapping errors as needed.

One must never trust anything. Hard disks fill, cd'ing to a different directory fails, files 'disappear'. I prefer writing perl scripts because it makes it easy to check whether something has worked or not, like chdir("/path/to/dir") or die "cannot chdir to /path/to/dir: $!", where $! holds the error that happened.
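(The same sort of check is easy enough in plain shell too; a generic illustration:)

    cd /path/to/dir || { echo "cannot cd to /path/to/dir" >&2; exit 1; }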

If things really are important, then you need to check (with Nagios, for instance) that the script you wrote executed correctly. If it doesn't, you get an alert.

Writing scripts basically means being paranoid and assuming that anything will go wrong :-)

From 212.24.143.70 at 2009-07-15 03:24:25:

Not using absolute paths could be a security risk too, because if someone misses setting up sudo correctly (its PATH handling, for example), a user who is allowed to run the script with sudo could inject his own commands.

I think that every script should have a configuration part where commands are bound to variables, and the installation script for it (make, a packager, etc.) should be responsible for setting those variables to the correct commands. This could solve Linux and Solaris command differences too.
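(Something along these lines, where a hypothetical install step fills in the configuration block for the target platform:)

    # --- configuration: set by the install script for this platform ---
    LSOF=/usr/bin/lsof
    PS=/bin/ps
    # ------------------------------------------------------------------

    # The rest of the script only ever uses "$LSOF" and "$PS".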

-jhr.

By cks at 2009-07-15 16:54:25:

Yes, if you're worried about operating in a hostile environment you need to control the paths of what you run (among a lot of other things). But you can do that just as well by setting $PATH in your script as by using absolute paths for everything.
