Reconsidering network authentication delays
If you use ssh to connect to a machine and mistype your password, you'll probably have to sit through a couple of seconds of delay before you can try again. In theory this delay is supposed to slow down large-scale password guessing attempts. In practice it's pointless, as anyone who is getting mass-ssh-probed can attest.
Delaying after failed login attempts started in the world of physical terminals, where it works because the supply of physical terminals you can try to log in on is finite (especially if you aren't Reed Richards). However, a new 'terminal' for a network login is only another TCP connection away. Mass-scanning programs can (and do) open multiple connections in parallel, so trying to rate-limit any particular connection in isolation is mostly pointless. If you want to slow down mass ssh attacks, you need to rate-limit at least by IP address. (Since scanning often seems to come from zombie networks, even this may not help much.)
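To make the idea concrete, here is a minimal sketch of per-IP rate limiting (a hypothetical illustration, not OpenSSH's actual mechanism): track recent failures per source address and refuse further attempts once some threshold is reached within a window, so opening extra TCP connections from the same address buys the attacker nothing.

```python
import time
from collections import defaultdict, deque

# Hypothetical policy: allow at most MAX_FAILURES failed attempts
# per source IP within a WINDOW-second sliding window. Both numbers
# are illustrative, not values OpenSSH uses.
MAX_FAILURES = 5
WINDOW = 60.0

_failures = defaultdict(deque)  # ip -> timestamps of recent failed attempts

def allow_attempt(ip, now=None):
    """Return True if this source IP may attempt another login."""
    now = time.monotonic() if now is None else now
    q = _failures[ip]
    # Discard failures that have aged out of the window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) < MAX_FAILURES

def record_failure(ip, now=None):
    """Note a failed login attempt from this source IP."""
    now = time.monotonic() if now is None else now
    _failures[ip].append(now)
```

Note that this rate-limits by address, not by connection, which is the whole point; as the entry says, a scan coming from a zombie network spread over many addresses still slips under any per-IP threshold.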
So what the delay on bad passwords in the OpenSSH server really accomplishes is slowing down real users who make typos, while not particularly getting in the way of attackers. (Since it means processes hang around longer, it may even hurt you under heavy load.)
There's a use for a little bit of delay during password checking; you probably want to make sure that checking a bad password takes at least as long as starting up a new connection to the daemon, so that an attacker gains nothing by pushing repeated guesses down one existing connection instead of opening fresh ones. But more delay than that is probably not getting you anything except periodically annoyed users.
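That floor can be sketched as padding the check so it never completes faster than some minimum (here a hypothetical constant standing in for your measured connection-setup cost; none of these names come from OpenSSH):

```python
import time

# Assumed connection-setup cost; in practice you would measure this
# rather than hard-coding it. The point is only that a failed guess
# should cost at least as much as opening a fresh connection.
MIN_CHECK_SECONDS = 0.1

def check_password_padded(check, *args):
    """Run check(*args), sleeping afterward so the whole call takes
    at least MIN_CHECK_SECONDS no matter how fast the check itself is."""
    start = time.monotonic()
    result = check(*args)
    elapsed = time.monotonic() - start
    if elapsed < MIN_CHECK_SECONDS:
        time.sleep(MIN_CHECK_SECONDS - elapsed)
    return result
```

A side effect of padding to a fixed floor is that fast failures (say, a nonexistent user) take as long as slow ones, which also blunts timing-based probing; but anything beyond that floor is pure annoyance for legitimate users.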