2023-09-15

Ensuring that my URL server and client programs exit after problems

I recently wrote about my new simple system to open URLs on my desktop from remote machines, where a Python client (on the remote machine) listens on a Unix domain socket for URLs that programs (like mail clients) want opened, and reports these URLs to the server on my desktop, which passes them to my browser. The server and client communicate over SSH; the server starts by SSH'ing to the remote machine and running the client. On my desktop, I run the server in a terminal window, because that's the easy approach.

Whenever I have a pair of communicating programs like this, one of my concerns is making sure that each end notices when the other goes away or the communication channel breaks, and cleans itself up. If the SSH connection is broken or the remote client exits for some reason, I don't want the server to hang around looking like it's still alive and functioning; similarly, if the server exits or the SSH connection is broken, I want the remote client to exit immediately, rather than hang around claiming to other parties that it can accept URLs and pass them to my desktop to be opened in a browser.

On the server this is relatively simple. I started with my standard stanza for Python programs that I want to die when there are problems:

signal.signal(signal.SIGINT, signal.SIG_DFL)
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
signal.signal(signal.SIGHUP, signal.SIG_DFL)

If I were being serious I would check what SIGINT was initially set to before resetting it, but this is a casual program, so I'll never run it with SIGINT deliberately masked. Setting SIGHUP isn't necessary today, since Python doesn't change its handling on startup, but I didn't remember that until I checked, and Python's behaviour could always change.
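
One way to do that SIGINT check, as a sketch (my actual program just sets everything unconditionally, as above):

import signal

# Leave SIGINT alone if whoever started us deliberately ignored it,
# following the usual Unix convention; otherwise restore the default.
if signal.getsignal(signal.SIGINT) is not signal.SIG_IGN:
  signal.signal(signal.SIGINT, signal.SIG_DFL)
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
signal.signal(signal.SIGHUP, signal.SIG_DFL)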

Since all the server does is read from the SSH connection to the client, I can detect both client exit and SSH connection problems by looking for end of file, which is signalled by an empty read result:

def process(host: str) -> None:
  pd = remoteprocess(host)
  assert pd.stdout
  while True:
    line = pd.stdout.readline()
    if not line:
      # EOF: the client exited or the ssh connection went away.
      break
    [...]
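
The remoteprocess() function isn't shown here. As a rough sketch of the idea, assuming the client is started as an ordinary command over ssh (the 'urlclient' name is a stand-in, not the real program name):

import subprocess

def remoteprocess(host: str) -> subprocess.Popen:
  # Run the (hypothetical) 'urlclient' command on the remote host over
  # ssh, capturing its standard output so we can read URLs from it.
  # Keeping stdin as a pipe means the remote end normally sees EOF
  # once we exit.
  return subprocess.Popen(["ssh", host, "urlclient"],
                          stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE,
                          text=True)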

As far as I know, our SSH configurations use TCP keepalives, so if the connection between my desktop and the remote machine is broken, both ends will eventually notice.
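
In OpenSSH terms, the relevant knobs are roughly these (a sketch of the general idea, not our actual configuration; ServerAliveInterval is a protocol-level keepalive rather than a TCP-level one, but it serves the same purpose here):

Host *
  TCPKeepAlive yes
  ServerAliveInterval 60
  ServerAliveCountMax 3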

Arranging for the remote client to exit at appropriate points is a bit harder and involves a hack. The client's sign that the server has gone away is that the SSH connection gets closed, and one sign of that is that the client's standard input gets closed. However, the client is normally parked in socket.accept() waiting for new connections over its Unix socket, not trying to read from the SSH connection. Rather than write more complicated Python code to try to listen for both a new socket connection and end of file on standard input (for example using select), I decided to use a second thread and brute force. The second thread tries to read from standard input and forces the entire program to exit if it sees end of file:

def reader() -> None:
  while True:
    try:
      s = sys.stdin.readline()
      if not s:
        # EOF on standard input: the ssh connection (and so the
        # server) is gone, so take the whole program down.
        os._exit(0)
    except EnvironmentError:
      # Read errors on standard input mean the same thing.
      os._exit(0)

[...]
def main() -> None:
  [the same signal setup as above]

  t = threading.Thread(target=reader, daemon=True)
  t.start()

  [rest of code]

In theory the server never sends anything to the client, so any read result at all could have been treated as a reason to exit, but in practice I decided that I would rather have the client exit only on an explicit end of file indication. The use of os._exit() is a bit brute force, but at this point I want all of the client to exit immediately; calling sys.exit() in a non-main thread only raises SystemExit in that thread, which wouldn't take the rest of the program down.

This threading approach is brute force but also quite easy, so I'm glad I opted for it rather than complicating my life a lot with select and similar options. These days maybe the proper easy way to do this sort of thing is asyncio with streams, but I haven't written any asyncio code.
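
For contrast, here's roughly what the select-ish version might look like, using the selectors module (a sketch I haven't run, with a made-up socket path; my actual client doesn't work this way):

import selectors
import socket
import sys

def serve(path: str) -> None:
  # Watch both the listening Unix socket and standard input in one
  # loop, instead of using a second thread.
  lsock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  lsock.bind(path)
  lsock.listen(5)
  sel = selectors.DefaultSelector()
  sel.register(lsock, selectors.EVENT_READ)
  sel.register(sys.stdin, selectors.EVENT_READ)
  while True:
    for key, _ in sel.select():
      if key.fileobj is sys.stdin:
        if not sys.stdin.readline():
          return        # EOF: the server end is gone.
      else:
        conn, _ = lsock.accept()
        # [read a URL from conn and write it to standard output]
        conn.close()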

(I may take this as a challenge and rewrite the client as a proper asyncio based program, just to see how difficult it is.)
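
To give a sense of what that might look like, here's a minimal asyncio sketch (assumptions everywhere: the socket path is invented and this isn't my real client):

import asyncio
import sys

async def watch_stdin() -> None:
  # Read standard input only to notice EOF, which means the ssh
  # connection (and thus the server) has gone away.
  loop = asyncio.get_running_loop()
  reader = asyncio.StreamReader()
  await loop.connect_read_pipe(lambda: asyncio.StreamReaderProtocol(reader),
                               sys.stdin)
  while await reader.readline():
    pass

async def handle_conn(reader: asyncio.StreamReader,
                      writer: asyncio.StreamWriter) -> None:
  # Take one URL from a local connection and pass it to the server by
  # writing it to standard output (the ssh connection).
  url = await reader.readline()
  if url:
    sys.stdout.write(url.decode())
    sys.stdout.flush()
  writer.close()
  await writer.wait_closed()

async def main() -> None:
  server = await asyncio.start_unix_server(handle_conn, path="/tmp/urlsock")
  async with server:
    # Run until standard input hits EOF, then let everything shut down.
    await watch_stdin()

asyncio.run(main())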

All of this appears to work in casual testing. If I Ctrl-C the server in my terminal window, the remote client dutifully exits. If I manually kill the remote client, my local server exits. I haven't simulated having the network connection stop working and having SSH recognize this, but my network connections don't get broken very often (and if my network isn't working, I won't be logged in to work and trying to open URLs on my home desktop).

URLServerInsuringExits written at 22:04:18;

