Wandering Thoughts archives

2019-05-17

My new favorite tool for looking at TLS things is certigo

For a long time I've used the OpenSSL command line tools to do things like looking at certificates and chasing certificate chains (although OpenSSL is no longer what you want to use to make self-signed certificates). This works, and is in many ways the canonical and most complete way to do this sort of stuff, but if you've ever used the openssl command and its many sub-options you know that it's kind of a pain in the rear. As a result of this, for some years now I've been using Square's certigo command instead.

Certigo has two main uses. My most common case is to connect to some TLS-using service to see what its active certificate and certificate chain are (and to try to verify them), as well as some TLS connection details:

$ certigo connect www.cs.toronto.edu:https
** TLS Connection **
Version: TLS 1.2
Cipher Suite: ECDHE_RSA key exchange, AES_128_GCM_SHA256 cipher

** CERTIFICATE 1 **
Valid: 2018-04-17 00:00 UTC to 2020-04-16 23:59 UTC
Subject:
[...]

Certigo will attempt to verify the certificate's OCSP status, but some OCSP responders seem to dislike its queries. In particular, I've never seen it succeed with Let's Encrypt certificates; it appears to always report 'ocsp: error from server: unauthorized'.

(Some digging suggests that Certigo is getting this 'unauthorized' response when it queries the OCSP status of the intermediate Let's Encrypt certificate.)
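
If you want to reproduce this sort of OCSP query by hand, to see whether the 'unauthorized' response is specific to Certigo, the OpenSSL command line tools can do it. This is only a sketch, where site.pem and intermediate.pem are hypothetical names for the server certificate and its issuing intermediate:

$ openssl x509 -noout -ocsp_uri -in site.pem
$ openssl ocsp -issuer intermediate.pem -cert site.pem \
    -url "$(openssl x509 -noout -ocsp_uri -in site.pem)" -resp_text

(The first command just prints the OCSP URL embedded in the certificate; the second makes the actual query and dumps the response.)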

Certigo can connect to things that need STARTTLS using a variety of protocols, including SMTP but unfortunately not (yet) IMAP. For example:

$ certigo connect -t smtp smtp.cs.toronto.edu:smtp

(Fortunately IMAP servers usually also listen on imaps, port 993, which is TLS from the start.)
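If you do need to look at a certificate behind IMAP STARTTLS, the OpenSSL tools can still do it; a minimal sketch, with a hypothetical host name, is:

$ openssl s_client -connect imap.example.org:143 -starttls imap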

My other and less frequent use of Certigo is to dump the details of a particular certificate that I have sitting around on disk, with 'certigo dump ...'. If you're dumping a certificate that's in anything except PEM format, you may have to tell Certigo what format it's in.
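If you have, say, a DER format certificate and don't feel like looking up Certigo's format option, one workaround is to convert it to PEM with OpenSSL first and then dump that. Here cert.der and cert.pem are hypothetical names:

$ openssl x509 -inform der -in cert.der -out cert.pem
$ certigo dump cert.pem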

Certigo also has a 'certigo verify' operation that will attempt to verify a certificate chain that you provide it (against a particular host name). I don't find myself using this very much, because it's not necessarily representative of what either browsers or other sorts of clients are going to do (partly because it uses your local OS's root certificate store, which is not necessarily anything like what other programs will use). Generally if I want to see a client-based view of how an HTTPS server's certificate chain looks, I turn to the SSL Server Test from Qualys SSL Labs.
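From memory, a verify invocation looks roughly like the following, where chain.pem is a hypothetical file holding the server certificate plus its intermediates (check 'certigo verify --help' for the exact option name):

$ certigo verify --name www.cs.toronto.edu chain.pem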

All Certigo sub-commands take a '-v' argument to make them report more detailed things. Their normal output is relatively minimal, although not completely so.
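For example, to see the full certificate details instead of the summary version:

$ certigo connect -v www.cs.toronto.edu:https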

Certigo is written in Go and uses Go's standard libraries for TLS, which means that it's limited to the TLS ciphers that Go supports. As a result I tend to not pay too much attention to the initial connection report unless it claims something pretty unusual.

(It also turns out that you can get internal errors in Certigo if you compile it with the very latest development version of Go, which may have added TLS ciphers that Certigo doesn't yet have names for. The moral here is likely that if you compile anything with bleeding edge, not yet released Go versions, you get to keep both pieces if something breaks.)

sysadmin/InspectingTLSWithCertigo written at 22:53:21

One of our costs of using OmniOS was not having 10G networking

OmniOS has generally been pretty good to us over the lifetime of our second generation ZFS fileservers, but as we've migrated various filesystems from our OmniOS fileservers to our new Linux fileservers, it's become clear that one of the costs we paid for using OmniOS was not having 10G networking.

We certainly started out intending to have 10G networking on OmniOS; our hardware was capable of it, with Intel 10G-T chipsets, and OmniOS seemed happy to drive them at decent speeds. But early on we ran into a series of fatal problems with the Intel ixgbe driver which we never saw any fixes for. We moved our OmniOS machines (and our iSCSI backends) back to 1G, and they have stayed there ever since. When we made this move, we did not have detailed system metrics on things like NFS bandwidth usage by clients, and anyway almost all of our filesystems were on HDs, so 1G seemed like it should be fine. And indeed, we mostly didn't see obvious and glaring problems, especially right away.

What setting up a metrics system (even only on our NFS clients) and then later moving some filesystems from OmniOS (at 1G) to Linux (at 10G) made clear was that on some filesystems, we had definitely been hitting the 1G bandwidth limit and doing so had real impacts. The filesystem this was most visible on is the one that holds /var/mail, our central location for people's mailboxes (ie, their IMAP inbox). This was always on SSDs even on OmniOS, and once we started really looking it was clearly bottlenecked at 1G. It was one of the early filesystems we moved to the Linux fileservers, and the improvement was very visible. Our IMAP server, which has 10G itself, now routinely has bursts of over 200 Mbps inbound and sometimes sees brief periods of almost saturated network bandwidth. More importantly, the IMAP server's performance is visibly better; it is less loaded and more responsive, especially at busy times.

(A contributing factor to this is that any number of people have very big inboxes, and periodically our IMAP server winds up having to read through all of such an inbox. This creates a very asymmetric traffic pattern, with huge inbound bandwidth from the /var/mail fileserver to the IMAP server but very little outbound traffic.)

It's less clear how much of a cost we paid for HD-based filesystems, but it seems pretty likely that we paid some cost, especially since our OmniOS fileservers were relatively large (too large, in fact). With lots of filesystems, disks, and pools on each fileserver, it seems likely that there would have been periods where each fileserver could have reached inbound or outbound network bandwidth rates above 1G, if they'd had 10G networking.

(And this excludes backups, where it seems quite likely that 10G would have sped things up somewhat. I don't consider backups as important as regular fileserver NFS traffic because they're less time and latency sensitive.)

At the same time, it's quite possible that this cost was still worth paying in order to use OmniOS back then instead of one of the alternatives. ZFS on Linux was far less mature in 2013 and 2014, and I'm not sure how well FreeBSD would have worked, especially if we insisted on keeping a SAN based design with iSCSI.

(If we had had lots of money, we might have attempted to switch to other 10G networking cards, probably SFP+ ones instead of 10G-T (which would have required switch changes too), or to commission someone to fix up the ixgbe driver, or both. But with no funds for either, it was back to 1G for us and then the whole thing was one part of why we moved away from Illumos.)

solaris/OmniOSNo10GCost written at 01:11:05

