Wandering Thoughts

2021-05-08

It's pleasantly easy to install PyPy yourself (from their binaries)

The Python language server, pyls, is the most substantial Python program I run on our servers, making it an obvious candidate to try running under PyPy for a speedup. The last time around, I optimistically assumed that I would use the Ubuntu packaged version of PyPy. Unfortunately, all of our login servers are still running Ubuntu 18.04, and 18.04 has no packaged version of PyPy 3. Since Python 3 is what I use for much of both my personal and our work code, and since you have to run pyls under the same version of Python as the code you're working on, this is a bit of a problem. So I decided to try out the PyPy procedures for installing a pre-built PyPy, with the latest release binaries.

This turned out to be just as easy and as pleasant (on Linux) as the documentation presented it. The tarball could be unpacked to put its directory tree anywhere (I put it in my $HOME/lib), and it ran fine on Ubuntu 18.04 and 20.04. I needed pip to install pyls, so I followed their directions to run './pypy-xxx/bin/pypy -m ensurepip', which downloaded everything needed into PyPy's tree and created a ./pypy-xxx/bin/pip program that I could use for everything else. As with virtualenvs, once I installed pyls through pip I could run $HOME/lib/pypy-xxx/bin/pyls and it all just worked.
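In concrete command terms, the whole procedure looks something like this (the PyPy version here is illustrative, so substitute the current release, and I'm going from memory on 'python-language-server' being the PyPI package name for pyls):

    cd $HOME/lib
    tar xf /tmp/pypy3.7-v7.3.4-linux64.tar.bz2
    ./pypy3.7-v7.3.4-linux64/bin/pypy -m ensurepip
    ./pypy3.7-v7.3.4-linux64/bin/pip install python-language-server

After that, $HOME/lib/pypy3.7-v7.3.4-linux64/bin/pyls is what you point your editor at.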

In theory I think I could go on to use my $HOME/lib versions of PyPy3 and PyPy to create virtualenvs and then install things into those virtualenvs. In practice this is an extra step that I don't need for my purposes. Installing pyls and anything else I want to run under PyPy with './pypy-xxx/bin/pip install ...' already neatly isolates it in a directory hierarchy, just like a virtualenv does.

(Installing PyPy3 was so easy and straightforward that I decided I might as well also install the standard pre-built PyPy2, just so I had a known and up-to-date quantity instead of whatever Ubuntu had in their PyPy packages. Plus, even if I used the system version, I would have had to make a virtualenv for it. It took almost no extra effort to go all the way to using the pre-built binaries.)

All of this is really how installing pre-built software should work (and certainly how it's documented for PyPy). But I date from an era where it was usually much more difficult and pre-built software was often rather picky about where you put it or wanted to spray bits of itself all over your $HOME (or elsewhere). Right now it's still a bit of a pleasant shock when a pre-built program actually works this easily, whether it's PyPy or Rust.

python/PyPyEasyHandInstall written at 00:36:08

2021-05-07

Understanding OpenSSH's various options around keys and key algorithms

OpenSSH has quite a lot of things involving keys, key types, and key algorithms, with options to control them and ways to report on them and so on. It can be confusing when you read manpages for ssh, ssh_config, sshd, and so on (and it has regularly confused me). It turns out that OpenSSH has a great explanation in their OpenSSH Legacy Options documentation, so great that rather than paraphrase it I am just going to quote it (with some additional commentary):

When an SSH client connects to a server, each side offers lists of connection parameters to the other. These are, with the corresponding ssh_config keyword:

  • KexAlgorithms: the key exchange methods that are used to generate per-connection [symmetric encryption] keys
  • HostKeyAlgorithms: the public key algorithms accepted for an SSH server to authenticate itself to an SSH client
  • Ciphers: the ciphers to encrypt the connection
  • MACs: the message authentication codes used to detect traffic modification

When an SSH connection is established, the client and server use the SSH transport protocol to create the initial set of symmetric encryption keys that will encrypt the entire conversation going forward, and in the process the client verifies the server's host key (only one key out of the keys the server offers). If a server supports multiple host key types, in theory the client controls which one will be used based on the order of its HostKeyAlgorithms.

The KexAlgorithms 'key exchange methods' have nothing to do with the public key algorithms used for host key verification, although they do use related cryptographic techniques. How they work specifically is covered in various RFCs and other documentation linked from OpenSSH's Specifications page. This can be initially confusing since some of the KEX algorithm names look a bit similar to key type names (e.g. 'curve25519-sha256' and 'ecdh-sha2-nistp384').

Once the SSH transport protocol negotiation has succeeded, the SSH client will go on to request user authentication, where some additional options come into play (again quoting the OpenSSH documentation):

  • PubkeyAcceptedKeyTypes (ssh/sshd): the public key algorithms that will be attempted by the client, and accepted by the server for public-key authentication (e.g. via .ssh/authorized_keys)
  • HostbasedKeyTypes (ssh) and HostbasedAcceptedKeyTypes (sshd): the key types that will be attempted by the client, and accepted by the server for host-based authentication (e.g. via .rhosts or .shosts)

(This public key authentication is strongly protected against attacker-in-the-middle attacks.)

A modern ssh command supports 'ssh -Q <thing>' to query various cryptography-related things, and you can use ssh_config and sshd_config option names as the <thing> as well. As far as I know, this doesn't look at your actual SSH configuration files; instead, it reports what your OpenSSH could support if you enabled everything. By extension, it doesn't necessarily list public key algorithms in your preference order. As far as I know, there's also no way to get OpenSSH to tell you the effective state of a client or server configuration; you get to read your configuration files and anything else necessary on your systems.
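For example, querying the key signature algorithms looks like this (the output is abbreviated here, and it will vary with your OpenSSH version and build):

    $ ssh -Q key-sig
    ssh-ed25519
    sk-ssh-ed25519@openssh.com
    ecdsa-sha2-nistp256
    ecdsa-sha2-nistp384
    ecdsa-sha2-nistp521
    ssh-rsa
    rsa-sha2-256
    rsa-sha2-512
    [...]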

If I'm right, this means that 'ssh -Q HostKeyAlgorithms', 'ssh -Q HostbasedKeyTypes', and so on will always give you the same list, even if you have the options configured differently. I believe they're all aliases for 'ssh -Q key-sig'. I also believe that not all of the 'ssh -Q' queries have ssh and sshd configuration option aliases; one such is 'ssh -Q key' (and its kin like 'key-plain'), which gives you the list of key types instead of key signature algorithms. Note that there really are three different types of ECDSA keys, contrary to what I thought yesterday (see the comments on that entry).

PS: You can set HostKeyAlgorithms in sshd_config on the server as well as in the client, and I guess you could use this to turn off offering "ssh-rsa" to clients right now (and at some point in the future you may need to use it to turn "ssh-rsa" back on, when OpenSSH deprecates this key signature scheme). Generally you control which host key algorithms you offer to clients through which keys you generate for sshd, since most key types have only one key signature algorithm (ECDSA keys included, as mentioned).
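As a sketch of what the server side of this looks like, something like the following in sshd_config would stop offering "ssh-rsa" (the '-' prefix, which removes algorithms from the default set, needs a reasonably recent OpenSSH; check your sshd_config(5)):

    # stop offering SHA-1 based ssh-rsa signatures to clients
    HostKeyAlgorithms -ssh-rsa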

sysadmin/OpenSSHUnderstandingKeyOptions written at 00:19:22

2021-05-06

The different types of modern (2021) SSH keys (and some opinions)

Back in 2014 I wrote about what I knew about the then-current different types of SSH keys. Things have changed around a bit since then, so it's time for an update.

Modern versions of SSH support three different types of public key cryptography for common use: RSA, ECDSA, and Ed25519. Both ECDSA and Ed25519 use elliptic curve cryptography, while RSA is based on integer factorization. SSH once supported DSA public key cryptography, but it has been deprecated since the 7.0 release of OpenSSH in 2015 (search for 'ssh-dss'). OpenSSH has supported FIDO/U2F hardware authenticators with ECDSA and Ed25519 keys since OpenSSH 8.2, and it supports SSH key certificates for all key types.
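To make the key types concrete, generating one of each with ssh-keygen looks like this (the -t values are the real ones; the bit sizes and file names are just my illustrative choices, not recommendations):

    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
    ssh-keygen -t ecdsa -b 384 -f ~/.ssh/id_ecdsa
    ssh-keygen -t rsa -b 3072 -f ~/.ssh/id_rsa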

To actually use SSH host and user keys, OpenSSH must also pick a signature scheme. Ed25519 keys have only a single signature scheme, but ECDSA and RSA keys have several different ones. OpenSSH is on the path to deprecating the "ssh-rsa" signature scheme, but this doesn't deprecate RSA keys in general. OpenSSH lists RSA keys in your authorized_keys and known_hosts files in a scheme-independent way, but lists ECDSA keys in a scheme-dependent one. There is probably a cryptographic reason for this.

OpenSSH has supported ECDSA keys since OpenSSH 5.7, released at the start of 2011, and Ed25519 keys since OpenSSH 6.5, released at the start of 2014. The stronger RSA key algorithms that OpenSSH now wants you to use have been supported since OpenSSH 7.2, released in February of 2016; however, they were only officially standardized in RFC 8332, released in March 2018. By now, support for these key types and signature schemes has propagated to every operating system release that uses OpenSSH and isn't a complete and utter zombie. However, support for them has definitely not propagated into all sorts of SSH servers and clients that don't use OpenSSH, and what support has propagated may be partial (some environments support using but not generating Ed25519 keys, for example, and may not yet support the additional RSA signature schemes).

(Unlike in the past, the Gnome Keyring SSH agent implementation apparently now supports Ed25519 keys, seemingly since some time in 2018.)

I think that OpenSSH can have different preferred key signature algorithms for user keys and for host keys, and the preference order can differ between OpenSSH versions and between different people's builds of them. If I'm reading the official OpenSSH ssh_config manpage correctly, the current upstream OpenSSH preference is Ed25519, ECDSA, and then RSA. The current preferences on Linux distributions can be opaque, but I think that they generally prefer ECDSA over Ed25519 for host keys, yet Ed25519 over ECDSA for user keys. Don't ask me.

Here today in 2021, I think the consensus is definitely that Ed25519 is the best OpenSSH key type, probably with ECDSA as your second choice. See, for example, the Arch Wiki on choosing the key type, which has a long discussion with links for further reading. I don't know if non-OpenSSH support for ECDSA keys is much different than non-OpenSSH support for Ed25519 keys, although ECDSA in OpenSSH was standardized much earlier (in RFC 5656, from 2009; see OpenSSH Specifications).

As a pragmatic matter, I think that most devices today that don't support Ed25519 keys will probably be using SSH implementations that only support very basic things, like RSA keys with the old "ssh-rsa" key signature algorithm. They may also only support old key exchange algorithms and ciphers. Fortunately OpenSSH has not yet actually removed the code to support things like 'diffie-hellman-group1-sha1', and you can re-enable them if necessary following the information in OpenSSH Legacy Options.
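Following that documentation, re-enabling the legacy algorithms for a single old device might look something like this in your ~/.ssh/config ('old-switch' is a made-up host name here, and the '+' prefix adds to the defaults rather than replacing them):

    Host old-switch
        KexAlgorithms +diffie-hellman-group1-sha1
        HostKeyAlgorithms +ssh-rsa
        PubkeyAcceptedKeyTypes +ssh-rsa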

sysadmin/SSHKeyTypesII written at 00:07:58

2021-05-05

It's possible for Firefox to forget about:config preferences you've set

Firefox has a user preferences system, exposed through its 'Settings' or 'Preferences' system (also known as about:preferences) and also through the more low-level configuration editor (aka about:config). As is mentioned there and covered in somewhat more detail in what information is in your profile, these configuration settings (and also your preferences settings) are stored in your profile's prefs.js file.

You might think that once you manually set something in about:config, your setting will be in prefs.js for all time until you go back into about:config and change or reset it. However, there's a way that Firefox can quietly drop your setting. If you've set something in about:config and your setting later becomes Firefox's default, Firefox will normally omit your manual setting from your prefs.js at some point. For example, if you manually enable HTTP/3 by setting network.http.http3.enabled to true and then Firefox later makes enabling HTTP/3 the default (as it plans to), your prefs.js will wind up with no setting for it.

(You can guess that this is going to happen because Firefox will un-bold an about:config value that you manually change (back) to its initial default value. There's no UI in about:config for a preference that you've manually set to the same value as the default.)

For the most part this is what you want; it cleans up old settings that are no longer necessary, so your prefs.js doesn't grow without bound. However, it can be confusing in one situation: when Firefox later changes its mind about the default. Going back to the HTTP/3 situation, if Mozilla decides that turning on HTTP/3 was actually a mistake and defaults it to off again, your Firefox will wind up with HTTP/3 off even though you explicitly enabled it. You may well remember that you explicitly turned HTTP/3 on, so why is it off now?

HTTP/3 is a big-ticket item, so you might have heard about Mozilla going back and forth on it, but Mozilla also changes the defaults for lots of other preferences over time. For instance, I've tweaked my media autoplay preferences repeatedly over time, and I suspect some Firefox updates have made my values the defaults (removing my prefs.js settings) and then possibly changed the defaults again later.

If you have any settings that are really important to always be there, I think you may be able to manually create a user.js with them. Otherwise, this is mostly something to remember if you ever wind up wondering how something you remember explicitly setting has changed.
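The user.js format is the same as what you'll see in prefs.js, and Firefox re-applies it at every startup. Pinning the HTTP/3 setting would be a one-line user.js in your profile directory:

    user_pref("network.http.http3.enabled", true);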

PS: To be clear, I think that Firefox is making a sensible decision (and probably the right decision) in not having a special state for 'manually set but to the same value as the default'. That would need a more complicated UI and more code for something that we almost never care about.

web/FirefoxVanishingPrefs written at 00:07:30

2021-05-03

Our future upgrade wave of Ubuntu 18.04 machines

We have long had a mix of Ubuntu versions. The short explanation is that most machines users log in to (our login servers and compute servers) get upgraded every LTS version, while other machines that are less accessible only get upgraded every other LTS (the longer version is How we handle Ubuntu LTS versions). Under normal circumstances, this would currently give us a relatively even mix of 20.04 machines and 18.04 machines. These aren't normal times.

The result of these abnormal times is that we have a lot more 18.04 machines and a lot fewer 20.04 machines than we normally would. None of our user login machines have been upgraded, our mail servers had to move to 18.04 instead of 20.04, and until late last year we lacked much experience with 20.04, so the path of least resistance was using 18.04 for new or upgraded machines because it was a known quantity. This is fine by itself, as Ubuntu 18.04 LTS is a perfectly good Ubuntu release.

Our future issue is that having a lot of 18.04 machines (some of them very critical ones) means that when Ubuntu 22.04 comes out next April, we'll have a lot of machines to upgrade in less than a year (since 18.04 will stop being supported at the end of April 2023). This is probably more distinct machines than we've ever had to upgrade in one cycle, even if we assume that the machines users log in to are mostly simple to rebuild. Some of the machines, such as our fileservers, will take extensive testing all on their own.

If we get enough time in the office this summer, we may try to upgrade our user login machines to 20.04, even though it's a year behind the usual schedule. We have a test user login machine built, although it hasn't seen much use, and that would let us slip their upgrade to 22.04 until quite late, perhaps even to the summer of 2023.

Beyond that, we could upgrade some machines early, moving machines we normally wouldn't touch from 18.04 to 20.04 just so that we have fewer to move to 22.04 later. This would also give us more of a spread between LTS versions for the long term; otherwise, if we just upgrade all of our 18.04 machines to 22.04, we'll have much the same problem in 2026, with a lot of 22.04 machines to suddenly move to 26.04.

I have no conclusions, but at least this is now an issue I'm going to be thinking about.

(I'm aware that in some places, planning for 2026 would be a laughable idea. It may be optimistic even for us, but I've had long term planning pay off before and in general we exist in an environment with long term stability, although there are somewhat more clouds on the horizon than usual.)

linux/Ubuntu1804FutureUpgradeWave written at 23:47:12

Understanding OpenSSH's future deprecation of the 'ssh-rsa' signature scheme

OpenSSH 8.6 was recently released, and its release notes have a 'future deprecation notice', as has every release since OpenSSH 8.2:

Future deprecation notice

It is now possible to perform chosen-prefix attacks against the SHA-1 algorithm for less than USD$50K.

In the SSH protocol, the "ssh-rsa" signature scheme uses the SHA-1 hash algorithm in conjunction with the RSA public key algorithm. OpenSSH will disable this signature scheme by default in the near future.

More or less a year ago I flailed around trying to work out what this meant. Now I think that I understand more about what is going on, enough to talk about what is really affected and why. It helps that since the OpenSSH 8.5 release notes, the notice has had the current, more explicit wording quoted above.

When we use public key cryptography to sign or encrypt something, we generally don't directly sign or encrypt the object itself. As covered in Soatok's Please Stop Encrypting with RSA Directly, for encryption we normally use public key encryption on a symmetric key that the message itself is encrypted with. For signing, we normally hash the message and then sign the hash (see, for instance, where cryptographic hashes come into TLS certificates). OpenSSH is no exception to this; it has both key types and key signature schemes (or algorithms), the latter of which specify the hash type to be used.

(OpenSSH's underlying key types are documented best in ssh-keygen's manpage, under the -t option. The -sk key types are FIDO/U2F keys, as mentioned in the OpenSSH 8.2 release notes. The supported key signature algorithms can be seen with 'ssh -Q key-sig'.)

What OpenSSH is working to deprecate is the (sole) key signature algorithm that hashes messages to be signed with SHA-1, on the grounds that SHA-1 hashing is looking increasingly weak. For historical reasons, this key signature algorithm has the same name ('ssh-rsa') as a key type, which creates exciting grounds for misunderstandings, such as I had last year. Even after this deprecation, OpenSSH RSA keys will be usable as user and host keys, because OpenSSH has provided other key signature algorithms using RSA keys and stronger hashes (specifically SHA2-256 and SHA2-512, which are also known as just 'SHA-256' and 'SHA-512', see Wikipedia on SHA-2).

Most relatively modern systems support RSA-based key signature schemes other than just ssh-rsa. Older systems may not, especially if they're small or embedded systems using more minimal SSH implementations. Even if things like routers from big companies support key signature schemes beyond ssh-rsa, you may have to update their firmware, which is something that not everyone does and which may require support contracts and the like. Unfortunately, anything you want to connect to has to have a key signature scheme that you support, because otherwise you can't authenticate their host key.
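You can probe this from the client side by restricting yourself to a single scheme; with a made-up host and address, a failure looks something like this (the exact wording varies between OpenSSH versions):

    $ ssh -oHostKeyAlgorithms=rsa-sha2-512 old-device
    Unable to negotiate with 192.0.2.10 port 22: no matching host key
    type found. Their offer: ssh-rsa,ssh-dss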

(OpenSSH Ed25519 keys also have a single key signature scheme associated with them, if you ignore SSH certificates; the key type and the signature scheme are both called 'ssh-ed25519'. Hopefully we will never run into a similar hash weakness issue with them. Since I just looked it up in RFC 8709 and RFC 8032, Ed25519 signatures use SHA2-512.)

tech/OpenSSHAndSHA1DeprecationII written at 00:42:20

2021-05-02

Realizing one general way to construct symmetric ciphers

One of the areas of cryptography that's always seemed magical to me is symmetric ciphers. I believed that they worked, but it felt amazing that people were able to construct functions that produced random-looking output but that could be inverted if and only if you had the key (and perhaps some other information, like a nonce or IV). I recently read Soatok's Understanding Extended-Nonce Constructions, which set off a sudden understanding of a general, straightforward way to construct symmetric ciphers (although not all ciphers are built this way).

A provably secure general encryption technique is the one-time pad. One way to do one-time pad encryption on computers is to have your OTP be a big collection of random bytes (known by both sides) and then use the fact that 'A xor B xor A' is just B. The sender XORs their message with the next section of their OTP, and the receiver just XORs it again with the same section, recovering the original message (this is a form of XOR cipher). However, one-time pads are too big for practical use. What we would like is for each side to generate the one-time pad from a smaller, easier to handle seed.
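The round trip is trivial to express in Python (a bytes-level sketch, with os.urandom standing in for the next section of the shared pad):

    import os
    msg = b"attack at dawn"
    pad = os.urandom(len(msg))                    # the next section of the shared OTP
    ct = bytes(m ^ p for m, p in zip(msg, pad))   # sender: msg xor pad
    pt = bytes(c ^ p for c, p in zip(ct, pad))    # receiver: xor with the same pad
    assert pt == msg                              # 'A xor B xor A' is just B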

What we need is a keystream, or more exactly a way to generate a keystream from an encryption key and probably some other values like a nonce (a one-time pad is a keystream that requires no generation). The keystream we generate needs to have a number of security properties like randomness and unpredictability, but the important thing is that our keystream generation function doesn't have to be invertible; in fact, it shouldn't be invertible. There are a lot of ways to do this, especially since it's sort of what cryptographic hashes do, which makes it easy for me to see how you could plausibly create keystream generation functions.
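Purely as an illustration of the shape of such a function, here's a toy keystream generator that hashes the key, a nonce, and a block counter (real stream ciphers use carefully designed generators, not raw SHA-256, so don't use this for anything real):

    import hashlib

    def keystream(key, nonce, length):
        out = bytearray()
        counter = 0
        while len(out) < length:
            block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
            out.extend(block)
            counter += 1
        return bytes(out[:length])

Encrypting is then just the XOR from before, applied against keystream(key, nonce, len(msg)); both sides can regenerate the identical keystream because they share the key and the nonce.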

What I've realized and described is a stream cipher, as opposed to a block cipher. While I'd heard the two terms before, I hadn't understood the cryptographic nature of the distinction, vaguely thinking it was only about whether you had to feed in a fixed-size input block or could use more flexible variable-sized inputs. Now I've learned better in the process of writing this entry, and picked up something more about cryptography along the way.

(I could probably learn and understand more about how it's possible to construct block ciphers if I read more about them, but there's only so far I'm willing to go into cryptography.)

tech/SymmetricCipherViaKeystreamXor written at 00:56:40

2021-04-30

Discovering outside people attempting to do dynamic DNS updates to us

We run our own primary DNS servers for our zones, both forward zones for domains we support and reverse PTR zones for our public subnets. For a number of reasons, including that our network layout means we need split-horizon DNS, we have what I would call a semi-stealth DNS master server; it's not listed in our NS records, but it is listed as the master name server in our SOA records, including (of course) for the reverse PTR zones. External people are not supposed to query this machine, but sometimes they do anyway, and sometimes this is the symptom of a configuration problem, like the machine appearing in a NS record. So today, I decided to take a look on our perimeter firewall to see who was trying what these days. Much like other times I've done similar things, there were interesting things underneath this rock.

The most surprising thing to me was that most of the traffic our firewall was rejecting wasn't DNS queries, it was DNS UPDATE attempts. When I grabbed packet dumps and decoded them in Wireshark, they turned out to be attempts by various remote IPs to add reverse lookup information for various host names. More specifically, this looked like Windows dynamic DNS updates. Some of the updates were for machines in .LOCAL, but others were for host names in various domains, most of which seem to be real domains. Some of the host names were generic, like 'laptop-<jumble>', but others had a clear organized naming scheme or were even idiosyncratic, like 'WheelOfFortune'.

(There's one domain name that doesn't exist, but the name and the geolocation of the IP address it comes from strongly suggests that it's probably an internal name or a mangling of it that escaped into the outside world. The domain is 'ABGTOWNSHIP.com', and the IP address is reported as being in Abington PA.)

After looking at this for a while, I've come up with a moderately horrifying theory for what is causing this: I think that some people out there in the world have set up their internal networks to use part of the University of Toronto's class B 128.100.0.0/16 IP address space (it wouldn't be the first time). When Windows machines on these internal networks decide to do a dynamic DNS update to register themselves, they look up the SOA for the PTR of their subnet through public DNS, determine that it's our semi-stealth master (which is listed as the SOA MNAME for our PTR zones), and send it to us.
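You can reproduce the lookup such a Windows machine makes with dig; the first field of the SOA answer is the MNAME, which is where the dynamic update gets sent (the specific subnet here is an arbitrary example):

    $ dig +noall +answer SOA 3.100.128.in-addr.arpa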

Whether or not update requests reach us at all likely depends on which parts of 128.100.0.0/16 are being used internally. If they're using the subnet that our semi-stealth master sits on, the update requests would probably not make it onto the public Internet. This matches the pattern of PTR zones that I saw in my limited monitoring, in that they never tried to update that subnet's PTR zone.

(In one case I was able to get results that supported this. A machine on the same subnet as they were trying to do a PTR update for could not reach the HTTP port on the IP trying the update, while machines on other subnets could.)

Our parts of 128.100.0.0/16 are very low in the address space (in fact mostly right at the start of it), for historical reasons that you can probably guess at. This probably makes us unusually likely to be affected by this, since people usually pick subnets from the bottom of broad IP address ranges (witness the eternal popularity of 192.168.1.0/24).

(This is likely the same sort of thing as we saw before with CBL mis-listings. But the last I'd seen of that was in 2014, so I'd hoped it was all over by now. I should have known better, but perhaps at least there are fewer people doing this now.)

sysadmin/DNSDynamicUpdatesToUs written at 22:36:35

There's plenty of our work that's not being done from home

By now, we have been working from home for more than a year due to ongoing world and local events. I've wound up with complicated and tangled feelings about working from home in general, but some things about the whole experience are very clear. Back at the start of July of 2020, I wrote about how the work we weren't able to do from home was accumulating. Some of that work has been done in the ten months since then, but a lot of it hasn't been, and in many ways we remain subtly impaired in our work.

We've more or less sorted out ordering physical hardware since last July, with stops and starts. But we're still significantly constrained on what we can order because it still can't be delivered to work. Only some things can be sensibly sent to people's homes and then only in mild quantities. Fortunately we're not in a situation where we have to buy hardware (or perhaps that's unfortunate).

Our use of Ubuntu 20.04 remains low, although it would be higher if not for an issue with the 20.04 version of Exim. Since we upgrade by reinstalling, often on new hardware, Ubuntu version upgrades need someone in the office and we simply haven't had that time for anything except relatively urgent machines. It would be nice to upgrade our user login servers and compute servers to 20.04 so they have reasonably current stuff (it's not the latest stuff any more), but it's far more important to do things like replace our 16.04 servers before 16.04 support ends (a deadline that we only just made).

Every one of us has a backlog of projects that need us to build physical machines. They aren't critical projects but they're ones we have to do sometime, and they mostly aren't getting done. I've gone into the office several times in the past couple of months, primarily to upgrade 16.04 machines but always with the intention of getting some work done on additional servers, and I've never been able to do it. Things always come up once I'm there.

That's another thing I didn't realize ten months ago. When we were in the office all the time, there was a whole collection of small background things that we took care of casually and in passing, the kind of thing that takes five or ten minutes at most. Naturally they aren't being done in passing any more, so a certain amount of my sporadic in-office time gets spent on them instead. We always seem to get less done in the office than we planned, and it takes longer than we expected.

(One aspect of that is coordinating any work that needs more than a single person to do something, especially in person. It used to be simple for us to get something racked or the network configuration of a server shuffled around; now, not so much.)

On top of all of the things that are visibly not getting done, some things are getting done slower. We can get the work done while working from home, but it's somewhat slower and more awkward. To the extent that I even consciously notice it happening to me, it can feel like being nibbled by moths.

(I'm quite fortunate to be able to work on a lot of things in a relatively low-friction way, because I can still use my office workstation's VMs. I do a lot of things that require my test machines to be on our networks with good bandwidth, which would very much not work for a VM on my home machine.)

sysadmin/WorkNotDoneFromHomeII written at 00:19:35

2021-04-29

The shift from "two-factor" to "multi-factor" authentication

While I wasn't paying attention, something interesting happened to authentication terminology; what was once called "two-factor authentication" has now become "multi-factor authentication", or at least people now mostly talk about MFA instead of 2FA. Some sources will say that there's a difference between MFA and 2FA, while others consider MFA to just be the new name for 2FA (see, for example, how the PCI standard shifted to using "MFA").

Most of my exposure to 2FA and MFA comes from people talking about specific systems to do this. My impression and memory is that in the old days, the 2FA systems that people talked about were always based around specific tokens or items; you might have a 2FA system with Yubikeys or another one using RSA security tokens (or SMS text messages with authentication codes). The modern MFA systems I've been exposed to promise to authenticate users with a second factor somehow, but they have multiple options for the second factor; the university's chosen MFA system supports either an on-phone application or an OTP hardware token.

If people's model of '2FA' systems was that they were tied to a specific second factor or required a hardware second factor, then I can imagine that the rise of the smartphone made this a less and less attractive thing over time. My memory is that the first wave of using cellphones and smartphones for additional authentication was SMS text messages or calls, which are now considered a bad idea because they're too easy to intercept. The second wave used various one-time password apps on your smartphone (which were sometimes interchangeable, for instance if they implemented RFC 6238 TOTP). When I got my smartphone I installed several such OTP apps, but never actually used any of them. At the time, these seemed to be called two-factor authentication apps.

(I have a memory of reading arguments over whether an OTP app on your smartphone really counted as a 'second factor' for various reasons that I've now mostly forgotten.)
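Those interchangeable OTP apps were mostly implementing something quite simple. Here's a minimal RFC 6238 TOTP generator in Python as a sketch (SHA-1, six digits, 30-second steps; the secret is a common documentation example, not anyone's real one):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, now=None):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((now if now is not None else time.time()) // 30)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                      # RFC 4226 dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return "%06d" % (code % 1000000)

    print(totp("JBSWY3DPEHPK3PXP"))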

The modern smartphone app approach seems to be a custom application that interacts with a vendor-specific backend network service in private ways. Microsoft has one, Duo has one, and so on, and presumably the end result is more secure than TOTP authenticators for various reasons. It certainly promotes more vendor lock-in (and more apps on your phone). But as demonstrated by the university, organizations don't (or can't) stop with only smartphone-based authentication; they want the additional possibility of authentication through hardware tokens. So we get MFA systems that support multiple additional factors for authentication, depending on what the solution supports, what the person is enrolled for, and what they choose for any specific authentication attempt.

(Even with smartphones I believe most systems allow you to enroll more than one device.)

In any case, this shift in terminology from "two-factor" to "multi-factor" authentication is one that I find personally interesting, partly because it happened behind my back. Whether or not there's a real difference between them, it feels like they mean somewhat different things, with "multi-factor" being broader than "two-factor", and the shift itself is a sign that how people think about the whole area has changed. We've moved out to a wider and more complicated universe of authentication, one with more choices and probably more confusion.

PS: It's possible that these special MFA smartphone apps also require you to authenticate yourself to them with a fingerprint or some other biometric method that the device supports. With a password added, this would theoretically give the system a three-factor authentication; knowing the password, having your phone, and being the person with the right fingerprints.

tech/TwoFactorToMultiFactorAuthShift written at 00:21:50

