Wandering Thoughts

2023-03-31

Exploiting (or abusing) password fields for Multi-Factor Authentication

I've recently been looking into how people add Multi-Factor Authentication (MFA) to their OpenVPN systems, both using commercial solutions and home-grown ones. One of the things that makes this difficult is that I believe the OpenVPN authentication protocol is old-fashioned enough that it doesn't provide for multi-step interaction. Instead, clients can send either or both of a TLS client certificate and a username plus password pair to the server, and the server gets to decide whether that's acceptable. However, common OpenVPN server software allows you to plug in your own code to do the user and password authentication, and so it turns out that people have used this to add MFA.

The simpler approach is one taken by a particular commercial MFA vendor I'm not going to name. With them, your OpenVPN server has to require both client certificates and a username and password. Their plugin ignores the username (instead using the TLS client certificate's Common Name) and treats the 'password' as what you'd enter in response to their MFA prompt, which is either a one-time passcode or an MFA authentication method like 'push authenticate with my first registered phone'. Presumably their plugin then waits for the MFA authentication to be approved or denied, or to time out, before it returns a 'yes/no' answer to OpenVPN.

The more complicated approach I've seen is to embed the MFA answer as part of the 'password' that clients supply. People have a regular username and password, and also an additional MFA authentication with something like TOTP. To authenticate to your OpenVPN, the password you actually send concatenates the two together, for example '<regular password>:<TOTP code>'. The OpenVPN authentication plugin then cracks the password apart and verifies the two pieces separately.
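As a sketch of that second approach, here's roughly what the password-splitting step could look like in a shell script of the kind you'd hook into OpenVPN's 'auth-user-pass-verify'. Everything here is a made-up stand-in: the check_* functions, the credential values, and the names are purely illustrative, and a real plugin would call PAM, a password database, and a TOTP verifier instead.

```shell
# Stand-in verifiers; a real script would consult PAM / a TOTP library.
check_password() { [ "$2" = "hunter2" ]; }    # hypothetical, not a real check
check_totp()     { [ "$2" = "123456" ]; }     # hypothetical, not a real check

verify_combined() {
    user="$1"; combined="$2"
    password=${combined%:*}     # everything before the last ':'
    totp=${combined##*:}        # everything after the last ':'
    if check_password "$user" "$password" && check_totp "$user" "$totp"; then
        echo "auth ok for $user"        # a real script would 'exit 0' here
    else
        echo "auth failed for $user"    # a real script would 'exit 1' here
    fi
}

verify_combined alice "hunter2:123456"    # → auth ok for alice
verify_combined alice "wrong:123456"      # → auth failed for alice
```

Splitting on the *last* colon lets the regular password itself contain colons, which is one small design decision any such scheme has to make.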

What reading about all of this has made me realize is that any time the client sends a plain text password to the server, it can be exploited to do MFA. Generally you need the server to have some sort of pluggable authentication, but many servers have that, either explicitly or because they use the system authentication methods and those are built on PAM. Now, no one said that this is necessarily the best idea or the best way to implement MFA (and indeed the OpenVPN version comes with some caveats), but you aren't always dealing with an ideal situation. You may be stuck with either software or an entire protocol that just doesn't have the idea of relatively arbitrary back and forth challenges in the middle of authentication. If it sends plain text passwords, you can probably make it work if you try hard enough.

(In retrospect this seems obvious, but I guess I wasn't thinking perversely enough before I started looking into MFA for OpenVPN.)

sysadmin/ExploitingPasswordsForMFA written at 22:10:48; Add Comment

2023-03-30

Giving Gnome-Terminal some margins makes me happier with it

For reasons outside the scope of this entry, I've recently been giving Gnome-Terminal more of a try as my secondary terminal program, instead of urxvt (my primary terminal program remains xterm). Gnome-Terminal is a perfectly fine default terminal program and so most of this has gone perfectly well. But the more I used gnome-terminal, the more I found myself having problems with how by default it runs the text almost right up against the edge of the window (on the left) and the scrollbar (on the right). I found that gnome-terminal's lack of margins made it harder for me to read and scan text at either edge.

(This is an interesting effect because xterm also runs text up to about the same few pixels from the left and right edges, but somehow xterm comes off as more readable and I've never felt it was an issue. Some of it may be that xterm has a quite different rendering for its scrollbar, one that I think creates the impression of more margin than actually exists, and by default puts it on the left side instead of the right. And of course I usually have much more text up against the left side than the right.)

Fortunately it turns out that this is fixable. Gnome-terminal is a GTK application and modern GTK applications can be styled through a fearsome array of custom CSS (yes, the web thing, it's everywhere and for good reason). Courtesy of this AskUbuntu question and its answers, I discovered that all you need is ~/.config/gtk-3.0/gtk.css with a little bit in it:

VteTerminal,
TerminalScreen,
vte-terminal {
     padding: 4px 4px 4px 4px;
     -VteTerminal-inner-border: 4px 4px 4px 4px;
}

The 4px is experimentally determined; I started with the answer's 10px and narrowed it down until it felt about right (erring on the side of more rather than less space).

I've only tested this with gnome-terminal, so it's possible that it will make other VTE based terminal programs unhappy (although I tried xfce4-terminal briefly and it seemed okay). However, in gnome-terminal it makes me happy.

(In the way of the modern world, you need this gtk.css on every remote machine that you intend to run gnome-terminal on with X over SSH. Conveniently, in my case this is effectively none of them; we don't install gnome-terminal on our Ubuntu servers any more. We do install xterm.)

linux/GnomeTerminalBiggerMargins written at 23:24:39; Add Comment

2023-03-29

The case of the very wrong email Content-Transfer-Encoding

Over on the Fediverse, I shared a discovery from our mail logs:

In the 'that's not how you do it' category, spotted in our email reject logs today:

Content-Transfer-Encoding: amazonses.com

(We rejected it for having an absurdly long line, over 200,000 bytes, which appears to have been almost all of the message.)

The MIME Content-Transfer-Encoding header is supposed to tell you the encoding of the MIME part in question, including the implicit top level part of the email. Typical values are things like '7bit', '8bit', 'quoted-printable', or 'base64'. Needless to say, this email's C-T-E is complete garbage, and a picky email client would say that it couldn't decode the message because it doesn't understand the 'amazonses.com' encoding.

(I suspect that real clients treat this as an unset C-T-E and either assume they have text or try to guess among the options.)
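As a quick sketch of the kind of check involved, here's a validation of a C-T-E value against the encodings MIME (RFC 2045) actually defines, plus the 'x-' extension tokens it allows. The check_cte helper is made up for illustration; it isn't our actual reject logic.

```shell
# Hypothetical sketch: is a Content-Transfer-Encoding value one that
# RFC 2045 defines (or an 'x-' extension token)? Anything else, like
# 'amazonses.com', is garbage.
check_cte() {
    case "$(printf '%s' "$1" | tr 'A-Z' 'a-z')" in
        7bit|8bit|binary|quoted-printable|base64|x-*)
            echo "ok: $1" ;;
        *)
            echo "invalid C-T-E: $1" ;;
    esac
}

check_cte "quoted-printable"    # → ok: quoted-printable
check_cte "amazonses.com"       # → invalid C-T-E: amazonses.com
```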

All of this email appears to be spam, of course. And the message has other anomalies besides the absurdly long lines in the body. They seem to have a consistent envelope sender domain (that I'm not going to mention for reasons), but the headers follow an unusual pattern. If the email is sent to 'USER@<our-domain>', all of the samples I've checked have the following header setup:

From: [...] <support@USER.net>
Sender:USER@<our-domain>
Message-ID: <...javamail.tomcat@pdr8-services-05v.prod..org>
Content-Type: text/html
Content-Transfer-Encoding: amazonses.com

That 'Sender:' header is malformed, and of course you aren't supposed to have a constant Message-ID. Some of the addresses are obviously made up (and forged, in the case of the sender address); many of the From: domains probably don't even exist. While the envelope sender domain stays constant, the local parts do vary. The sending IP has also been consistent throughout today.

(At the moment the MX for the envelope sender domain is outlook.com, and they reject a random claimed envelope sender address. Of course this spammer could be forging the domain of an innocent bystander, which is why I've decided not to mention it.)

The obvious speculation about where the gigantic line comes from is that the messages have extremely bloated HTML of some sort and it's all been crammed onto one line. I don't know what you do to get 200,000 to 340,000 characters of HTML into an email message; maybe they're including images as inlined 'data:' URLs.

spam/ContentTransferEncodingVeryBad written at 22:23:40; Add Comment

2023-03-28

An interesting yet ordinary consequence of ZFS using the ZIL

On the Fediverse, Alan Coopersmith recently shared this:

@bsmaalders @cks writing a temp file and renaming it also avoids the failure-to-truncate issues found in screenshot cropping tools recently (#aCropalypse), but as some folks at work recently discovered, you need to be sure to fsync() before the rename, or a failure at the wrong moment can leave you with a zero-length file instead of the old one as the directory metadata can get written before the file contents data on ZFS.

On the one hand, this is perfectly ordinary behavior for a modern filesystem; often renames are synchronous and durable, but if you create a file, write it, and then rename it to something else, you haven't ensured that the data you wrote is on disk, just that the renaming is. On the other hand, as someone who's somewhat immersed in ZFS this initially felt surprising to me, because ZFS is one of the rare filesystems that enforces a strict temporal order on all IO operations in its core IO model of ZFS transaction groups.

How this works is that everything that happens in a ZFS filesystem goes into a transaction group (TXG). At any given time there's only one open TXG, and TXGs commit in order; if B is issued after A, either it's in the same TXG as A and the two happen together, or it's in a TXG after A and so A has already happened. With transaction groups, you can never have B happen but A not happen. In the TXG mental model of ZFS IO, this data loss is impossible, since the rename happened after the data write.

However, all of this strict TXG ordering goes out the window once you introduce the ZFS Intent Log (ZIL), because the ZIL's entire purpose is to persist selected operations to disk before they're committed as part of a transaction group. Renames and file creations always go in the ZIL (along with various other metadata operations), but file data only goes in the ZIL if you fsync() it (this is a slight simplification, and file data isn't necessarily directly in the ZIL).

So once the ZIL was in my mental model I could understand what had happened. In effect the presence of the ZIL had changed ZFS from a filesystem with very strong data ordering properties to one with more ordinary ones, and in such a more ordinary filesystem you do need to fsync() your newly written file data to make it durable.
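The safe-update pattern the quote describes can be sketched from the shell. 'sync FILE' (GNU coreutils, since 8.24) fsyncs just that one file; a C program would call fsync() on the file descriptor directly. The filename here is purely illustrative.

```shell
# Write-temp, fsync, then rename-over-the-original: the fsync before the
# rename is what prevents a crash from leaving you a zero-length file.
target="settings.conf"              # hypothetical file being replaced
tmpfile="$target.tmp.$$"
printf 'new contents\n' > "$tmpfile"
sync "$tmpfile"                     # force the data to disk *before* the rename
mv -f "$tmpfile" "$target"          # rename() is atomic within one filesystem
```

(For full durability of the rename itself you'd also fsync the containing directory, which shell tools don't give you a direct handle on; this is the part the ZIL normally takes care of for you on ZFS.)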

(And under normal circumstances ZFS always has the ZIL, so I was engaging in a bit of skewed system programmer thinking.)

solaris/ZFSNaturalZILConsequence written at 22:48:43; Add Comment

2023-03-27

Moving from 'master' to 'main' in Git with local changes

One of the things that various open source Git repositories are doing is changing their main branch from being called 'master' to being called 'main'. As a consumer of their repository, this is generally an easy switch for me to deal with; some day, I will do a 'git pull', get a report that there's a new 'main' branch but there's no upstream 'master', and then I'll do 'git checkout main' and I'm all good. However, with some repositories I have my own local changes, which I handle through Git rebasing. Recently I had to go through a 'master' to 'main' switch on such a repository, so I'm writing down what I did for later use.

The short version is:

git checkout main
git cherry-pick origin/master..master

(This is similar to something I did before with Darktable.)

In general I could have done this with either 'git rebase' or 'git cherry-pick', and in theory according to my old take on rebasing versus cherry-picking the 'proper' answer might have been a rebase, since I was moving my local commits onto the new 'main' branch. However it was clear to me that I would probably have wanted to use the full three-argument form of 'git rebase', which is at least somewhat tricky to understand and to be sure I was doing right. Cherry-picking was much simpler; I could easily reason about what it was doing, and it left my old 'master' state alone just in case.

(Switching from rebasing to cherry-picking is an experience I've had before.)

Now that I've written this I've realized that there was probably a third way, because at a mechanical level branches in git don't entirely exist. The upstream 'master' and 'main' branches cover the same commits (up until possibly the 'main' branch adds some on top). The only thing that says my local changes are on 'master' instead of 'main' is a branch head. In theory, what I could have done was simply relabel my current state as being on 'main' instead of 'master', and then possibly do a 'git pull' to get current with the new 'main'.

(In the case of this particular repository, it was only a renaming of the main branch; upstream, both the old 'master' and the new 'main' are on the same commit.)

Since I just tried it on a copy of my local repository in question, the commands to do this are:

git branch --force main master
git checkout main
# get up to date:
git pull

I believe that you only need the pull if the upstream main is ahead of the old upstream master.

This feels more magical than the rebase or cherry-pick version, so I'm probably not likely to use it in the future unless there's some oddity about the situation. One potential reason would be if I've published my repository, I don't expect upstream development (just the main branch being renamed), and other people might have changes on top of my changes. At that point, a cherry-pick (or a rebase) would change the commit hashes of my changes, while simply sticking the 'main' branch label on to them doesn't, so people who have changes on top of my changes might have an easier time.
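The "it's just a label" claim is easy to demonstrate in a scratch repository: after 'git branch --force main master', the commit hash is unchanged. This is a self-contained illustration, not the repository from the entry; 'git init -b' needs git 2.28 or later, and the names and commit message are made up.

```shell
# Demonstrate that relabeling a branch doesn't change any commit hashes.
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "a local change"
before=$(git rev-parse master)
git branch --force main master        # point 'main' at the same commit
git checkout -q main
[ "$(git rev-parse main)" = "$before" ] && echo "same commit, new label"
```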

programming/GitMasterToMainWithLocalChanges written at 22:07:55; Add Comment

2023-03-26

My pragmatic shift from PS/2 keyboards and mice to USB ones

Today, I was reminded that at one point I had strong feelings on the issue of PS/2 versus USB for keyboards and mice, where I didn't like USB keyboards and my preferred mouse (or variety of them) was also a PS/2 mouse. However, these days I am entirely USB based for both keyboard and mouse and I don't particularly want to go back. What got me here is an assortment of issues, including the relentless march of time and 'progress' in the computing sense. Or, the short version, PS/2 is a de facto obsolete connector format.

Because PS/2 is de facto obsolete, increasingly many motherboards have one or zero PS/2 connectors; insisting on two or even one was clearly constraining my choices even back in 2015. I could have tried to keep using my 2015 favorite keyboard and mouse through PS/2 to USB converters, but I ran into issues with them that made me start to question the wisdom of that. In addition, new mice and keyboards were mostly or entirely USB (although there are probably mechanical keyboards that are PS/2 based). When I made my first foray into mechanical keyboards, it was with a USB mechanical keyboard, and that's continued to my current one. I also later changed my mouse to one that I'm now very fond of, and that again is USB based.

(This shift had happened by the time I put together my current work and home desktops; I didn't bother to specifically look for PS/2 ports on either of their motherboards.)

My pragmatic results are that USB has worked reliably for me here, more or less as expected; I don't do operating system development or, generally, shuffle USB things around or do weird USB stuff that could cause heartburn to Linux's USB stack. In addition, moving to USB has allowed me to switch to a keyboard and a mouse that I've come to believe are clearly nicer than my old keyboard and mouse. However fond I was of the BTC-5100C and its small size, I'm pretty sure that it had worse keyboard feel than my current mechanical keyboard. And the old plain three-button mice I used to use are clearly inferior to my current mouse.

(And the switch made selecting desktop motherboards much easier, since I no longer had to care about PS/2 port(s).)

USB is still more complex than PS/2 and is subject to fun issues from time to time. I mitigate some of these issues by connecting my keyboard directly to a desktop USB port, instead of going through a hub (and on my work machine, I think my mouse may also be directly connected).

Over time, the same shift has happened on our servers. Old servers used to have a PS/2 port, and we tended to use a PS/2 keyboard on them. Then we started getting some servers that were USB only, so we switched to USB keyboards, both in the machine room and lying around as spares. Now I believe all of our servers are USB only, with no PS/2 left, and certainly I don't know where any remaining PS/2 keyboards we have would be. We may have thrown them out by now.

(Okay, I kept my PS/2 BTC-5100C keyboards and mice, so we still technically have some at work. I even have the PS/2 to USB converters for them, somewhere. If I really wanted to I could connect one up for a side by side comparison, although there's no real point; I'm not going back.)

All of this is unsurprising. Shifts in connector and interface technology have happened before over my time with computers. Disks have moved from SCSI to SATA (or IDE to SATA in some environments) and now toward NVMe; *-ROM drives moved from whatever it used to be to SATA and are now basically obsolete; AGP and PCI gave way to PCIe (with some digressions along the way). Keyboards and mice are different only in that we directly touch them and so I and others have strong opinions about what ones we want to use (and as a system administrator I get to reflexively worry about 'will it work even when there are problems').

tech/PS2ToUSBPragmaticJourney written at 22:13:23; Add Comment

2023-03-25

Apache 2.4's event MPM can require more workers than you'd expect

When we upgraded from Ubuntu 18.04 to Ubuntu 22.04, we moved away from the prefork MPM to the event MPM. A significant reason for this shift is that our primary public web server has wound up with a high traffic level from people downloading things, often relatively large datasets (for example). My intuition was that much of the traffic was from low-rate connections that were mostly idle on the server as they waited for slow remote networks. The event MPM is supposed to have various features to deal with this sort of usage (as well as connections idle in the HTTP 'keep alive' state, waiting to see if there's more traffic).

Soon after we changed over we found that we had to raise various event MPM limits, and since then we've raised them twice more. This includes the limit on the number of workers, Apache's MaxRequestWorkers. Our Apache metrics say that when our MaxRequestWorkers setting was 1,000, we managed to hit that limit with busy workers. We're now up to 2,000 workers on that web server, which on the one hand feels absurd to me but on the other hand, 1,000 clearly wasn't enough.
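For a sense of scale, the relevant knobs look something like the following. This is an illustrative mpm_event fragment, not our actual configuration; the specific values are examples only. The constraint to keep in mind is that ServerLimit times ThreadsPerChild must cover MaxRequestWorkers.

```apache
# Illustrative event MPM sizing, not a recommendation.
<IfModule mpm_event_module>
    ServerLimit              80
    ThreadLimit              64
    ThreadsPerChild          25
    MaxRequestWorkers      2000   # must be <= ServerLimit * ThreadsPerChild
    MaxConnectionsPerChild    0
</IfModule>
```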

One possible reason for this is that I may have misunderstood how frequently connections are idle or, to quote the event MPM documentation, "where the only remaining thing to do is send the data to the client". I had assumed (without testing) that once a connection was simply writing out data from a file to the client, it fell into this state, but possibly this is only for when Apache is buffering the last remaining data itself. Since the popular requests are multi-megabyte files, they'd spend most of their transfer with Apache still reading from the files. Certainly our captured metrics suggest that we don't see very many connections that Apache's status module reports as asynchronous connections that are writing things.

For our web server's current usage, these settings are okay. But they're unfortunately dangerous, because we allow people to run CGIs on this server, and the machine is unlikely to do well if we have even 1,000 CGIs running at the same time. In practice not many CGIs get run these days, so we're likely going to get away with it. Still, it makes me nervous and I wish we had a better solution.

(If it does become a problem I can think of some options, although they're generally terrible hacks.)

web/ApacheEventMPMManyWorkers written at 21:49:43; Add Comment

2023-03-24

Key rotation is not the same as key revocation (or invalidation)

As you may have heard, Github changed its RSA SSH host key today, because the private key had been exposed (apparently briefly) in a public Github repository. As a result of this, a lot of people got scary looking SSH warnings. One reaction people had to this was to note that Github could have avoided these warnings if it had used OpenSSH host key rotation to provide a second RSA host key in advance to prepare for the rotation. However, there is one little issue with this, which I alluded to on the Fediverse:

One thing Github is doing today is making people extremely aware that the prior Github RSA key is now invalid. This is a good thing with the key being presumed compromised, and one I don't think you can get with ordinary OpenSSH key rotation.

(I first saw the key invalidation issue pointed out in someone's Fediverse post, which I can't find now.)

This is something worth repeating: key rotation doesn't give you key revocation, and the two are different things. Key rotation gets people to accept and use a new key; key revocation gets them to not accept the old one. Of course if you revoke the current key you generally want people to rotate into using a new one, but you can want people to rotate into a new key without any particular revocation of the old one.

Broadly, key rotation by itself is a precautionary measure (much like periodic password changes) or a way to get people to upgrade to a better key (for example, to move from a 1024 or 2048 bit RSA key to a bigger one, or to switch key types). Key rotation doesn't actively force people to stop accepting the old key (although if the old key has an embedded expiry, it may fall out of validity on its own), it just enables them to also accept the new one so some day you can switch to only using the new one. If your key has actually been compromised, passively switching away from it isn't sufficient; you need to get people to actively stop accepting and using it. You have to assume that your old key is in the hands of an attacker who can still use it, even if you don't, which lets the attacker target anyone who'll still accept the old key.

What Github has done isn't actual revocation (as Ewen McNeill noted); numbed by one alert, people could be coaxed to accept another alert and go back to the old key. Or an attacker could target people who haven't hit this yet (or haven't updated their keys) and feed them the old key, which they'd accept without warnings. But by making this a noisy event, Github has probably come as close to actual SSH key revocation as SSH allows.
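In practice, the closest a client-side user gets to real revocation is deleting the old key from their own known_hosts, which for the real case is just 'ssh-keygen -R github.com'. Here's that operation demonstrated against a scratch known_hosts file with a stand-in host key, so nothing touches your actual ~/.ssh:

```shell
# Build a scratch known_hosts with a stand-in 'github.com' host key,
# then remove that host's entry the way a user would after the incident.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$dir/hostkey" -N ''         # stand-in host key
printf 'github.com %s\n' "$(cat "$dir/hostkey.pub")" > "$dir/known_hosts"
ssh-keygen -R github.com -f "$dir/known_hosts" > /dev/null 2>&1
grep -q '^github.com ' "$dir/known_hosts" || echo "old github.com key removed"
```

Of course this only "revokes" the key for users who actually run it, which is exactly the gap between this and real revocation.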

(That SSH doesn't have anything better than this for key revocation of ordinary host keys is not really its fault. Github is using SSH in a situation that is really a better fit for the server authentication properties of public web TLS.)

PS: If an attacker can use the Github situation to get people to accept a second 'remote host identification has changed' key change for Github, they don't actually need Github's old private key; any new key will do.

tech/KeyRotationVersusKeyRevocation written at 22:49:29; Add Comment

2023-03-23

SSD block discard in practice on Linux systems

I'll put the summary up front. If you have SSD based systems installed with a reasonably modern Linux, it's pretty likely that they are quietly automatically discarding blocks from your SSDs on a regular basis. This is probably true even if you use software RAID mirrors (despite the potential problem RAID has with discarding blocks).

To start with, you can see if your SSDs are capable of discarding blocks with 'lsblk -dD'. If block discard is possible, it will report something like:

; lsblk -dD
NAME    DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda            0      512B       2G         0
sdb            0      512B       2G         0
sr0            0        0B       0B         0
zram0          0        4K       2T         0
nvme0n1        0      512B       2T         0
nvme1n1        0      512B       2T         0

But what about your software RAID arrays? You can check those too:

; lsblk -dD /dev/md*
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
md20        0      512B       2T         0
md25        0      512B       2G         0
md26        0      512B       2G         0
md31        0      512B       2T         0

If you guessed that md20 and md31 are on the NVMe disks and md25 and md26 are on the SATA SSDs, you're correct. All of these are mirrors.

On typical modern Linux systems, the actual ongoing trimming is done by fstrim, which is run from 'fstrim.service', which is triggered by 'fstrim.timer' on a regular basis; see 'systemctl list-timers' to see if it's enabled on your system. Typical setups have fstrim logging what it did into the systemd journal, so you can see what it did with 'journalctl -u fstrim.service' (possibly with -r to see the most recent runs first). Both Fedora and Ubuntu seem to enable fstrim by default; my Fedora desktops and our 20.04 and 22.04 Ubuntu servers all have it on.

Modern Linux kernels expose IO statistics about discards that have happened on each device (since the system was last rebooted). These are visible in /proc/diskstats, and are covered in Documentation/admin-guide/iostats.rst. Because these IO stats have been in diskstats for a while, things that parse and extract information from diskstats may also report them. In particular, this information is reported by the Prometheus host agent and can be used in a suitable Prometheus setup to see how much discarding your various devices are doing and have been doing (including for software RAID devices).
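You can also read these counters directly. Per iostats.rst, on 4.18+ kernels each /proc/diskstats line grows four discard fields (fields 15 through 18: discards completed, discards merged, sectors discarded, and milliseconds spent discarding); these are the same counters the Prometheus host agent exports. A quick sketch:

```shell
# Print per-device discard activity from /proc/diskstats; the NF guard
# skips the shorter lines that older kernels produce.
awk 'NF >= 18 {
    printf "%-10s %10s discards %14s sectors discarded\n", $3, $15, $17
}' /proc/diskstats
```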

Not all filesystems support the block discarding related features that fstrim needs, although ext4 and btrfs both do (for btrfs, see their Trim/discard page). In particular, ZFS on Linux doesn't support them, and so the regular fstrim.timer won't TRIM your ZFS pools. Instead, there are various options for doing this and if you want you can do so more cautiously than fstrim normally lets you. Looking at IO statistics for discarding can confirm what filesystems do and don't support this, especially since discard information is available for partitions.

Knowing that our SSDs have been TRIM'd for some time (probably years) without any visible explosions makes me somewhat more confident about using some sort of ZFS TRIM'ing on my desktops (our servers don't need it right now for reasons outside the scope of this entry). I'm still not fully confident for ZFS because while the SSDs and regular filesystems may be well tested for TRIM, I'm not sure how much production use ZFS TRIM has had.

(I discovered this quiet, problem free TRIM'ing yesterday and then did some further investigation, which led to discovering metrics and so on.)

linux/LinuxBlockDiscardInPractice written at 23:06:18; Add Comment

2023-03-22

The problem RAID faces with discarding blocks on SSDs

One of the things that's good for the performance of modern SSDs is explicitly discarding unused blocks so the SSD can erase flash space in advance. My impression is that modern SSDs support this fairly well these days and people consider it relatively trustworthy, and modern filesystems can discard unused blocks periodically (Linux has fstrim, which is sometimes enabled by default). However, in some environments there's a little fly in the ointment, and that's RAID (whether software or 'hardware').

The issue facing RAID is that in a RAID environment (other than RAID-0), by default there's some relationship between the contents of sector X on one disk and sector X on another disk. In RAID-1 the two sectors are supposed to be identical; in other RAID levels the sectors (along with sectors on other disks) are supposed to have one or more correct checksums. If you TRIM the same sector on two or more SSDs, the basic version of block discard support doesn't promise to give you any particular data, which means that the relationship between the data on different disks is now potentially gone.

(Modern SSDs support 'Deterministic Read After TRIM' (DRAT), but this doesn't promise to return the same data on two different drives, you might get read errors instead, and it doesn't deal with RAID-N checksums.)

Some or perhaps many modern SSDs support 'Deterministic read ZEROs after TRIM' (variously called DZAT, RZAT, or DRZAT). A RAID-1 mirror on SSDs with reliable DZAT can TRIM sector X on all mirrors and be confident that its expected relationship between sectors on disks still holds. A RAID-N parity system might have more troubles here, but it can at least only have to (re)write the parity blocks for an all-zero set of data blocks; the data blocks themselves could be left TRIM'd.

(Probably a RAID-N system could also do this for SSDs supporting DRAT; it would TRIM the data and parity blocks, then re-read the data blocks, calculate the parity for whatever deterministic values it reads, and write the parity out.)

The other option I can think of is for the RAID system to keep track of what block ranges have been TRIM'd and so don't have consistent contents on the actual disks. Some higher end storage systems already support thin provisioning, which requires them to keep track of what user-visible blocks are valid; it's straightforward to use this for SSD block discarding as well. Otherwise the RAID system will require some sort of data structure to keep track of this, which will probably be new.

(Perhaps RAID systems have come up with other clever solutions to this problem.)

tech/RAIDSSDBlockDiscardProblem written at 22:27:23; Add Comment
