Wandering Thoughts

2020-03-31

My home DSL link really is fast enough to make remote X acceptable

A few years ago I wrote about how my home internet link had gradually gotten fast enough that I could toy around with running even VMWare Workstation over remote X. At the time (and afterward) I thought that that was kind of nice in theory, but I never really tested how far this would go and how it would feel to significantly use remote X for real (even when I missed various aspects of remote X). Recently, world and local events have made for an extended period of working from home, which means that I now very much miss some aspects of my work X environment and have been strongly motivated to see if I can use them over remote X. Because I'm lazy, I've been doing all of this over basic SSH X forwarding (with compression turned on) instead of anything more advanced that would require more work on my part.
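
For the record, 'basic SSH X forwarding (with compression turned on)' means nothing more exotic than the standard OpenSSH options. A minimal sketch (the host name is a placeholder for my actual office machine):

ssh -C -X work-desktop.example.com

Or the equivalent in ~/.ssh/config, so that a plain 'ssh work-desktop' does it all:

Host work-desktop
    HostName work-desktop.example.com
    ForwardX11 yes
    Compression yes

(Some X programs want trusted X forwarding, which is 'ssh -Y' or 'ForwardX11Trusted yes' instead.)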

I was going to say that I started with things that are fundamentally text based, but that's not really true. Even X programs that render text are no longer 'text based' in the sense of sending only 'draw this character' requests to the server, because modern X fonts are rendered in the client and sent to the server as bitmaps. Given font anti-aliasing for LCD displays, they may not even be black and white bitmaps. Still, programs like exmh and sam only really do relatively simple graphics, and not necessarily very often. All of this runs well enough that I'm mostly happy to keep on using it instead of other options. Given past experiences I wasn't really surprised by this.

What I have recently been surprised with is running VMWare Workstation remotely from my office machine, because what I was doing (from remote) reached the point where I wanted to spin up a test virtual machine and we didn't build a virtual machine host designed for remote use before we left the office. Back several years ago in the original entry, I didn't try to seriously use VMWare Workstation to get real work done; now I have, and it works decently (certainly enough for me to be productive with it). It doesn't even seem to saturate my DSL link or suffer too much when other things are using the link.

Of course, running X remotely over a DSL link that's only medium fast doesn't measure up to running it over a 1G Ethernet network, much less the local machine. I can certainly feel the difference (mostly in latency and responsiveness). But it's much more usable than I might have expected, and I've had to change my work habits less than I feared.

(I'm not sure if using SSH's built-in compression is a good idea in general these days, but in a quick experiment it appears to drastically reduce the total data sent from the remote VMWare Workstation to my home machine.)

PS: There are more sophisticated ways of doing remote X than just 'ssh -X' that are said to perform better. If we keep on working remotely for long enough, I will probably wind up exploring some of them.

HomeInternetAcceptableX written at 21:45:26

2020-03-14

The two meanings of 'DNS over HTTPS' today

When I wrote about how sensible heuristics for when to use DNS over HTTPS can't work for us, in a comment Guus asked about us setting up a DNS over HTTPS service alongside our existing resolver. Depending on your perspective, this is either an obviously good question or one with an obvious answer, and the thing is, neither of those perspectives is wrong. In common usage today, 'DNS over HTTPS' has become ambiguous; depending on context it can mean one of two things.

The first thing it means is DNS over HTTPS the protocol. In this usage, you can set up your own DoH server, set things to use it, and so on. Whether this is a sensible action depends on your threat profile and what clients want to do. My general feeling is that right now it mostly doesn't make sense, because there should not normally be any untrusted snooping on your networks happening between your clients and your local DNS servers. If clients really want to use DoH you could set it up anyway, but it's not really adding much security.
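
If a client does want to use your DoH server, pointing it there is straightforward. As an illustration, Firefox can be aimed at a DoH server of your choice through its TRR preferences in about:config (a sketch; the URL is a placeholder for wherever your DoH service actually answers):

network.trr.uri    https://doh.example.org/dns-query
network.trr.mode   3

(Mode 3 is 'use DoH only'; mode 2 is 'try DoH first, then fall back to normal DNS'.)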

The second and more common thing it means is DNS over HTTPS as used in Firefox (and soon other browsers). This is not just DNS over HTTPS the protocol but DNS over HTTPS to specific public resolvers, with the choice of public resolvers out of the control of the people running the local network. This is the version of DNS over HTTPS that Firefox users in the US are getting now (if they accept Mozilla's offer) and that will presumably come to other parts of the world for at least Firefox (and other browsers are thinking about it too). You can't set up a DNS over HTTPS server that's useful for this version of DNS over HTTPS, because one security problem that Firefox's DNS over HTTPS environment is designed to deal with is that on the modern Internet, ISPs are one of your threats. If you use your ISP's resolver, it can log your DNS lookups and then use that information, so Firefox goes straight to trusted DoH servers and ignores yours.

As a practical matter, the 'DNS over HTTPS as used in Firefox' usage is the dominant one. I don't think very many people are setting up local DoH servers and I don't expect them to become popular until there's both a well supported protocol for automatically discovering or configuring them in local network clients and a solid benefit to having clients use DNS over HTTPS instead of plain internal DNS queries.

(Encrypted SNI may be one driver for this, since as currently implemented it requires using a DNS over HTTPS or DNS over TLS resolver. But it's not clear that organizations will care about ESNI for their own outgoing traffic.)

DNSOverHTTPSTwoMeanings written at 01:00:51

2020-03-12

TLS increasingly exists in three different worlds

I recently wrote about how browsers are probably running the TLS show now, and then realized that that is only somewhat true. In practice, I think that TLS now increasingly exists in at least three different worlds that are at least somewhat disconnected from each other, and what's true for one world may not be entirely true for the others.

The first world is web TLS, which is dominated by browsers. This is the familiar world of public HTTPS, with public Certificate Authorities, requirements for certificate transparency, and so on. The browsers increasingly are calling the shots here and they're pushing for things like short certificate lifetimes, aggressively moving away from old TLS versions, and so on. Due partly to the presence of Javascript in browsers, this world faces some unusual security problems; an attacker can plausibly cause someone else's client to make thousands or millions of connections to a server, for example, in an attempt to crack the session encryption and extract something useful from it.

The second is non-web public TLS, where TLS is used for protocols like IMAP, SMTP (with STARTTLS), and so on. This world still uses public CAs, but it has a lot more old clients and servers and is a lot slower to deprecate old TLS and SSL versions, move to shorter certificate lifetimes, and so on. At the same time it doesn't face some of the threats that web TLS does, as attackers have far less power to manipulate the behavior of victims in convenient ways. A victim IMAP client may reconnect repeatedly, but an attacker isn't likely to persuade it to use carefully controlled variations of the connection.

(Non-web public TLS is going to get dragged along on short certificate lifetimes by web TLS, though.)
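
If you want to see what this world actually negotiates with a particular server, openssl s_client will show you the protocol version, cipher, and certificate chain on offer (a sketch; the host names are placeholders):

openssl s_client -connect imap.example.com:993
openssl s_client -starttls smtp -connect smtp.example.com:25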

The third world is internal TLS, where TLS is used inside an organization or a service to encrypt connections and often to authenticate them (and sometimes it's used between organizations). Internal TLS frequently uses client certificates and usually doesn't use public CAs, and that's about all you can say about it in general; actual practices no doubt vary widely across people using it. The reason these practices can vary widely is that each separate use of internal TLS operates in a closed, captive environment where it doesn't really have to care what other people think.
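
To make 'usually doesn't use public CAs' concrete, an internal CA and a client certificate can be as little as a few openssl commands (a minimal sketch with placeholder names, ignoring key protection and sensible lifetimes):

openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt -days 3650 -subj '/CN=Internal CA'
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj '/CN=some-internal-service'
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365

Real setups generally grow tooling around this, but the point is that nothing here involves anyone outside the organization.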

There is overlap between these three worlds, as well as the differences that I sketched here. Everyone wants good connection security and for weak ciphers and protocol vulnerabilities to be weeded out. Web TLS and non-web public TLS both care a lot about Certificate Authorities being trustworthy, but web TLS has been driving the show on this. There are probably interests and positions shared only by non-web TLS and internal TLS, but I can't think of any right now.

TLSThreeWorlds written at 00:34:30

2020-02-22

Will common motherboards ever have very many NVMe drive slots?

Currently M.2 NVMe drives are the best drives you can get as far as performance goes, with no superior drive interconnect really on the horizon as far as I know. However, they have a practical drawback, which is that they're only available in a limited range of sizes (especially if you want cost effective sizes) and common garden variety desktop motherboards have only two or even just one M.2 slot. My current home machine has two M.2 NVMe slots; my office workstation has only one and I had to get a PCIe M.2 adapter to get a mirrored pair of NVMe drives into it. An interesting question, especially for me, is whether this is ever going to change and the number of NVMe drives you can connect to common desktop motherboards will rise toward the level that SATA currently has (where you get at least four SATA ports on anything you really want to use).

Unfortunately I suspect that motherboards probably won't raise the number of M.2 NVMe slots they offer, even if NVMe becomes more popular and completely cost competitive with (good) SATA SSDs. The lesser reason for this is that there are only so many PCIe lanes to go around, and decent M.2 NVMe slots need four lanes each. However, CPU vendors can more or less raise the PCIe lane count if they want to, and they already have higher counts on higher end CPUs. At this point I suspect that the PCIe lane count on desktop CPUs is mostly a matter of market segmentation, like Intel's approach to ECC memory.

The bigger issue for significantly more M.2 slots on motherboards is probably the sheer space that they consume in the currently common setup of horizontal mounting on the motherboard. The common M.2 NVMe form factor is M.2 2280, which is 22 millimeters (2.2 cm, almost an inch) wide and 80 millimeters (8 cm, just over 3 inches) long. Add in some extra length and a bit of extra width for the M.2 socket itself, and that's a reasonably decent chunk of motherboard space that you can't put any substantial electronics on (since they have to be short enough not to hit the M.2 NVMe, and they won't have much cooling underneath it). For scale, an x16 PCIe physical slot is apparently a bit over 80 mm long (and the PCIe card itself will generally extend further). You can get ATX motherboards with three M.2 slots (somewhat to my surprise), but I don't think there are any with four. The easiest way to fit four M.2 2280 slots onto a motherboard is probably with an x16 PCIe slot and an adapter card.

(People will sell you M.2 extender cables but I don't know if they're actually valid and proper under the PCIe specifications, even in theory, or how long they can be. In theory there's also U.2; in practice I doubt U.2 NVMe drives will make a comeback for various reasons. Any solution needs to be compatible with garden variety M.2 drives that can be sold in quantity and mostly mounted in normal motherboard M.2 slots.)

Whether this matters for most people is an open question (and an important one, since buyer demand will drive what desktop motherboard vendors do). SATA HDs are probably going to continue to be the bulk storage medium of choice, and people who don't mirror their storage are probably only willing to get so many M.2 NVMe drives. And M.2 NVMe drive capacity will probably keep going up, which drives down the need for a bunch of M.2 drives. On the optimistic side, we're already up to top end gaming focused desktop motherboards with three M.2 slots, so that's probably going to keep on being available and maybe become more common in less high-end motherboards.

MotherboardNVMeMultiSlots written at 00:03:34

2020-02-09

Code dependencies and operational dependencies

Drew DeVault recently wrote Dependencies and maintainers, which I will summarize as both suggesting that dependencies should be minimized and being a call to become involved in your dependencies. I have various thoughts on this general issue (cf), but perhaps I have an unusual perspective as someone who is primarily a system administrator instead of a developer. As part of that perspective, I think it's useful to think about two sorts of dependencies, what I will call code dependencies and operational dependencies.

Code dependencies are the Python modules, Go packages (or modules these days), Rust crates, C libraries, Node packages, or whatever in your language that you choose to use or that get pulled in as indirect dependencies. You can keep track of at least your direct code dependencies (you're setting them up, after all), and hopefully you can trace through them to find indirect ones. If there are too many indirect dependencies and levels of indirection, this may be a sign (Drew DeVault would likely suggest that you take it as a bad one).
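
Most language ecosystems will enumerate these dependencies for you if you ask. As a sketch, run in a project's own directory (which command applies depends on the language):

go list -m all      # every Go module required, direct and indirect
go mod graph        # the same information as an edge list
cargo tree          # the Rust crate dependency tree
npm ls              # the Node package tree (recent npm wants --all to show indirect packages)

None of these make your dependencies any smaller, of course; they just make them visible.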

Operational dependencies are everything that you need to operate and even build your program and your system. If you have a web site, for example, your web server is at least an operational dependency. Everyone has an operational dependency on their operating system's kernel, and often on many other operating system components. People using interpreted languages (Ruby, Python, Node, etc) have an operational dependency on the language interpreter; people compiling programs have an operational dependency on GCC, clang/LLVM, the Rust compiler, the Go compiler, and so on. Operational dependencies have internal code dependencies, making you transitively dependent on them too. If you operate a web server that does HTTPS, whether Apache or nginx, it probably has a code dependency on some SSL library, most likely OpenSSL, making OpenSSL an indirect operational dependency for you.
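
You can often see this sort of indirect operational dependency directly. As a sketch on a Linux machine (the path is an assumption; your system may put nginx elsewhere), ldd will report the SSL library that your nginx binary is dynamically linked against:

ldd /usr/sbin/nginx | grep -i -e ssl -e crypto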

Things can be both code dependencies and operational dependencies, because they're directly used by you and also used by other systems that you need. Some things are so commonly used and so much a part of the 'platform' that you'll likely consider them operational dependencies instead of code dependencies even if they're directly incorporated into your programs; the most significant example of this is your system's C library and, on Unix systems, the runtime dynamic linker.

(System administrators are often quite conscious of operational dependencies because much of our job is managing them. Other people can often assume that they are just there and work.)

CodeAndOperationalDependencies written at 02:43:01

2020-01-26

The real world is mutable (and consequences for system design)

Every so often I see people put together systems on the Internet that are designed to be immutable and permanent (most recently Go). I generally wince, and perhaps sigh to myself, because sooner or later there are going to be problems. The reality of life is that the real world is not immutable. I mean that at two levels. The first is that sometimes people make mistakes and publish things that they very strongly wish and need to change or retract. Pretending that they do not is ignoring reality. Beyond that, things in the real world are almost always mutable and removable because lawyers can show up on your doorstep with a court order to make them so, and the court generally doesn't care about what problems your choice of technology has created for you in complying. If the court says 'stop serving that', you had better do so (or have very good lawyers).

It's my view that designing systems without considering this creates two problems, one obvious and one not obvious. The obvious one is that on the day when the lawyers show up on your front door, you're going to have a problem; unless you enjoy the varied and generally unpleasant consequences of defying a court order, you're going to have to mutate your immutable thing (or perhaps shut it down entirely). If you're having to do this from a cold start, without any advance consideration of the issue, the result may be disruptive (and obviously shutting down entirely is disruptive, even if it's only temporary as you do a very hasty hack rewrite so that you can block certain things or whatever).

The subtle problem is that by creating an immutable system and then leaving it up to the courts to force you to mutate it, you've created a two-tier system. Your system actually supports deletions and perhaps modifications, but only for people who can afford expensive lawyers who can get those court orders that force you to comply. Everyone else is out of luck; for ordinary people, any mistakes they make are not fixable, unlike for the powerful.

(A related problem is that keeping your system as immutable as possible is also a privilege that's increasingly reserved for powerful operators of such services. Google can afford to pay expensive lawyers to object to proposed court orders calling for changes in its permanent proxy service; you probably can't.)

As a side note, there's also a moral dimension here, in that we know that people will make these mistakes and will do things that they shouldn't have, that they very much regret, and that sometimes expose them to serious consequences if not corrected (whether personal, professional, or for organizations). If people design a system without an escape hatch (other than what a court will force them to eventually provide), they're telling these people that their suffering is not important enough. Perhaps the designers want to say that. Perhaps they have strong enough reasons for it. But please don't pretend that there will never be bad consequences to real people from these design decisions.

PS: There's also the related but very relevant issue of abuse and malicious actors, leading to attacks such as the one that more or less took down the PGP Web of Trust. Immutability means that any such things that make it into the system past any defenses you have are a problem forever. And 'forever' can be a long time in Internet level systems.

RealWorldIsMutable written at 22:06:11

2020-01-19

Why a network connection becoming writable when it succeeds makes sense

When I talked about how Go deals with canceling network connection attempts, I mentioned that it's common for the underlying operating system to signal you that a TCP connection (or more generally a network connection) has been successfully made by letting it become writable. On the surface this sounds odd, and to some degree it is, but it also falls out of what the operating system knows about a network connection before and after it's made. Also, in practice there is a certain amount of history tied up in this particular interface.

If we start out thinking about being told about events, we can ask what events you would see when a TCP connection finishes the three way handshake and becomes established. The connection is now established (one event), and you can generally now send data to the remote end, but usually there's no data from the remote end to receive so you would not get an event for that. So we would expect a 'connection is established' event and a 'you can send data' event. If we want a more compact encoding of events, it's quite tempting to merge these two together into one event and say that a new TCP connection becoming writable is a sign that its three way handshake has now completed.

(And you certainly wouldn't expect to see a 'you can send data' event before the three way handshake finishes.)

The history is that a lot of the fundamental API of asynchronous network IO comes from BSD Unix and spread from there (even to non-Unix systems, for various reasons). BSD Unix did not use a more complex 'stream of events' API to communicate information from the kernel to your program; instead it used simple and easy to implement kernel APIs (because this was the early 1980s). The BSD Unix API was select(), which passes information back and forth using bitmaps; one bitmap for sending data, one bitmap for receiving data, and one bitmap for 'exceptions' (whatever they are). In this API, the simplest way for the kernel to tell programs that the three way handshake has finished is to set the relevant bit in the 'you can send data' bitmap. The kernel's got to set that bit anyway, and if it sets that bit and also sets a bit in the 'exceptions' bitmap it needs to do more work (and so will programs; in fact some of them will just rely on the writability signal, because it's simpler for them).
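
You can see this interface in action with a small sketch (here in Python, since it exposes select() fairly directly; the address and port are placeholders for something that is actually listening):

import errno, select, socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)
# a non-blocking connect() normally just reports 'in progress' and returns
err = s.connect_ex(("127.0.0.1", 22))
if err not in (0, errno.EINPROGRESS):
    raise OSError(err, "connect failed immediately")

# select() marks the socket writable once the three way handshake has
# finished (or failed outright); there is no separate 'connected' event.
_, writable, _ = select.select([], [s], [], 10)
if writable:
    # SO_ERROR is how you tell a successful connect from a failed one
    soerr = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    print("connected" if soerr == 0 else "connect failed: errno %d" % soerr)
else:
    print("still not connected after ten seconds")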

Once you're doing this for TCP connections, it generally makes sense for all connections regardless of type. There are likely to be very few stream connection types where it makes sense to signal that you can now send (more) data partway through the connection being established, and that's the only case where this use of signaling writability gets in the way.

ConnectingAndWritability written at 01:09:43

2020-01-06

How I move files between iOS devices and Unix machines (using SSH)

Suppose, not hypothetically, that you're a Unix person with some number of iOS devices, such as a phone and a tablet, and you wind up with files in one environment that you would like to move to or access from the other. On the iOS devices you may have photos and videos you want to move to Unix to deal with them with familiar tools, and on Unix you may have files that you edit or read or refer to and you'd like to do that on your portable devices too. There are a variety of ways of doing this, such as email and Nextcloud, but the way I've come around to is using SSH (specifically SFTP) through the Secure Shellfish iOS app.

Secure Shellfish's fundamental pitch is nicely covered by its tagline of 'SSH file transfers on iOS' and its slightly longer description of 'SSH and SFTP support in the iOS Files app', although the Files app is not the only way you can use it. Its narrow focus makes it pleasantly minimalistic and quite straightforward, and it works just as it says it does; it uses SFTP to let you transfer files between a Unix account (or anything that supports SFTP) and your iOS devices, and also to look at and modify in place Unix files from iOS, through Files-aware programs like Textastic. As far as (SSH) authentication goes, it supports both passwords and SSH keys (these days it will generate RSA keys and supports importing RSA, ECDSA, and ed25519 keys).

If the idea of theoretically allowing Secure Shellfish full access to your Unix account makes you a bit nervous, there are several things you can do. On machines that you fully control, you can set up a dedicated login that's used only for transferring things between your Unix machine and your iOS devices, so that they don't even have access to your regular account and its full set of files. Then, if you use SSH keys, you can set your .ssh/authorized_keys to force the Secure Shellfish key to always run the SFTP server instead of allowing access to an ordinary shell. For example:

command="/usr/libexec/openssh/sftp-server",restrict ssh-rsa [...]

(sftp-server has various command line flags that may be useful here for the cautious. As I found out the hard way, different systems have different paths to sftp-server, and you don't get good diagnostics from Secure Shellfish if you get it wrong. On at least some versions of OpenSSH, you can use the special command name 'internal-sftp' to force use of the built-in SFTP server, but then I don't think you can give it any command line flags.)
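
For instance, on OpenSSH versions that support them, sftp-server's -R (read only) and -d (starting directory) flags narrow things down further, and the internal-sftp form sidesteps the path problem entirely. Both of these are sketches to adapt (the directory is a placeholder), not drop-in lines:

command="/usr/libexec/openssh/sftp-server -R -d /home/you/ios-share",restrict ssh-rsa [...]
command="internal-sftp",restrict ssh-rsa [...]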

To avoid accidents, you can also configure an initial starting directory in Secure Shellfish itself and thereby restrict your normal view of the Unix account. This can also be convenient if you don't want to have to navigate through a hierarchy of directories to get to what you actually want; if you know you're only going to use a particular server configuration to work in some directory, you can just set that up in advance.

As I've found, there are two ways to transfer iOS things like photos to your Unix account with Secure Shellfish. In an iOS app such as Photos, you can either directly send what you want to transfer to Secure Shellfish in the strip of available apps (and then pick from there), or you can use 'Save to Files' and then pick Secure Shellfish and go from there. The advantage and drawback of directly picking Secure Shellfish from the app strip is that your file is transferred immediately and that you can't do anything more until the transfer finishes. If you 'save to files', your file is transferred somewhat asynchronously. As a result, if you want to immediately do something with your data on the Unix side and it's a large file, you probably want to use the app route; at least you can watch the upload progress and know immediately when it's done.

(Secure Shellfish has a free base version and a paid 'Pro' upgrade, but I honestly don't remember what's included in what. If it was free when I initially got it, I upgraded to the Pro version within a very short time because I wanted to support the author.)

PS: Secure Shellfish supports using jump (SSH) servers, but I haven't tested this and I suspect that it doesn't go well with restricting your Secure Shellfish SSH key to only doing SFTP.

IOSUnixFileTransfer written at 00:45:26

2019-12-18

PCIe slot bandwidth can change dynamically (and very rapidly)

When I added some NVMe drives to my office machine and started looking into its PCIe setup, I discovered that its Radeon graphics card seemed to be operating at 2.5 GT/s (PCIe 1.0) instead of 8 GT/s (PCIe 3.0). The last time around, I thought I had fixed this just by poking into the BIOS, but in a comment, Alex suggested that this was actually a power-saving measure and not necessarily done by the BIOS. I'll quote the comment in full because it summarizes things better than I can:

Your GPU was probably running at lower speeds as a power-saving measure. Lanes consume power, and higher speeds consume more power. The GPU driver is generally responsible for telling the card what speed (and lane width) to run at, but whether that works (or works well) with the Linux drivers is another question.

It turns out that Alex is right, and what I saw after going through the BIOS didn't quite mean what I thought it did.

To start with the summary, the PCIe bandwidth being used by my graphics card can vary very rapidly from 2.5 GT/s up to 8 GT/s and then back down again based on whether or not the graphics driver needs the card to do anything (or the aggregate Linux and X software stack as a whole, since I don't know where these decisions are being made). The most dramatic and interesting difference is between two apparently very similar ways of seeing if the Radeon's bandwidth is currently downgraded, either automatically scanning through lspci's output with 'lspci -vv | fgrep downgrade' or manually looking through it with 'lspci -vv | less'. When I used less, the Radeon normally showed up downgraded to 2.5 GT/s. When I used fgrep, other things before the Radeon showed up as downgraded but the Radeon never did; it was always at 8 GT/s.

(Some of those other things have been downgraded to 'x0' lanes, which I suspect means that they've been disabled as unused.)

What I think is happening here is that when I pipe lspci to less, lspci gets the Radeon's bandwidth before any output is written to the screen (less reads it all in a big gulp and then displays it), so at that point the graphics chain is inactive. When I use the fgrep pipe, some output is written to the screen before lspci gets to the Radeon, and so the graphics chain raises the Radeon's bandwidth in order to display things. What this suggests is that the graphics chain can and does vary the Radeon's PCIe bandwidth quite rapidly. Another interesting case is that running the venerable glxgears doesn't bring the PCIe bandwidth up from 2.5 GT/s, but running GpuTest's 'fur' test does (it goes to 8 GT/s, as you might expect).

(It turns out that nVidia's Linux drivers also do this.)
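
One way to watch this happen on a specific device, instead of grepping all of lspci's output, is to ask about just that card and compare its link capability with its current link status (a sketch; the PCI address is a placeholder for whatever 'lspci | grep VGA' reports for your card, and you may need enough privileges to see the PCIe capabilities):

lspci -vv -s 0a:00.0 | grep -E 'LnkCap:|LnkSta:'
watch -n1 "lspci -vv -s 0a:00.0 | grep LnkSta:"

(The LnkSta line is where lspci reports the current speed or width as '(downgraded)'.)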

Of course all of this may make seeing whether you're getting full PCIe bandwidth a little bit interesting. It's clearly not enough to just look at your system, even when it's moderately active (I have several X programs that update once a second); you really need to put it under some approximation of full load and then check. So far I've only seen this happen with graphics cards, but who knows what's next (NVMe drives could be one candidate to drop their bandwidth to save power and thus reduce heat).

PCIeVaryingBandwidth written at 00:38:31

2019-12-07

Some important things about how PCIe works out involve BIOS magic

I'll start with my remark on Mastodon:

I still don't know why my Radeon graphics card and the PCIe bridge it's behind dropped down from PCIe 3.0 all the way to PCIe 1.0 bandwidth, but going into the BIOS and wandering around appears to have magically fixed it, so I'll take that.

PCIe: this generation's SCSI.

When I added some NVMe drives to my office machine and ran into issues, I discovered that the Radeon graphics card on my office machine was operating at 2.5 GT/s instead of 8 GT/s, which is to say PCIe 1.0 data rates instead of PCIe 3.0 ones (which is what it should be operating at). At the end of the last installment I speculated that I had accidentally set something in the BIOS that told it to limit that PCIe slot to PCIe 1.0, because that's actually something you can do through BIOS settings (on some BIOSes). I went through the machine's BIOS today and found nothing that would explain this, and in fact it doesn't seem to have any real settings for PCIe slot bandwidth. However, when I rebooted the machine after searching through the BIOS, I discovered that my Radeon and the PCIe bridge it's behind were magically now at PCIe 3.0's 8 GT/s.

I already knew that PCIe device enumeration involved a bunch of actions and decisions by the BIOS. I believe that the BIOS is also deeply involved in deciding how many PCIe lanes are assigned to particular slots (although there are physical constraints there too). Now it's pretty clear that your BIOS also has its fingers in decisions about what PCIe transfer rate gets used. As far as I know, all of these decisions happen before your machine's operating system comes into the picture; it mostly has to accept whatever the BIOS set up, for good or bad. Modern BIOSes are large opaque black boxes of software, and like all such black boxes they can have both bugs and mysterious behavior.

(Even when their PCIe setup behavior isn't a bug and is in fact necessary, they don't explain themselves, either to you or to the operating system so that your OS can log problems and inefficiencies.)

How do you know that your system is operating in a good PCIe state instead of one where PCIe cards and onboard controllers are being limited for some reason? Well, you probably don't, not unless you go and look carefully (and understand a reasonable amount about PCIe). If you're lucky you may detect this through side effects, such as increased NVMe latency or lower than expected GPU performance (if you know what GPU performance to expect in your particular environment). Such is the nature of magic.

PCIeAndBIOSDecisions written at 01:26:03
