Wandering Thoughts

2021-10-24

Companies and their stewardship of open source projects

One of the articles of the time interval is Dustin Moris Gorski's Can we trust Microsoft with Open Source?, written in the wake of what I will just describe as Microsoft shenanigans around .NET. In the wake of reading that and having some general thoughts, I tweeted:

Can we trust Microsoft with open source? No, of course not. Nor can we trust Google, Apple, or Facebook. Any appearance of open source friendliness is tactical; it's not and has never been a deep seated cultural value.

(Applications to Github are left for Halloween scares.)

All of these companies (and others) have behaved in less than desirable ways around open source projects that they have either founded, inherited, or become significant contributors to (whether that contribution is developer resources, money, or what have you). And these are the good companies, the ones where the issue is even worth talking about.

(Oracle closing off OpenSolaris is very well known, but no one expects anything from Oracle except rapacious pricing.)

As a good first approximation, we cannot expect any company to be a genuinely good steward of open source projects. Unless open source is extremely deep in the company's culture, their support of open source is not a core imperative; it's a cold-blooded business decision. This doesn't necessarily mean that they are exploiting open source, but it does mean that their support for their open source projects will only continue as long as they consider it not harmful and not too expensive for the benefits that the project and being seen as good stewards bring them. As we saw with .NET, when this is no longer true, things happen.

(Microsoft backed down this time but that was because their calculus of costs and benefits changed, not because they had a change of heart. The future course of Microsoft's stewardship of .NET is now clear, if it wasn't already.)

I do think that large companies can make what they feel is a good faith decision today to be a good steward of some open source project. Companies are made up of people, and people can have good intentions and act on them. But companies also respond to incentives (and indeed are forced to), and over the long run those incentives point in the wrong direction. By default, a company is a remorseless machine that will crush any significant obstacle in its path.

CompaniesAndOSStewardship written at 22:58:17

2021-10-19

VCS history versus large open source development

I recently read Fossil's Rebase Considered Harmful (via), which is another rerun of the great rebase versus everything else debate. This time around, one of the things that occurred to me is that rebasing and an array of similar things allow maintainers of large, public open source repositories to draw a clean line between how people develop changes in private and what appears in the immutable public history of the project. Any open source project can benefit from clean public history, partly because clean history makes it easy to use bisection to locate bugs, but a large project especially benefits because it has so many contributors of varying skill levels and practices.

(In addition, consumers of public open source repositories often already see a linear view of the project's code history.)

Another aspect of using rebasing and other things that erase history (such as emailed patch series) is that they free people to develop changes in whatever style of VCS usage they find most comfortable and useful. You can set your editor to make a commit every time you save a file, and no one else has to care in the way they very much would if you proposed to merge the entire sequence intact into a large, public open source repository. The more contributors you have (and the more disparate they are), the more potentially useful this is.

Of course, there's a continuum, both between projects and in general. It's undeniably sometimes useful to know how a change was developed over time, for various reasons. It can also be useful to know how a change has flowed through various public versions of the code. The Linux kernel famously has a whole collection of trees that changes can wind up in before they get pulled into the mainline, and when this is done the changes often continue to carry the history of the trees they passed through. Presumably this is useful to Linus Torvalds and other kernel developers.

One way to put this is that as an open source project grows larger and larger, I think that it makes less and less sense to try to represent almost everything that happens to the project in its VCS history. VCS history is only one way to capture and handle the entire history of the project; using it for everything has the same sort of broad problems that using any single thing for everything has. Perhaps the larger your project is, the more you should be explicitly asking what your VCS history is for and how you want it to be used (and to be useful).

VCSHistoryVsLargeOpenSource written at 23:24:48

2021-10-13

Web browsers drive what Certificate Authority root certificates are accepted

One reaction to my entry on how Certificate Transparency logs let us assess CAs is to note that this only applies for TLS certificates that are sent to CT logs. As far as I can tell, the CA/Browser Forum baseline requirements (version 1.8.0 right now) don't seem to require that a CA do this, so it's not absolutely mandatory. It's only required if you want browsers to accept your TLS certificates, or if a CA promised to do this in their documentation of their practices.

(Not following their own published rules is what got Let's Encrypt in trouble when they issued TLS certificates that were valid for one second longer than expected. The TLS certificates were otherwise in conformance with the Baseline Requirements, and part of LE's fix to the issue was to change what they said about their practices.)

Thus in theory you could have a non-browser Certificate Authority that issued TLS certificates without logging them to CT systems. One perfectly valid question is whether we care about such a CA for TLS certificate usage questions, since it's clearly a relatively niche usage (and by definition, browsers changing what they accept doesn't affect its certificates). But another practical question is how its root certificates would wind up widely available in trust stores. The reality of today is that web browsers are the dominant source of CA root certificate trust stores, and they run their root stores primarily for themselves (and for web TLS in general).

There are four major web browsers left: Chrome, Safari, Microsoft Edge, and Firefox. The root certificate trust stores on macOS, iOS, and Windows are almost certainly strongly influenced by their respective browsers; it seems unlikely that either Microsoft or Apple would accept a new CA root certificate into them if it wasn't intended for web usage and wasn't going to routinely log its certificates to CT logs. Almost all Linux systems use Mozilla's CA roots (see for example Fedora) as their trust stores, and it seems relatively unlikely that Mozilla would accept a non-web TLS CA into that store. This leaves Android, which I believe basically uses Chrome's certificate store these days; since Chrome is a big driver of Certificate Transparency, I suspect Google wouldn't be any more enthused about a non-CT Certificate Authority.
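(If you want to see who your own system currently trusts, the list is easy to inspect. The following is a minimal Python sketch; on most Linux systems what it prints is Mozilla's root list as repackaged by the distribution. It assumes a normal system trust store and uses only the standard library.)

    import ssl

    # A default SSL context loads the system's trusted CA certificates.
    ctx = ssl.create_default_context()
    roots = ctx.get_ca_certs()
    print(len(roots), "trusted CA root certificates")

    # Show a few of them; 'subject' is a tuple of relative distinguished
    # names, each of which is a tuple of (attribute, value) pairs.
    for cert in roots[:5]:
        subject = dict(rdn[0] for rdn in cert["subject"])
        print(" ", subject.get("organizationName", "?"), "/",
              subject.get("commonName", "?"))

(One caveat: on systems where the trust store is a hashed directory of certificates rather than a single bundle file, get_ca_certs() may not report certificates that haven't been used yet.)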

The reality of life is that maintaining a good, well curated list of CA root certificates is a lot of work and doing it well requires you to have heft and influence. Mozilla (the least influential of the remaining four) can still get Certificate Authorities to pick up the phone and admit to problems; Red Hat, Debian, or FreeBSD might be another matter. In practice, this implies that maintaining your own root CA list means either rejecting some CAs that (eg) Mozilla considers still acceptable or accepting CAs that can't satisfy Mozilla or aren't interested in doing so (and hoping that they're still trustworthy).

As a corollary, I don't think there are very many organizations that could effectively create and maintain an independent CA root trust store. I believe that to do this well, you must be able to threaten CAs with being cut off from something you are the gatekeeper for, and there are very few organizations that are big enough gatekeepers for TLS to make this an effective threat. Browsers are the largest gatekeepers (that's why they're driving TLS now), and so I don't believe it's an accident that they've wound up maintaining effective CA root trust stores.

TLSBrowsersDriveRootStores written at 22:43:32

2021-10-12

Reasons to limit your stack size even in non-threaded environments

One reaction to learning that 4BSD is where Unix started to have a stack size limit is to ask why you would bother with a stack size limit at all in an environment without threads (where a process will thus only ever have one stack). There are a number of reasons why operating systems have generally done this, and they are probably why it started in Unix with the 4BSD line, which ran on 32-bit VAX systems instead of the 16-bit PDP-11s that V7 did.

In a modern operating system with the ability to map things into your process's virtual memory, one reason to limit the size of the process's main stack is to create room for these mappings in your virtual memory address space. Potential future mappings have to go somewhere and that means address space has to be left open for them, which requires limiting the address space that's reserved for the stack (even if it's a very large limit, as it could be on 64-bit systems). Since it's more common to have large or gigantic mappings than it is to have a large or gigantic stack, leaving most of the space for mappings makes sense (by default, at least).

(In an environment with threads, thread stacks take up some of this virtual address space, and similarly need their own limits.)

But 4BSD was an operating system without threads that had no mmap() to map memory into your process's address space. All there was in your process's memory address space that could grow was the heap and the stack (and with only two things, they could simply be allowed to grow toward each other until they met). Yet 4BSD still found it useful to add support for limiting the stack size.

The simple reason to limit stack size is that otherwise, a program with an accidental infinite recursion (or in general a huge stack space usage) can easily exhaust all of your RAM. Stack space is special for two reasons. First, it's easy to accidentally use a lot of it through means like deep recursion. Second, when you do use stack space, it's almost always written to (even if just for function return addresses and saved registers), so it has to really be there (either in RAM or in swap space). The combination makes recursion a great way to allocate and dirty a lot of RAM quite fast.

When the virtual address space that's theoretically available for a single process's stack is even close to the amount of RAM in your entire machine, let alone greater than it (as it was for 32-bit machines for a long time), it's suddenly quite sensible to limit stack space usage. Otherwise you're one easy accident away from being entirely out of RAM.
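(The stack size limit survives to this day as an ordinary Unix resource limit that a process can inspect and lower for itself. Here is a minimal Python sketch of doing that on a Unix system; the 8 MiB figure is purely illustrative, not a recommendation.)

    import resource

    # RLIMIT_STACK is the per-process limit on the size of the main stack.
    soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

    def fmt(limit):
        if limit == resource.RLIM_INFINITY:
            return "unlimited"
        return "%d MiB" % (limit // (1024 * 1024))

    print("stack limit: soft", fmt(soft), "/ hard", fmt(hard))

    # A process can lower its own soft limit, for example to 8 MiB, so that
    # an accidental infinite recursion faults long before it has dirtied
    # all of the machine's RAM (and swap space).
    new_soft = 8 * 1024 * 1024
    if hard != resource.RLIM_INFINITY:
        new_soft = min(new_soft, hard)
    resource.setrlimit(resource.RLIMIT_STACK, (new_soft, hard))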

StackSizeLimitWhy written at 23:51:03

2021-10-10

TLS Certificate Transparency logs let us assess Certificate Authorities

I was recently reading Scott Helme's The Complexities of Chain Building and CA Infrastructure (it's part of a series from last year on the impending doom of expiring root CAs, which was linked to by Helme's Let's Encrypt Root Expiration - Post-Mortem). In it Helme mentioned something that previously hadn't consciously struck me about the modern Certificate Transparency environment. I'll just quote Helme here:

Who are the biggest CAs out there?

A few years ago that was probably a really hard question to answer. You might have to ask every CA what their issuance volume was and then compare them all to each other, hoping they weren't presenting figures in creative ways. Today though, we have a very easy way of determining this, Certificate Transparency. [...]

Simplifying slightly, all significant Certificate Authorities have had to log the TLS certificates they issue into Certificate Transparency logs for some time. Chrome has required this since May 2018, and since all fully valid TLS certificates have had a maximum validity of slightly over a year for a while, all still valid TLS certificates had better be in Certificate Transparency logs. As Scott Helme says, this means that we can count CA activity by looking at CT logs for what TLS certificates they've issued. We can do this for all of their TLS certificates, or just those with some characteristics (such as being in a specific TLD).

(There are some complexities in practice, but they can be solved with work.)
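(To give a sense of what such a measurement looks like, here is a minimal Python sketch that asks crt.sh, one public front end to CT log data, for what has been logged under a domain and counts the unexpired certificates by issuer. The query format and the JSON field names ('not_after', 'issuer_name') are my assumptions about crt.sh's current interface, and a serious survey would work from the CT logs themselves rather than a convenience service.)

    import collections, datetime, json
    import urllib.parse, urllib.request

    domain = "example.org"   # purely illustrative
    url = ("https://crt.sh/?q=" + urllib.parse.quote("%." + domain)
           + "&output=json")
    with urllib.request.urlopen(url, timeout=60) as resp:
        entries = json.load(resp)

    now = datetime.datetime.utcnow()
    by_issuer = collections.Counter()
    for e in entries:
        # 'not_after' is an ISO-style timestamp in crt.sh's JSON output.
        if datetime.datetime.fromisoformat(e["not_after"]) > now:
            by_issuer[e["issuer_name"]] += 1

    for issuer, count in by_issuer.most_common():
        print(count, issuer)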

This matters for more than just a CA popularity count. One of the eternal arguments around either changing the rules for TLS certificates and CAs, or dealing with an issue with a CA, is how many people and TLS certificates will be affected. Traditionally there were all sorts of arguments, back and forth numbers, and so on (from browsers, from CAs, etc). Today, for many questions we can go out and measure through the CT logs to count at least how many TLS certificates would be affected. How many TLS certificates would be affected is not the same thing as how much traffic or how many people would be affected, of course. But it's a start, which is more than we used to have to work with in the open.

PS: One use for this that I've already seen in past incidents is for third parties to check a CA's claims of how many of their TLS certificates have some mistake. Generally the answer is 'more than the CA initially reported', although probably my sample is biased because if people just confirm the CA's numbers they may not say anything.

TLSCTLetsUsMeasureCAUsage written at 23:26:27

2021-10-04

The "why" problem with on-host (host-based) firewalls on your machines

I somewhat recently read j. b. crawford's host firewalls, which as I read it puts forward a core thesis:

The great thing about a host firewall, the thing that really makes it a powerful tool that can do things that your Third-Generation Smart Firewall in the network rack can't, is something of a secret weapon: a host firewall can make decisions based on not just the packet but the process that sent or will receive it.

In the old days, this was to spot and deal with malware, but today, in theory, we could use this to deal with all of the things that want to phone home to snoop on us. Unfortunately, I believe there is a problem with this nice vision, what I will call the problem of "why".

If we're asked to decide if a program should be allowed to make a network connection, often one of the things we care about is why this connection is being done, not just what is trying to connect to where. Sometimes we don't need to know why, because what and where is sufficiently good or bad that it's clear (if your Twitter client is trying to connect to api.twitter.com, or some random program is trying to connect to 'sketchy-malware.com'), but in many cases it's a lot less clear. Is your video conferencing client making a call to Facebook because it's sending telemetry, or is it some side effect of their 'log in with Facebook' option?

(And this is before you start looking at how many connections are actually being made to opaque hostnames on CDNs. I tcpdump my outgoing network traffic every so often and it can be startling. There's also looking at about:networking in Firefox, even after you're using an adblocker.)
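(The picture a host firewall actually gets to work with looks roughly like the following, a minimal sketch using the third-party psutil package to pair current connections with their owning processes. It may need elevated privileges on some platforms, and it can tell you that a given program has a connection to a given address; nothing in it can tell you why.)

    import psutil

    # List established IP connections along with the owning process:
    # the 'what connects to where' that a host firewall can see.
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
            continue
        if conn.pid is None:
            name = "?"       # not all platforms expose the owning pid
        else:
            try:
                name = psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                name = "?"
        print("%s (pid %s) -> %s:%s" % (name, conn.pid,
                                        conn.raddr.ip, conn.raddr.port))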

You could introduce host APIs that ask programs to declare the purpose of their connections and HTTP requests and so on, but you can cynically guess what would likely happen next. Some programs and code would be honest, but malware and various dubious programs and code would lie outright or at least bend the truth a lot. The information wouldn't be trustworthy enough, or at least you would be down to much like the current situation where your first decision would be how much you trust the program itself.

(There is also the related issue that programs could simply refuse to work entirely if you didn't let their telemetry phone home. But let's assume that they couldn't get away with this for one reason or another, including that they didn't want the bad publicity from failing entirely when their telemetry provider was down.)

A possible counter-argument (and a nice future world) would be that very few programs actively need to talk to many different companies as part of their normal operations. So we could expect, or at least want, our video conferencing program to talk only to its own company's domain, and so on. In a world where who talks to what is more visible, in theory there could be social pressure to do this just to make your program more tractable for people to deal with. I don't think this is terribly likely, but the reasons for that need to go into another entry.

HostFirewallsWhyProblem written at 21:39:33

2021-10-03

Modern TLS has no place left for old things, especially clients

The TLS news of the recent time interval is the expiration of Let's Encrypt's R3 intermediate certificate and then the DST Root CA X3 certificate (for background, see Let's Encrypt's blog entry or Scott Helme). There were a variety of issues that came up (ZDNet, Scott Helme), but one common thread across many of them is that they involved old things. Old operating systems (such as old versions of macOS), old code, old middleware interceptor boxes, and so on. Although the specific details are always surprising, the general trend should not be, because it's been clear for some time that modern TLS is unfriendly to old things.

(Before this it's been the turn of old, historical browsers and old web servers.)

The modern TLS world is full of changes. Old root certificates are expiring and new ones are being introduced to replace them. Old code for certificate validation was never exposed to multiple chains, some with now invalid certificates, and either doesn't implement handling for them or has bugs in that code. Old TLS ciphers and versions are being deprecated and new ones introduced, as the TLS world moves to TLS 1.3 now and to some further version in the future. Not only does nothing stand still, with new things being added, but the old things don't keep working; they break or get turned off.
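(The most basic check involved here is simple to express; the following is a minimal Python sketch, with example.com standing in as a purely hypothetical host. If the chain the server sends or your local root store is too old, the handshake itself fails with a verification error, which is roughly what bit people during the Let's Encrypt expirations.)

    import datetime, socket, ssl

    host = "example.com"   # purely illustrative
    ctx = ssl.create_default_context()

    # wrap_socket() verifies the server's certificate chain against the
    # local root store; an expired intermediate or a stale root store
    # makes it raise ssl.SSLCertVerificationError instead of returning.
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()

    expires = datetime.datetime.utcfromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]))
    print(host, "leaf certificate expires", expires)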

Some of this will be better in the future. For example, since it's happened already and will again, actively maintained TLS client code will increasingly deal properly with multiple certificate chains where some of them are expired or otherwise invalid. TLS 1.3 has some mechanisms to force client and server code to better cope with strange new things (such as seeing TLS extensions offered that you don't know about), and so we can expect fewer explosions in the future in clients, servers, and middleware systems. But other things can't be made correct now and then left alone for years. The set of root certificates you need is going to change, and someday there will be a TLS 1.4 that will become required. Nothing will help old things then.

(And probably we will discover new bugs and issues in old TLS code when other changes happen in the future, since we always have so far.)

Regardless of what one thinks about this situation with modern TLS, it exists (as demonstrated recently in the Let's Encrypt related issues). TLS things that are old today are going to be less and less functional over time; TLS things that are current now but stop being updated will also be less functional over time, but it will take longer for it to really happen. And there's no real prospect of this changing any time soon.

(Some of this is ideological on the part of the people involved in TLS development. They feel strongly that TLS was frozen for too long, to the detriment of its security, and that this should not be allowed to happen in the future.)

TLSNoPlaceForOldThings written at 22:16:11

2021-09-18

One major obstacle to unifying the two types of package managers

One common reaction on lobste.rs to my entry on how there are two types of package managers was to hope that the two types are unified somehow, or that people can work toward unifying them. Unfortunately, my view is that this currently has a major technical obstacle that we don't have good solutions for, which is the handling of multiple versions of dependencies.

A major difference between what I called program managers (such as Debian's apt) and module managers (such as Python's Pip) is their handling or non-handling of multiple versions of dependencies. Program managers are built with the general assumption of a single (global) version of each dependency that will be used by everything that uses it, while module managers allow each top level entity you use them on (program, software module, etc) to have different versions of its dependencies.

You can imagine a system where a module manager (like pip) hooks into a program manager to install a package globally, or a program manager (like apt) automatically also installs packages from a language source like PyPI. But any simple system like this goes off the rails the moment you have two versions of the same thing that you want to install globally; there's no good way to do it. Ultimately this is because operating systems and language environments have made the historical decision that their namespaces shouldn't include version numbers.

In Unix, there is only one thing that can be /usr/include/stdio.h, and only one thing that can be a particular major version of a shared library. In a language like Python, there can be only one thing that is what you get when you do 'import package'. If two Python programs are working in the same environment, they can't do 'import package' and get different versions of the module. This versionless view of various sorts of namespaces (header files, shared libraries, Python modules, etc) is convenient and humane (no one wants to do 'import package(version=....)'), but it makes it hard to support multiple versions.

The state of the art to support multiple "global" versions of the same thing is messy and complex, and as a result isn't widely used. With no system support for this sort of thing, language package managers have done the natural thing and rolled their own approaches to having different environments for different projects so they can have different versions of dependencies. For example, Python uses virtual environments, while Rust and Go gather dependencies in their build systems and statically link programs by default. And to be clear here, modern languages don't do this to be obstinate, they do it because attempting to have a single global environment has been repeatedly recognized as a mistake in practice (just look at Go's trajectory here for one painful example).
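(A minimal Python sketch of the split: inside any one environment there is exactly one answer to what version of a dependency is installed, and the module manager escape hatch is to make more environments. The package name and path below are purely illustrative.)

    import venv
    from importlib import metadata

    # Within the current environment a distribution has exactly one
    # version; two programs sharing this environment can't import
    # different versions of it.
    try:
        print("requests ==", metadata.version("requests"))
    except metadata.PackageNotFoundError:
        print("requests is not installed here")

    # The per-project answer: give each top level thing its own
    # environment, each of which can hold a different version of the
    # same dependency.
    venv.create("/tmp/project-a-venv", with_pip=True)   # illustrative path
    # (You would then install project A's pinned dependency versions
    # into that environment, without affecting anything else.)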

(At the operating system level, often people punt and use containers.)

To have a chance of unifying program managers and module managers, we would have to come up with an appealing, usable, humane solution to this problem. This solution somehow has to work well with existing systems, practices, and languages, rather than assuming changes to practices and language implementations, since such changes are unlikely (as always, social problems matter). At the very least, this seems like a tall order.

PackageManagersTwoTypesII written at 23:39:37

2021-09-14

There are (at least) two types of package managers

These days, it seems that everything has a package manager (or package management system). Linux distributions and other Unixes have long had them (Debian apt, Fedora DNF, FreeBSD ports, etc), as do languages; Rust has Cargo, node.js has NPM, and Perl has the famous CPAN. However, from dealing with a number of them I have come to feel that there are at least two rather different sorts of things being lumped together under the name "package managers". I don't have good names for them, so for this entry I'll call them program managers and module managers.

Program managers are what Linuxes and other Unixes have. Their job is to install programs (or packages more generally) and their dependencies (almost always globally), and keep everything up to date. In general, programs and packages within a single distribution version all depend on the same versions of things; if two programs depend on two different versions of something else, either one program can't be packaged for the distribution or there needs to be a second package of the something else, so both versions can be (globally) installed and used at the same time. While program managers often theoretically allow packages to express relatively complex dependency constraints (eg 'versions X to Y of this other package, except version Z'), this power is rarely used in practice because the entire collection is expected to be coherent.

Module managers are what languages have. Their job is to manage the dependencies of various different pieces of code, automatically determining what versions of additional modules can satisfy all of the constraints of your code, the modules your code uses, and so on (and then fetch, install, and update them). There is no idea of a "distribution version" (and thus no idea of required package versions being the same within one); instead there is a cloud of various versions of various packages, with a complex interlinked network of dependencies and version requirements. Module managers support relatively complex dependency constraints and these constraints are frequently used, partly because modules get updated at any time without promises of retaining backward compatibility in their API.

(Modules may signal API breaks with semantic versioning, but they don't promise not to make them at all. If your code says 'I accept the latest version of module X, whatever it is' and module X releases a new major version with a new API, this is socially acceptable on the part of module X and the resulting breakage in your code is your fault.)

Module managers can operate in a global mode, but this is not really natural to them. The natural mode for modern module managers is to be applied to an individual top-level entity (your code, a module, a program, etc) to gather its requirements. It's expected that there are many top level entities on the system and not all of them can use the same version of any particular dependency; package version management is per top level entity, not global.

The package repository used by a program manager doesn't necessarily keep older versions of packages around (within a single distribution version), because they're both unnecessary (all other packages use the latest version) and undesirable (they've been superseded by an updated version). The package repository used by a module manager has to keep around essentially all versions of all modules ever published, because someone out there might be requiring that version specifically.

A program manager and its backend is almost always implicitly a closed universe, where the people operating it only consider the needs and dependencies of the packages it contains. If you have your own packages, you're on your own to keep them up to date as the program manager's packages change versions. A module manager and its backend are explicitly an open universe; it's expected that you have your own outside code that requires packages from the module manager in a way that's invisible and unpredictable to the module manager.

(People are often hostile to the local module manager client reporting very much information about what they're using it for to the module system's operators. Many people only want to expose what packages they actually fetch, and even then they may hide this with local caches and other mechanisms.)

PackageManagersTwoTypes written at 23:46:30

2021-09-13

Why I'm mostly not a fan of coloured text (in terminals or elsewhere)

I recently read someone who was unhappy that in this day and age, a Linux distribution specifically chose not to enable text colours in its default shell dotfiles (obligatory source). They have a point about the general situation, but also I disagree with them in practice.

On the one hand, the hand of theory, it is 2021. Our environments have been capable of coloured text for a long time (even if some people chose to turn it off), but here we often are, not using that capability. In many ways the default text environment is still single colour, with use of colours as the exception instead of the norm. In one sense, we really should have good use of colour in text by now.

On the other hand, the hand of practice, I'm glad that colour isn't used much because much use of colour in text is terrible (along with use of other methods of text emphasis). One technical reason for this is that many colour schemes for text assume a single specific foreground and background colour but don't (and often can't) force that, and wind up looking terrible in other environments. I run into this relatively frequently because I more or less require black text on white for readability, while many people prefer white text on black (what is often called "dark mode" these days).

A broader reason is that most colour schemes are not designed with a focus on contrast, readability, and communication (I think they're often not systematically designed at all). Instead they are all too often a combination of what looks good and matches the tastes of their creators, mingled with what has become traditional. This is colour for colour's sake, not colour for readability, information content, or clear communication.
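(To put a number on 'contrast': the WCAG 2 contrast ratio is one well known way to check a foreground/background pair, and it's simple to compute. A minimal sketch, with the grey value below chosen purely as an illustration:)

    def srgb_to_linear(channel):
        # channel is an sRGB value scaled to [0, 1]
        if channel <= 0.03928:
            return channel / 12.92
        return ((channel + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb):
        r, g, b = (srgb_to_linear(v / 255) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg, bg):
        lighter, darker = sorted((relative_luminance(fg),
                                  relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    # Black on white is about 21:1; WCAG asks for at least 4.5:1 for
    # normal text. A 'looks nice' grey-on-white scheme can fall well short.
    print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0
    print(round(contrast_ratio((170, 170, 170), (255, 255, 255)), 1))  # ~2.3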

(Even when some consideration has been put in for what the colours will communicate or emphasize, it often contains embedded value judgments, such as showing code comments in a colour that de-emphasizes them.)

There are probably text colour schemes out there that have been designed with a careful focus on contrast, readability, and HCI in general (and an awareness of the various sorts of colour blindness). But even in 2021, those colour schemes are a relative rarity. In practice, most colour schemes are various forms of fruit salad.

(This lack of careful design is not surprising. HCI-based design is hard work that requires uncommon skills, and also dedication and resources for things like user testing.)

I would probably like good colour schemes if they were common. Unfortunately, all too often my choices are either bad colour schemes or monochrome, and so I vastly prefer monochrome for the obvious reasons. In monochrome, all the text may blend together but at least I can read it.

TextColoursWhyNot written at 00:24:34
