Wandering Thoughts

2024-06-06

Web applications should support being used behind a reverse proxy

I recently wrote about the power of using external authentication in a web application. The short version is that this lets you support any authentication system that someone can put together with a front end web server, with little to no work on your part (it also means that the security of that authentication code is not your problem). However, supporting this in your web application does have one important requirement, which is that you have to support being run behind a front end web server, which normally means having the front end server act as a reverse proxy.

Proper support for being run behind a reverse proxy requires a certain amount of additional work and features; for example, you need to support a distinction between internal URLs and external URLs (and sometimes things can get confusing). I understand that it might be tempting to skip doing this work, but when web applications do that and insist on being run directly as a standalone web server, they wind up with a number of issues. For one obvious case, when you run directly, all of the authentication support has to be implemented by you, along with all of the authorization features that people will keep asking you for. Another case is that people will want you to do HTTPS, but you won't easily and automatically integrate with Let's Encrypt or other ACME based TLS certificate issuing and renewal systems.
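
As a concrete illustration, here is a minimal sketch of part of what 'supporting a reverse proxy' involves, written as a Python WSGI application. The X-Forwarded-* header names and the SCRIPT_NAME prefix convention used here are common but not universal; exactly what your front end sends is configuration dependent, so treat this as an assumption-laden sketch rather than a recipe.

    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # A reverse proxy usually talks plain HTTP to us, so the original
        # scheme and host have to be recovered from forwarding headers
        # (assumed here to be the common X-Forwarded-* names).
        scheme = environ.get("HTTP_X_FORWARDED_PROTO", environ["wsgi.url_scheme"])
        host = environ.get("HTTP_X_FORWARDED_HOST",
                           environ.get("HTTP_HOST", "localhost"))
        # SCRIPT_NAME is the external URL prefix the proxy mounted us under;
        # any URL we generate for the outside world must include it.
        prefix = environ.get("SCRIPT_NAME", "")
        external_url = f"{scheme}://{host}{prefix}/some/page"
        start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
        return [f"I would link to myself as {external_url}\n".encode("utf-8")]

    if __name__ == "__main__":
        # Listen on localhost only; the front end web server proxies to us.
        make_server("127.0.0.1", 8080, app).serve_forever()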

(Let's set aside the issue of how good your TLS support will be as compared to a dedicated web server that has an active security team that worries about TLS issues and best practices. In general, sitting behind a reverse proxy removes the need to worry about a lot of HTTP and HTTPS issues, because you can count on a competent front end web server to deal with them for you.)

It used to be the case that a lot of web applications didn't support being run behind a reverse proxy (although a certain amount of that was PHP based applications that wanted to be run directly in the context of your main web server). My impression is that it's more common to support it these days, partly because various programming environments and frameworks make it easier to directly expose things over HTTP instead of anything else (HTTP has become the universal default protocol). However, even relatively recently I've seen applications where their support for reverse proxies was partial; you could run them behind one, but not everything would necessarily work, or it could require additional things like HTML rewriting (although the Prometheus blackbox exporter has added proper support for being behind a reverse proxy since I wrote that entry in 2018).

(I'd go so far as to suggest that most web applications that speak HTTP as their communication protocol should be designed to be used behind a reverse proxy when in 'production'. Direct HTTP should be considered to be for development setups, or maybe purely internal and trusted production usage. But this is partly my system administrator bias showing.)

WebAppsShouldSupportReverseProxy written at 22:40:29

2024-05-20

The power of using external authentication information in a web application

Recently, a colleague at work asked me if we were using the university's central authentication system to authenticate access to our Grafana server. I couldn't give them a direct answer because we use Apache HTTP Basic Authentication with a local password file, but I could give them a pointer. Grafana has the useful property that it can be configured to take authentication information from a reverse proxy through an HTTP header field, and you can set up Apache with Shibboleth authentication so that it uses the institutional authentication system (with appropriate work by several parties).

(Grafana also supports generic OAuth2 authentication, but I don't know if the university provides an OAuth2 (or OIDC) interface to our central authentication system.)

This approach of outsourcing all authentication decisions to some front-end system is quite useful because it enables people to solve their specific authentication challenges for your software. The front-end doesn't particularly have to be Apache; as Grafana shows, you can have the front-end stuff some information in an additional HTTP header and rely on that. Apache is useful because people have written all sorts of authentication modules for it, but there are probably other web servers with similar capabilities.
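
Here is a minimal sketch of what the application side of this looks like, as a Python WSGI fragment. The 'X-Remote-User' header name is just an assumption for illustration (Grafana, for example, lets you configure which header it trusts); the critical requirement is that this header can only ever arrive via the front end, never directly from clients.

    def application(environ, start_response):
        # The front end web server authenticates the user and passes the
        # result in a header. We trust it only because the front end strips
        # any client-supplied copy and we only accept connections from it.
        user = environ.get("HTTP_X_REMOTE_USER")
        if not user:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"No authenticated user supplied by the front end.\n"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"Hello, {user}; authentication was someone else's problem.\n".encode("utf-8")]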

You might think that your web application relying on OAuth2 would get you similar capabilities, but I believe there are two potential limitations. First, as the Grafana documentation on generic OAuth2 authentication demonstrates, this can make access control complicated. Either the system administrator has to stand up an OAuth2 provider that only authenticates some people and perhaps modifies the OAuth2 information returned from an upstream source, or your software needs to support enough features that you can exclude (or limit) certain people. Second, it doesn't necessarily enable a "single sign on" experience by default, because the OAuth2 provider may require people to approve passing information to your web application before it gives you the necessary information.

(Now that I look, I see that I can set my discount OIDC IdP to not require this approval, if I want to. In a production OIDC environment where the only applications are already trusted internal ones, we should set things this way, which I believe would give us a transparent SSO environment for purely OIDC/OAuth2 stuff.)

The drawback of entirely outsourcing authentication to a front-end system is that your web application doesn't provide busy system administrators with a standalone 'all in one' solution where they can plug in some configuration settings and be done; instead, they need a front-end as well, and they have to configure authentication in it. I suspect that this is why Grafana supports other authentication methods (including OAuth2) along with delegating authentication to a front-end system. On the other hand, a web application that's purely for your own internal use could rely entirely on the front-end without worrying about authentication at all (this is effectively what we do with our use of Apache's HTTP Basic Authentication support).

My view is that your (general purpose) web application will never be able to support all of the unusual authentication systems that people will want; it's just too much code to write and maintain. Allowing a front-end to handle authentication (and with it possibly authorization, or at least primary authorization) makes that not your problem. You can implement a few common and highly requested standalone authentication and authorization mechanisms (if you want to) and then push everything else away with 'do it in a front-end'.

PS: I'm not sure if what Grafana supports is actually OAuth2 or if it goes all the way to require OIDC, which I believe is effectively a superset of OAuth2; some of its documentation on this certainly seems to actually be working with OIDC providers. When past me wrote about mapping out my understanding of web based SSO systems, I neglected to work out and write down the practical differences between OAuth2 and OIDC.

WebExternalAuthenticationPower written at 23:37:55

2024-05-09

One of OCSP's problems is the dominance of Chrome

To simplify greatly, OCSP is a set of ways to check whether or not a (public) TLS certificate has been revoked. It's most commonly considered in the context of web sites and things that talk to them. Today I had yet another problem because something was trying to check the OCSP status of a website and it didn't work. I'm sure there's a variety of contributing factors to this, but it struck me that one of them is that Chrome, the dominant browser, doesn't do OCSP checks.
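
To make the mechanics concrete, here is a rough sketch of what a client-side OCSP check involves, done in Python with the third-party 'cryptography' package. The file names are made up for illustration, and I'm assuming you already have the site's certificate and its issuer's certificate on hand; real clients usually pull both out of the TLS handshake.

    import urllib.request

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

    with open("site.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    # The OCSP responder's URL is published in the certificate's
    # Authority Information Access extension.
    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    ocsp_url = next(d.access_location.value for d in aia
                    if d.access_method == AuthorityInformationAccessOID.OCSP)

    # Build a DER-encoded OCSP request and POST it to the responder.
    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1()).build()
    http_req = urllib.request.Request(
        ocsp_url, data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"})
    with urllib.request.urlopen(http_req, timeout=10) as resp:
        answer = ocsp.load_der_ocsp_response(resp.read())

    # certificate_status is only meaningful if the responder actually answered.
    print("responder status:", answer.response_status)
    if answer.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
        print("certificate status:", answer.certificate_status)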

If you break the dominant browser, people notice and fix it; indeed, people prioritize testing against the dominant browser and making sure that things are going to work before they put them into production. But if something is not supported in the dominant browser, it's much less noticeable if it breaks. And if something breaks in a way that doesn't even affect the less well used browsers (like Firefox), the odds of it being noticed are even lower. Something in the broad network environment broke OCSP for wget, but perhaps not for browsers? Good luck having that noticed, much less fixed.

Of course this leads to a spiral. When people run into OCSP problems on less common platforms, they can either try to diagnose and fix the problem (if fixing it is even within their power), or they can bypass or disable OCSP. Often they'll choose the latter (as I did), at which point they increase the number of non-OCSP people in the world and so further reduce the chances of OCSP problems being noticed and fixed. For instance, I couldn't cross-check the OCSP situation with Firefox, because I'd long ago disabled OCSP in Firefox after it caused me problems there.

I don't have any particular solutions, and since I consider OCSP to basically be a failure in practice I'm not too troubled by the problem, at least for OCSP.

PS: In this specific situation, OCSP was vanishingly unlikely to actually be telling me that there was a real security problem. If Github had to revoke any of its web certificates due to them being compromised, I'm sure I would have heard about it because it would be very big news.

OCSPVersusDominantBrowser written at 23:23:35

2024-04-18

On the duration of self-signed TLS (website) certificates

We recently got some hardware that has a networked management interface, which in today's world means it has a web server and, further, that web server does HTTPS. Naturally, it has a self-signed TLS certificate (one it apparently generated on startup). For reasons beyond the scope of this entry we decided that we wanted to monitor this web server interface to make sure it was answering. This got me curious about the duration of its self-signed TLS certificate, which turns out to be one year. I find myself not sure how I feel about this.

On the one hand, it is a little bit inconvenient for us that the expiry time isn't much longer. Our standard monitoring collects the TLS certificate expiry times of TLS certificates we encounter and we generate alerts for impending TLS certificate expiry, so if we don't do something special for this hardware, in a year or so we'll be robotically alerting that these self-signed TLS certificates are about to 'expire'.
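
For what it's worth, harvesting an expiry time like this doesn't require anything fancy. Here is a small Python sketch of the idea, using the third-party 'cryptography' package to parse the certificate; the host name is made up, and verification is deliberately skipped because the certificate is self-signed anyway.

    import ssl
    from cryptography import x509

    # ssl.get_server_certificate() with no CA bundle doesn't verify the peer,
    # which is what we want for a self-signed management interface.
    pem = ssl.get_server_certificate(("bmc-thing.example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
    # Newer 'cryptography' versions also offer not_valid_after_utc.
    print("expires:", cert.not_valid_after)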

On the other hand, browsers don't actually care about the nominal expiry date of self-signed certificates; either your browser trusts them (because you told it to) or it doesn't, and the TLS certificate 'expiring' won't change this (or at most will make your browser ask you again if you want to trust the TLS certificate). We have server IPMIs with self-signed HTTPS TLS certificates that expired in 2020, and I've never noticed when I talked to them. Also, it's possible that (some) modern browsers will be upset with long-duration self-signed TLS certificates in the same way that they limit the duration of regular website TLS certificates. I haven't actually generated a long duration self-signed TLS certificate to test.

(It's possible that we'll want to talk to a HTTP API on these management interfaces with non-browser tools. However, since expired TLS certificates are probably very common on this sort of management interface, I suspect that the relevant tools also don't care that a self-signed TLS certificate is expired.)

I'm probably not going to do anything to the actual devices, although I suspect I could prepare and upload a long duration self-signed certificate if I wanted to. I will hopefully remember to fix our alerts to exclude these TLS certificates before this time next year.
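
If we ever did want to do that, generating a long-duration self-signed certificate is straightforward. Here is a hedged sketch using Python's 'cryptography' package; the name and the roughly ten-year lifetime are arbitrary choices for illustration, and whether browsers will put up with a lifetime like that is a separate question.

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "bmc-thing.example.com")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                      # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))  # ~10 years
        .sign(key, hashes.SHA256())
    )

    with open("bmc-cert.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))
    with open("bmc-key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption()))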

PS: The other problem with long duration self-signed TLS certificates is that even if browsers accept them today, maybe they won't be so happy with them in a year or three. The landscape of what browsers will accept is steadily changing, although perhaps someday it will reach a steady state.

TLSSelfSignedCertsDuration written at 23:15:36

2024-04-13

A corner case in Firefox's user interface for addon updates

One of the things that make browsers interesting programs, in one sense, is that they generally have a lot of options, which leads to a lot of potentially different behavior, which creates plenty of places for bugs to hide out. One of my favorites is a long-standing user interface bug in Firefox's general UI for addon updates, one that's hard for ordinary people to come across because it requires a series of unusual settings and circumstances.

If you have automatic updates for your addons turned off, Firefox's 'about:addons' interface for managing your extensions will still periodically check for updates to your addons, and if there are any, it will add an extra tab in the UI that lists addons with pending updates. This tab has a count of how many pending updates there are, because why not. The bug is that if the same addon comes out with more than one update (that Firefox notices) before you update it, this count of pending updates (and the tab itself) will stick at one or more even after you've applied everything, when there are no actual addon updates left unapplied.

(You can argue that there are actually two bugs here, and that Firefox should be telling you the number of addons with pending updates, not the number of pending updates to addons. The count is clearly of pending updates, because if Firefox has seen two updates for one addon, it will report a count of '2' initially.)

To reach this bug you need a whole series of unusual circumstances. You need to turn off automatic addon updates, you have to be using an addon that updates frequently, and then you need to leave your Firefox running for long enough (and not manually apply addon updates), because Firefox re-counts pending updates when it's restarted. In my case, I see this because I'm using the beta releases of uBlock Origin, which update relatively frequently, and even then I usually see it only on my office Firefox, which I often leave running but untouched for four days at a time.

(It may be possible to see this with a long-running Firefox even if you have addon updates automatically applied, because addon updates sometimes ask you to opt in to applying them right now instead of when Firefox restarts. I believe an addon asking this question may stop further updates to the addon from being applied, leading to the same 'multiple pending updates' counting issue.)

Browsers are complex programs with a large set of UI interactions (especially when it comes to handling modern interactive web pages and their Javascript). In a way it's a bit of a miracle that they work as well as they do, and I'm sure that there are other issues like this in the less frequented parts of all browsers.

FirefoxAddonsUpdatesGlitch written at 22:56:38

2024-03-29

Some notes on Firefox's media autoplay settings in practice as of Firefox 124

I've been buying digital music from one of the reasonably good online sources of it (the one that recently got acquired, again, making people nervous about its longer term future). In addition to the DRM-free lossless downloads of your purchases, this place lets you stream albums through your web browser, which in my case is an instance of Firefox. Recently, I noticed that my Firefox instance at work would seamlessly transition from one track to the next track of an album I was streaming, regardless of which label's sub-site I was on, while my home Firefox would not; when one track ended, the home Firefox would normally pause rather than start playing the next track.

(I listen to both albums I've purchased and albums I haven't and I'm just checking out. The former play through the main page for my collection, while the latter are scattered around various URLs, because each label or artist gets a <label>.<mumble>.com sub-domain for its releases, and then each release has its own page. For the obvious reasons, I long ago set my home Firefox to allow my collection's main page to autoplay music so it could seamlessly move from one track to the next.)

Both browser instances were set to disallow autoplay in general in the Preferences → Privacy & Security (see Allow or block media autoplay in Firefox), and inspection of the per-site settings showed that my work Firefox actually had granted no autoplay permissions to sites while my home Firefox had a list of various subdomains for this particular vendor that were allowed to autoplay. After spelunking my about:config, I identified this as a difference in media.autoplay.blocking_policy, where the work Firefox had this set to the default of '0' while my home Firefox had a long-standing setting of '2'.

As discussed in Mozilla's wiki page on blocking media autoplay, the default setting for this preference allows autoplay once you've interacted with that tab, while my setting of '2' requires that you always click to (re)start audio or video playing (unless the site has been granted autoplay permissions). Historically I set this to '2' to try to stop Youtube from autoplaying a second video after my first one had finished. In practice this usage has been rendered functionally obsolete by Youtube's own 'disable autoplay' setting in its video player (although it still works to prevent autoplay if I've forgotten to turn that on in this Firefox session or if Youtube is in a playlist and ignores that setting).

(For both Youtube and this digital music source, a setting of '1', a transient user gesture activation, is functionally equivalent to '2' for me because it will normally be more than five seconds before the video or track finishes playing, which means that the permission will have expired by the time the site wants to advance to the next thing.)

Since I check out multi-track albums much more often than I look at Youtube videos (in this Firefox), and Youtube these days does have a reliable 'disable autoplay' setting, I opted to leave media.autoplay.blocking_policy set to '0' in the work Firefox instance I use for this stuff and I've just changed it to '0' in my home one as well. I could avoid this if I set up a custom profile for this music source, but I haven't quite gotten to that point yet.

(I do wish Firefox allowed, effectively, per-site settings of this as part of the per-site autoplay permissions, but I also understand why they don't; I'm sure the Preferences and per-site settings UI complexity would be something to see.)

(If I'd thought to check my previous notes on this I probably would have been led to media.autoplay.blocking_policy right away, but it didn't occur to me to check here, even though I knew I'd fiddled a lot with Firefox media autoplay over the years. My past self writing things down here doesn't guarantee that my future (present) self will remember that they exist.)

PS: I actually go back and forth on automatically moving on to the next track of an album I'm checking out, because the current 'stop after one track' behavior does avoid me absently listening to the whole thing. If I find myself unintentionally listening to too many albums that in theory I'm only checking out, I'll change the setting back.

FirefoxMediaAutoplaySettingsIV written at 22:43:25

2024-03-12

What do we count as 'manual' management of TLS certificates

Recently I casually wrote about how even big websites may still be manually managing TLS certificates. Given that we're talking about big websites, this raises a somewhat interesting question of what we mean by 'manual' and 'automatic' TLS certificate management.

A modern big website probably has a bunch of front end load balancers or web servers that terminate TLS, and regardless of what else is involved in their TLS certificate management it's very unlikely that system administrators are logging in to each one of them to roll over its TLS certificate to a new one (any more than they manually log in to those servers to deploy other changes). At the same time, if the only bit of automation involved in TLS certificate management is deploying a TLS certificate across the fleet (once you have it), I think most people would be comfortable still calling that (more or less) 'manual' TLS certificate management.

As a system administrator who used to deal with TLS certificates (back then I called them SSL certificates) the fully manual way, I see three broad parts to fully automated management of TLS certificates:

  • automated deployment, where once you have the new TLS certificate you don't have to copy files around on a particular server, restart the web server, and so on. Put the TLS certificate in the right place and maybe push a button and you're done.

  • automated issuance of TLS certificates, where you don't have to generate keys, prepare a CSR, go to a web site, perhaps put in your credit card information or some other 'cost you money' stuff, perhaps wait for some manual verification or challenge by email, and finally download your signed certificate. Instead you run a program and you have a new TLS certificate.

  • automated renewal of TLS certificates, where you don't have to remember to do anything by hand when your TLS certificates are getting close enough to their expiry time. (A lesser form of automated renewal is automated reminders that you need to manually renew.)

As a casual thing, if you don't have fully automated management of TLS certificates I would say you had 'manual management' of them, because a human had to do something to make the whole process go. If I was trying to be precise and you had automated deployment but not the other two, I might describe you as having 'mostly manual management' of your TLS certificates. If you had automated issuance (and deployment) but no automated renewals, I might say you had 'partially automated' or 'partially manual' TLS certificate management.

(You can have automated issuance but not automated deployment or automated renewal and at that point I'd probably still say you had 'manual' management, because people still have to be significantly involved even if you don't have to wrestle with a TLS Certificate Authority's website and processes.)

I believe that at least some TLS Certificate Authorities support automated issuance of year-long certificates, but I'm not sure. Now that I've looked, I'm going to have to stop assuming that a website using a year-long TLS certificate is a reliable sign that they're not using automated issuance.

TLSCertsWhatIsManual written at 22:29:15

2024-02-18

Even big websites may still be manually managing TLS certificates (or close)

I've written before about how people's soon-to-expire TLS certificates aren't necessarily a problem, because not everyone manages their TLS certificates with Let's Encrypt style '30 days in advance' automated renewal and perhaps short-lived TLS certificates. For example, some places (like Facebook) have automation but seem to only deploy TLS certificates that are quite close to expiry. Other places at least look as if they're still doing things by hand, and recently I got to watch an example of that.

As I mentioned yesterday, the department outsources its public website to a SaaS CMS provider. While the website has a name here for obvious reasons, it uses various assets that are hosted on sites under the SaaS provider's domain names (both assets that are probably general and assets, like images, that are definitely specific to us). For reasons beyond the scope of this entry, we monitor the reachability of these additional domain names with our metrics system. This only checks on-campus reachability, of course, but that's still important even if most visitors to the site are probably from outside the university.

As a side effect of this reachability monitoring, we harvest the TLS certificate expiry times of these domains, and because we haven't done anything special about it, they get shown on our core status dashboard alongside the expiry times of TLS certificates that we're actually responsible for. The result of this was that recently I got to watch their TLS expiry times count down to only two weeks away, which is lots of time from one view while also alarmingly little if you're used to renewals 30 days in advance. Then they flipped over to a new year-long TLS certificate and our dashboard was quiet again (except for the next such external site that has dropped under 30 days).

Interestingly, the current TLS certificate was issued about a week before it was deployed, or at least its Not-Before date is February 9th at 00:00 UTC and it seems to have been put into use this past Friday, the 16th. One reason for this delay in deployment is suggested by our monitoring, which seems to have detected traces of a third certificate sometimes being visible, this one expiring June 23rd, 2024. Perhaps there were some deployment challenges across the SaaS provider's fleet of web servers.

(Their current TLS certificate is actually good for just a bit over a year, with a Not-Before of 2024-02-09 and a Not-After of 2025-02-28. This is presumably accepted by browsers, even though it's a bit over 365 days; I haven't paid attention to the latest restrictions from places like Apple.)

TLSCertsSomeStillManual written at 22:06:08

2024-02-17

We outsource our public web presence and that's fine

I work for a pretty large Computer Science department, one where we have the expertise and the need to do a bunch of internal development, and in general we maintain plenty of things, including websites. Thus, it may surprise some people to learn that the department's public-focused web site is currently hosted externally on a SaaS provider. Even the previous generation of our outside-facing web presence was hosted and managed outside of the department. To some, this might seem like the wrong decision for a department of Computer Science (of all people) to make; surely we're capable of operating our own web presence and thus should as a matter of principle (and independence).

Well, yes and no. There are two realities. The first is that a modern content management system is both a complex thing (to develop and, in general, to operate and maintain securely) and a commodity, with many organizations able to provide good ones at competitive prices. The second is that both the system administration and the publicity side of the department only have so many people and so much time. Or, to put it another way, all of us have work to get done.

The department has no particular 'competitive advantage' in running a CMS website; in fact, we're almost certain to be worse at it than someone doing it at scale commercially, much like what happened with webmail. If the department decided to operate its own CMS anyway, it would be as a matter of principle (which principles would depend on whether the CMS was free or paid for). So far, the department has not decided that this particular principle is worth paying for, both in direct costs and in the opportunity costs of what that money and staff time could otherwise be used for.

Personally I agree with that decision. As mentioned, CMSes are a widely available (but specialized) commodity. Were we to do it ourselves, we wouldn't be, say, making a gesture of principle against the centralization of CMSes. We would merely be another CMS operator in an already crowded pond that has many options.

(And people here do operate plenty of websites and web content on our own resources. It's just that the group here responsible for our public web presence found it most effective and efficient to use a SaaS provider for this particular job.)

OutsourcedWebCMSSensible written at 21:39:20

2024-01-23

CGI programs have an attractive one step deployment model

When I wrote about how CGI programs aren't particularly slow these days, one of the reactions I saw was to suggest that you might as well use a FastCGI system to run your 'CGI' as a persistent daemon, saving you the overhead of starting a CGI program on every request. One of the practical answers is that FastCGI doesn't have as simple a deployment model as CGIs generally offer, which is part of their attraction.

With many models of CGI usage and configuration, installing a CGI, removing a CGI, or updating it is a single-step process; you copy a program into a directory, remove it again, or update it. The web server notices that the executable file exists (sometimes with a specific extension or whatever) and runs it in response to requests. This deployment model can certainly become more elaborate, with you directing a whole tree of URLs to a CGI, but it doesn't have to be; you can start very simple and scale up.
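
As an illustration of how little is involved, here is roughly the smallest useful CGI program, written in Python; the cgi-bin path is whatever your web server is configured with, and deployment really is just copying this file there and marking it executable.

    #!/usr/bin/env python3
    # hello.cgi: drop into a CGI-enabled directory, chmod +x, and it's deployed.
    import os
    import sys

    # Anything written to stderr ends up in the web server's error log
    # (at least on Apache), which is most of the debugging story.
    print("hello.cgi was invoked", file=sys.stderr)

    print("Content-Type: text/plain")
    print()
    print("Hello from", os.environ.get("SCRIPT_NAME", "somewhere"))
    print("You asked for", os.environ.get("QUERY_STRING", "") or "(nothing)")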

It's theoretically possible to make FastCGI deployment almost as simple as the CGI model, but I don't know if any FastCGI servers and web servers have good support for this. Instead, FastCGI and in general all 'application server' models almost always require at least a two-step configuration, where you configure your application in the application server and then configure the URL for your application in your web server (so that it forwards to your application server). In some cases, each application needs a separate server (FastCGI or whatever other mechanism), which means that you have to arrange to start and perhaps monitor a new server every time you add an application.

(I'm going to assume that the FastCGI server supports reliable and automatic hot reloading of your application when you deploy a change to it. If it doesn't then that gets more complicated too.)

If you have a relatively static application landscape, this multi-step deployment process is perfectly okay since you don't have to go through it very often. But it is more involved and it often requires some degree of centralization (for web server configuration updates, for example), while it's possible to have a completely distributed CGI deployment model where people can just drop suitably named programs into directories that they own (and then have their CGI run as themselves through, for example, Apache suexec). And, of course, it's more things to learn.

(CGI is not the only thing in the web language landscape that has this simple one step deployment model. PHP has traditionally had it too, although my vague understanding is that people often use PHP application servers these days.)

PS: At least on Apache, CGI also has a simple debugging story; the web server will log any output your CGI sends to standard error in the error log, including any output generated by a total failure to run. This can be quite useful when inexperienced people are trying to develop and run their first CGI. Other web servers can sometimes be less helpful.

CGIOneStepDeployment written at 22:55:08
