Wandering Thoughts

2020-01-13

We may not want to use OCSP stapling in our web servers

Not too long ago, I was modernizing some of our Apache TLS settings. As I usually do, I went to the Mozilla SSL configuration generator (which is what I think you should do) and more or less accepted its recommendations for our Apache and OpenSSL version. These days, that includes OCSP stapling. I didn't really think much about this. Then some things happened recently, and today my co-workers asked more or less if we should be doing OCSP stapling at all. Unfortunately, the more I think about that question, the more I think that we should probably not configure OCSP stapling in our web servers.

One big motivation for OCSP stapling was OCSP must-staple, but that seems basically dead; I can find people talking about it, but I can't find any site actually using it, including people who set it up once upon a time. I'm not surprised by this, because OCSP must-staple can easily break your website if anything goes wrong with OCSP stapling and people generally don't like that sort of 'feature'. People may try it out, but sooner or later something explodes and then they stop. We're certainly never going to use OCSP must-staple unless it becomes a hard requirement in a popular browser.

If you're not using OCSP must-staple, the major thing that OCSP stapling means is that browsers that make OCSP checks can save some time when they connect to you. Desktop Firefox definitely does OCSP checks by default (although I believe mobile Firefox only does it for EV certificates), and Chrome definitely doesn't. I'm not sure about the state of Safari, either on iOS or OS X (although see this ssl.com article). Unfortunately, Firefox is no longer a major desktop browser, so how much OCSP stapling matters likely depends mostly on how Safari behaves.

(Currently, Apple appears to use OCSP stapling information under some circumstances, but it's not the only way to provide what they want.)

Pragmatically, a lot of sites that I would have expected to use OCSP stapling don't actually do so. None of Mozilla, Let's Encrypt, Twitter, Facebook, or Apple use OCSP stapling, although Apple's support site currently does, as does Amazon (and Amazon Canada). Infrequent use of OCSP stapling (and OCSP stapled data) matters, because the less it's used (in Apache and elsewhere), the more chances there are for all of the code related to it to have issues, especially in Apache.
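If you're curious whether a particular site staples, one way to check from the command line is to ask OpenSSL to request certificate status during the handshake. This is a rough sketch (the hostname is just an example):

openssl s_client -connect example.com:443 -servername example.com -status </dev/null | grep -i 'OCSP response'

A site that staples will show OCSP response data here; one that doesn't will report that no response was sent.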

On the whole, the more I look at the combination of the benefits of OCSP stapling, the potential hazards, and the limited use I can see out in the field (even and especially from places that I'd expect to support it), the less I feel like we should be using OCSP stapling. OCSP stapling seems to be at least a little bit bleeding edge, not well established practice, and there's no need for us to live out there.

OCSPStaplingMaybeNot written at 22:22:37

2020-01-12

How I now think you want to configure Apache for OCSP stapling

When I initially set up OCSP stapling while I was modernizing our Apache TLS configurations, I followed the standard setup from the Mozilla SSL configuration generator (as is my usual habit). For OCSP stapling, the configuration this generates was (and is) just:

SSLUseStapling On
SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"

Then recently our web servers at work couldn't get a good answer from Let's Encrypt's OCSP servers, for reasons that aren't clear to us. Some people experienced issues where their Firefox would refuse to connect to these web servers, rejecting the connections with a SEC_ERROR_OCSP_TRY_SERVER_LATER error.

(It's possible that this actually comes from Firefox itself directly querying the LE OCSP servers. These people were inside the same networks as the web servers and our problem could well have been a firewall or network reachability issue to LE's OCSP servers, instead of an issue on LE's side.)

This experience made me look into what happens when OCSP stapling runs into errors, and also into the Apache documentation. The result of this is that I now think that you should always tell Apache to not return OCSP errors, by also adding the Apache configuration option for this:

SSLStaplingReturnResponderErrors off

(The default is 'on'.)

This does the most aggressive version of handling OCSP problems; if set to off, the documentation says 'only responses indicating a certificate status of "good" will be included in the TLS handshake'. Expired responses, any errors, and any other certificate status all cause Apache to not include OCSP stapling information at all.

(You may also want to see this Apache bug.)
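Putting it all together, a cautious Apache stapling configuration might look something like the following sketch. The last two directives are optional extras from the mod_ssl documentation, not something the Mozilla generator emits or something I've specifically needed to set:

SSLUseStapling On
SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"
SSLStaplingReturnResponderErrors off
# Don't synthesize a 'tryLater' OCSP status when the OCSP query fails.
# (This only has an effect if responder errors are returned, but it's harmless.)
SSLStaplingFakeTryLater off
# Don't wait too long for the CA's OCSP server to answer.
SSLStaplingResponderTimeout 5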

PS: Firefox still defaults to checking certificate status through OCSP if necessary, but you can change this if you want to. The normal preferences only let you turn this off entirely, but if you go into about:config and set security.OCSP.enabled to the value of '2', Firefox will do OCSP checks for EV certificates but not for normal ones. Given the increasing disuse of EV certificates, I don't think it's worth bothering; just turn off OCSP checking entirely.
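(If you manage Firefox preferences through a user.js file, turning OCSP checking off entirely would be a line along these lines; it's just the about:config setting in file form:

// 0 disables OCSP checks; 1 is the default; 2 checks only EV certificates
user_pref("security.OCSP.enabled", 0);
)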

ApacheOCSPStaplingSettings written at 01:08:34

2020-01-10

OCSP stapling and what web servers and browsers do in the face of errors

OCSP is an attempt to solve some of the problems of certificate revocation by having your browser or other TLS client check if TLS certificates are (still) valid when it sees them. OCSP stapling is an attempt to fix the privacy, performance, and infrastructure problems that OCSP creates by having the web server include ('staple') a sufficiently recent proof of its good OCSP certificate status in the TLS handshake; this proof is requested by the web server from its Certificate Authority's OCSP server every so often. As they say, so far so good. However, this raises two questions: what should the web server do if it can't get the OCSP proof from its CA when it asks, and what should browsers do in reaction to that?

It's possible to mark TLS certificates as 'OCSP must-staple', which means that the certificate is asserting that you should always see it with a stapled OCSP proof. If you're using such a certificate, it shouldn't matter what the web server does if it can't get an OCSP reply; no matter what, browsers should refuse to accept the certificate without it (whether they actually do is another matter, cf). However, most TLS certificates are not marked this way for various reasons (including that it's kind of dangerous because your CA can cut you off, either deliberately or accidentally).

For ordinary TLS certificates without OCSP must-staple, the web server has a choice of what to do in its conversation with the browser when it can't get a positive signed OCSP response from the CA. Either it can not include any OCSP stapling information at all, or it can pass on an OCSP error indication (or the signed OCSP response that says 'bad certificate' or 'unknown'). If the web server passes on OCSP errors to the browser, the browser may ignore them and continue with TLS as usual, ignore them and make its own OCSP query (even if it might not normally do an OCSP query), or report a TLS error and abort the connection.

(If the web server passes on a signed OCSP server response of 'bad certificate' or 'unknown', probably the browser should respect it.)

The safest thing for a web server to do is to only ever pass on a positive OCSP response. If your CA's OCSP server ever says anything other than 'your certificate is good', you don't say anything to the client (although if it says 'bad certificate' or 'unknown', you probably want to raise loud alarms in your monitoring system). In a perfect world, CA OCSP servers would always be operational and reliable, but in this world they aren't, so for most websites it's more likely that the CA's OCSP server has a problem than that you have a genuinely bad TLS certificate. When you're silent, at worst the browser will make its own OCSP query and get the same result, and at best either it won't query at all or it will query and get a good result.

As far as browsers go, Firefox will sometimes (or perhaps always) abort the TLS connection if it receives back certain OCSP errors, not just when it gets signed OCSP replies that contain bad statuses. For instance, if it receives an OCSP 'try later' status from the web server, it errors out with SEC_ERROR_OCSP_TRY_SERVER_LATER. This is not necessarily the greatest thing ever, because the 'try later' OCSP error is unsigned and so doesn't necessarily actually come from the CA's OCSP server. For that matter, the web server may decide to make up a 'try later' OCSP error status if it has some problem talking to the OCSP server. Chrome appears to ignore such 'try later' OCSP responses at least some of the time.

(Both web servers and browsers talk to OCSP servers over HTTP, not HTTPS, so they're vulnerable to man in the middle attacks for any responses that aren't signed. And of course the web server can give you any unsigned error status it wants to.)

PS: As far as OCSP must-staple goes, it appears that Cloudflare's blog doesn't use it any more, despite their 2017 blog entry pushing for it. Neither does Scott Helme. At this point I'd like to find a site that does use it just so I can check which of my tools actually report that.
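Checking for must-staple is at least straightforward with OpenSSL; in reasonably recent versions, a must-staple certificate shows up with an 'X509v3 TLS Feature: status_request' extension in the text dump. So a rough sketch of a check against a live site, with an example hostname, is:

openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'TLS Feature'

If nothing comes out, the certificate doesn't assert must-staple.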

OCSPStaplingAndErrors written at 23:39:37

2020-01-08

Why I use both uBlock Origin and uMatrix

In response to my entry on my current Firefox addons, hwj asked a good question in the lobste.rs comments:

Isn’t uMatrix an advanced version of uBlock Origin? What’s the rationale behind using both of them?

While it's true that uMatrix and uBlock Origin have overlapping functionality (and are written by the same person), they have different purposes and focuses. uBlock Origin's focus is blocking ads and other undesired things as an out of the box experience with little configuration needed. uMatrix's focus is on exerting tight and highly specific control over what resources a page is allowed to load and use, including Javascript and cookies (and requires a lot of configuration).

(One significant difference in features is that uBlock Origin can remove HTML elements from HTML pages, while uMatrix has no support for this. Selectively removing HTML elements is extremely important for blocking ads, but it's not relevant if you're blocking entire HTTP requests. I believe that uMatrix's HTML modifications are limited to blocking inline Javascript.)
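To make the difference concrete, here is roughly what the two kinds of rules look like; the hostnames and the CSS selector are invented for illustration, and the uMatrix rule syntax is from memory. A uBlock Origin cosmetic filter surgically hides one element on a page that otherwise loads normally:

example.com##div.floating-header

A uMatrix rule instead allows or blocks whole classes of requests for a site, for example blocking all Javascript by default and then allowing a site's own first-party scripts:

* * script block
example.com example.com script allow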

You can block Javascript with uBlock Origin and it's somewhat easier in simpler cases than using uMatrix, but you don't have the fine control over Javascript that uMatrix gives you (and this improves my experience of the web). Nor do you get the control over cookies and other types of resources, which is a deliberate simplification on uBlock Origin's part. At the same time, uMatrix doesn't give you sophisticated adblocking or things like making unwanted page elements go away, including those annoying permanent headers and footers.

So the reason that I use both of them is that they do different things for me. uBlock Origin removes ads and unwanted page elements, while uMatrix blocks Javascript, cookies, and so on. If I just wanted adblocking, element zapping, and blocking Javascript, I could probably use uBlock Origin alone, but I definitely want cookie blocking as well and I usually like the fine-grained control uMatrix gives me over other things as well.

(Writing this has given me a new appreciation for the difference between the blocklist sources included in uMatrix and the filter lists included in uBlock Origin. The blocklists uMatrix uses just list hosts, because that's what uMatrix deals with. uBlock Origin's filter lists include all sorts of sophisticated matching rules to make HTML elements disappear, as well as some host lists. Having just checked it now, I believe that all of the default uMatrix hosts lists are also in uBlock Origin, although they may not all be enabled by default.)

UBlockOriginAndUMatrix written at 21:32:46

My Firefox addons as of Firefox '74' (the current development version)

As I write this, Firefox 72 is the just released version of Firefox and 73 is in beta, but my primary Firefox is still a custom hacked version that I build from the development tree, so it most closely corresponds to what will be released as Firefox 74 in a certain amount of time (I've lost track of how fast Firefox makes releases). Since it's been about ten versions of Firefox (and more than a year) since the last time I covered my addons, it's time for another revisit of this perennial topic. Many of the words will be familiar from the last time, because my addons seem to have stabilized now.

My core addons, things that I consider more or less essential for my experience of Firefox, are:

  • Foxy Gestures (Github) is probably still the best gestures extension for me for modern versions of Firefox (but I can't say for sure, because I no longer investigate alternatives).

    (I use some custom gestures in my Foxy Gestures configuration that go with some custom hacks to my Firefox to add support for things like 'view page in no style' as part of the WebExtensions API.)

  • uBlock Origin (Github) is my standard 'block ads and other bad stuff' extension, and also what I use for selectively removing annoying elements of pages (like floating headers and footers).

  • uMatrix (Github) is my primary tool for blocking Javascript and cookies. uBlock Origin could handle the Javascript, but not really the cookies as far as I know, and in any case uMatrix gives me finer control over Javascript which I think is a better fit with how the web does Javascript today.

  • Cookie AutoDelete (Github) deals with the small issue that uMatrix doesn't actually block cookies, it just doesn't hand them back to websites. This is probably what you want in uMatrix's model of the world (see my entry on this for more details), but I don't want a clutter of cookies lingering around, so I use Cookie AutoDelete to get rid of them under controlled circumstances.

    (However unaesthetic it is, I think that the combination of uMatrix and Cookie AutoDelete is necessary to deal with cookies on the modern web. You need something to patrol around and delete any cookies that people have somehow managed to sneak in.)

  • Stylus has become necessary for me after Google changed their non-Javascript search results page to basically be their Javascript search results without Javascript, instead of the much nicer and more useful old version. I use Stylus to stop search results escaping off the right side of my browser window.

Additional fairly important addons that would change my experience if they weren't there:

  • Textern (Github) gives me the ability to edit textareas in a real editor. I use it all the time when writing comments here on Wandering Thoughts, but not as much as I expected on other places, partly because increasingly people want you to write things with all of the text of a paragraph run together in one line. Textern only works on Unix (or maybe just Linux) and setting it up takes a bit of work because of how it starts an editor (see this entry), but it works pretty smoothly for me.

    (I've changed its key sequence to Ctrl+Alt+E, because the original Ctrl+Shift+E no longer works great on Linux Firefox; see issue #30. Textern itself shifted to Ctrl+Shift+D in recent versions.)

  • Open in Browser (Github) allows me to (sometimes) override Firefox's decision to save files so that I see them in the browser instead. I mostly use this for some PDFs and some text files. Sadly its UI isn't as good and smooth as it was in pre-Quantum Firefox.

  • Cookie Quick Manager (Github) allows me to inspect, manipulate, save, and reload cookies and sets of cookies. This is kind of handy every so often, especially saving and reloading cookies.

The remaining addons I use I consider useful or nice, but not all that important on the large scale of things. I could lose them without entirely noticing the difference in my Firefox:

  • Certainly Something (Github) is my TLS certificate viewer of choice. I occasionally want to know the information it shows me, especially for our own sites.

  • HTTP/2 Indicator (Github) does what it says; it provides a little indicator as to whether HTTP/2 was active for the top-level page.

  • Link Cleaner cleans the utm_ fragments and so on out of URLs when I follow links. It's okay; I mostly don't notice it and I appreciate the cleaner URLs.

    (It also prevents some degree of information leakage to the target website about where I found their link, but I don't really care about that. I'm still sending Referer headers, after all.)

  • HTTPS Everywhere, basically just because. But in a web world where more and more sites are moving to using things like HSTS, I'm not sure HTTPS Everywhere is all that important any more.

Some of my previous extensions have stopped being useful since last time. They are:

  • My Google Search URL Fixup, because Google changed its search pages (as covered above for Stylus) and it became both unnecessary and non-functional. I should probably update its official description to note this, but Google's actions made me grumpy and lazy.

  • Make Medium Readable Again (also, Github) used to deal with a bunch of annoyances for Medium-hosted stuff, but then Medium changed their CSS and it hasn't been updated for that. I can't blame the extension's author; keeping up with all of the things that sites like Medium do to hassle you is a thankless and never-ending job.

I still have both of these enabled in my Firefox, mostly because it's more work to remove them than to let them be. In the case of MMRA, perhaps its development will come back to life again and a new version will be released.

(There are actually some branches in the MMRA Github repo and a bunch of forks of it, some of which are ahead of the main one. Possibly people are quietly working away on this.)

I have some Firefox profiles that are for when I want to use Javascript (they actually use the official Mozilla Linux Firefox release these days, which I just updated to Firefox 72). In these profiles, I also use Decentraleyes (also), which is a local CDN emulation so that less of my traffic is visible to CDN operators. I don't use it in my main Firefox because I'm not certain how it interacts with my setup of blocking (most) Javascript, and also much of what's fetched from CDNs is Javascript, which obviously isn't applicable to me.

(There are somewhat scary directions in the Decentraleyes wiki on making it work with uMatrix. I opted to skip them entirely.)

Firefox74Addons written at 01:56:48

2019-12-17

Browsers and the relative size of their default monospace fonts

One of the unusual things about Wandering Thoughts and all of my web stuff is that it's almost completely unstyled. In particular, I don't try to specify HTML fonts or HTML font sizes; I leave it entirely up to your browser, on the assumption that your browser knows more about good typography on your system than I do. However, this doesn't mean that my entries live in a pure world of semantic content, because in practice you can't divorce content from its presentation; how Wandering Thoughts looks affects what and how I write. One of the consequences of this is that I end up making some tacit assumptions about how browsers handle default fonts, assumptions that may no longer be correct (if they ever were).

Following a convention that I believe I copied from Unix manpages, I often write entries that intermix normal non-monospaced text with bits of monospaced text, which I use for a whole variety of things ranging from code snippets through Unix commands and filenames to just things I intend as literal text. This is a reasonably common convention in general, but when I use it I'm implicitly relying on browsers rendering normal text and monospaced text in something that is the same size or reasonably close to it. If a monospaced word is either tiny or gigantic compared to the normal text around it, the two no longer combine together well and any number of my entries are going to look weird (or be hard to read).

Unfortunately, it increasingly seems like browsers are not doing this and that they often have their default monospaced font be smaller than their normal text font, sometimes noticeably so. This is of course platform dependent, and I think that common platforms still have the two fonts sufficiently close together in size that my entries don't look glaringly wrong. But I don't know how long that will last or how true it is for people who've tried to change their browser's default size (to make it either bigger or smaller).
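Part of what's going on, as I understand it, is a long-standing browser quirk where the generic 'monospace' font family gets its own, smaller default size. If I ever gave in and styled these pages, the commonly cited CSS workaround is something like this:

pre, code { font-family: monospace, monospace; font-size: 1em; }

(The doubled 'monospace' is the traditional trick to make browsers render monospaced text at the surrounding font size instead of their separate monospace default.)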

If mixing normal text and monospaced text is increasingly risky, this is going to affect what I do here on Wandering Thoughts in some way or another. Most likely I will have to start moving away from mixing monospaced words into my regular paragraphs and confine them to their own out of line sections.

(This is a good part of why I care about Firefox's peculiar handling of font choice preferences. By default Firefox's monospaced font size is too small for me, especially once I adjust the size of its regular font.)

As a side note, this isn't just a browser issue, unless you take an expansive view of browsers. I also run into this in my syndication feed reader of choice, which of course also has to render HTML and choose default sizes for normal and monospaced fonts. Compounding the issue, these programs often give you less control over fonts and HTML rendering in general than browsers do.

BrowserMonospaceSizes written at 00:18:24

2019-12-15

Peering into the depths of (presumed) website vulnerability probing

One of the unusual things that DWiki (the software behind here) does is that it rejects HTTP GET requests with unknown query parameters and logs them. Usually what this reports is yet another thing using yet another query parameter for analytics tracking, like Facebook and their 'fbclid=...' parameter (cf), which is a good part of why I no longer recommend being cautious in your web app. But every so often something else turns up, something that looks a lot like people probing for vulnerable web applications.

Recently, I've seen a moderate number of requests here with some interesting invalid query parameters tacked on the end, for example:

GET /~cks/space/blog/tech/?mid=qna&act=dispBoardWrite

The bad query parameters are the 'mid=qna&act=dispBoardWrite' bit.

Unlike normal browsers following links with bad query parameters, the IPs requesting these URLs don't go on to request my CSS or any other resources (such as a site favicon). The direct requests tend to have HTTP Referers of other pages here, and sometimes there are POST requests for URLs like '/~cks/space/blog/tech/index.php' with a HTTP Referer of a page here that includes these query parameters.

Some casual Internet searches suggest that this may be an attempt to exploit something called 'XPress Engine', which is apparently a Korean PHP-based CMS (cf and related pages). On the other hand, Google has also indexed a bunch of pages with these two query parameters in them (often along with a 'page=NN' additional parameter). So my overall conclusion is that I don't really know what's under this rock.

(Over the past ten days, 29 different IPs have tried to poke me this way. A number of them have SBL listings, specifically SBL 224619, SBL 214239, and SBL 211023. It turns out that I'd already blocked all of the IP ranges from these SBL listings, so those probes weren't getting anywhere in the first place.)
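If I wanted to block these probes by their query string instead of relying on IP address blocks, a sketch of a rewrite-based rule would be something like:

RewriteCond %{QUERY_STRING} (^|&)act=dispBoardWrite(&|$) [NC]
RewriteRule ^ - [F]

(This assumes mod_rewrite is already active in the relevant .htaccess or configuration, which it is here.)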

SeeingSomeWebProbing written at 01:48:08

2019-12-09

Firefox's peculiar handling of font choice preferences

On Twitter, I said:

I really don't understand how Firefox decides which particular 'Fonts for ...' preference controls the font sizes of any particular web page. They also seem to keep changing them.

If you look at Firefox's Preferences 'General' tab and scroll down, you'll discover a 'Fonts and Colors' preference which offers you the chance to set your default font and its size, and also has an 'Advanced...' option. As it happens, that default font is basically an illusion. If you go into the Advanced option, you'll discover that you can set fonts and font sizes for a whole range of, well, things (it's not quite languages). In particular you can set them for both 'Latin' and 'Other Writing Systems', and they can be different (and will be, if you only change one).

All of this matters because as far as I know there is no master preference that overrides everything. You cannot say 'in all things, I want my normal font to be X size and my monospace font to be Y size'. Instead you have to set this one by one in as many writing systems or languages as you think Firefox will ever decide web pages are in. If you miss one, your font sizes (and even font choices) can wind up weird and unsightly on some web pages.

On top of that, Firefox seems to sometimes change its view of what 'writing system' a particular web page is in even if nothing in particular changes in the web page itself. Wandering Thoughts has been serving pages with the exact same lack of language and writing system information for years, but I'm pretty sure that at some point some versions of Firefox flipped between using my 'Latin' font choices and my 'Other Writing Systems' ones (or, at the time, my non-choices for the latter; I hadn't set them). In addition, I don't think there's any easy way to see what font preference Firefox is using for any particular web page. You can see what character set encoding it thinks a page is in (and change it), but that's not the same thing as the 'writing system'.

(The writing system is also not the same thing as the language set in HTML attributes, but I suspect that Firefox generally derives the writing system from the language, if there is one.)

Unfortunately I suspect that none of this will ever really change. In the modern web, most websites explicitly set all of their font choices and font sizes, so these preferences are probably only ever used on a minority of websites, and the Firefox developers have higher priority work to do. Font preferences certainly wouldn't be the only bit of Firefox preferences and UI that are quietly being neglected that way.

(Why I care about this as much as I do is a discussion for another entry.)

FirefoxFontChoicePreferences written at 01:24:55

2019-11-13

How to make a rather obnoxiously bad web spider the easy way

On Twitter, I recently said:

So @SemanticVisions appears to be operating a rather obnoxiously bad web crawler, even by the low standards of web crawlers. I guess I have the topic of today's techblog entry.

This specific web spider attracted my attention in the usual way, which is that it made a lot of requests from a single IP address and so appeared in the logs as by far the largest single source of traffic on that day. Between 6:45 am local time and 9:40 am local time, it made over 17,000 requests; 4,000 of those at the end got 403s, which gives you some idea of its behavior.

However, mere volume was not enough for this web spider. Instead it elevated itself with a novel behavior that I have never seen before. Instead of issuing a single GET request for each URL it was interested in, it seems to have always issued the following three requests:

[11/Nov/2019:06:54:03 -0500] "HEAD /~cks/space/<A-PAGE> HTTP/1.1" [...]
[11/Nov/2019:06:54:03 -0500] "HEAD /~cks/space/<A-PAGE> HTTP/1.1" [...]
[11/Nov/2019:06:54:04 -0500] "GET /~cks/space/<A-PAGE> HTTP/1.1" [...]

In other words, in immediate succession (sometimes in the same second, sometimes crossing a second boundary as here) it issued two HEAD requests and then a GET request, all for the same URL. For a few URLs, it came back and did the whole sequence all over again a short time later for good measure.

In the modern web, issuing HEAD requests without really good reasons is very obnoxious behavior. Dynamically generated web pages usually can't come up with the reply to a HEAD request short of generating the entire page and throwing away the body. Sometimes this is literally how the framework handles it (via). Issuing a HEAD and then immediately issuing a GET is making the dynamic page generator generate the page for you twice; adding an extra HEAD request is just the icing on the noxious cake.

Of course this web spider was bad in all of the usual ways. It crawled through links it was told not to use, it had no rate limiting and was willing to make multiple requests a second, and it had a User-Agent header that didn't include any URL to explain about the web spider, although at least it didn't ask me to email someone. To be specific, here is the User-Agent header it provided:

Mozilla/5.0 (X11; compatible; semantic-visions.com crawler; HTTPClient 3.1)

All of the traffic came from the IP address 144.76.198.133, which is a Hetzner IP address and currently resolves to a generic 'clients.your-server.de' name. As I write this, the IP address is listed on the CBL and thus appears in Spamhaus XBL and Zen.

(The CBL lookup for it says that it was detected and listed 17 times in the past 28 days, the most recent one being at Tue Nov 12 06:45:00 2019 UTC or so. It also claims a cause of listing, but I don't really believe the CBL's one for this IP; I suspect that this web spider stumbled over the CBL's sinkhole web server somewhere and proceeded to get out its little hammer, just as it did here.)
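The sort of rewrite-based block I use for web spiders like this is straightforward; a sketch of one keyed off this spider's User-Agent header would be:

RewriteCond %{HTTP_USER_AGENT} semantic-visions\.com [NC]
RewriteRule ^ - [F]

(Blocking by IP address with a 'Deny from' line is the other obvious option for something this persistent.)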

PS: Of course even if it was not hammering madly on web servers, this web spider would probably still be a parasite.

WebSpiderRepeatedHEADs written at 22:41:50

My mistake in forgetting how Apache .htaccess files are checked

Every so often I get to have a valuable learning experience about some aspect of configuring and operating Apache. Yesterday I got to re-learn that Apache .htaccess files are checked and evaluated in multiple steps, not strictly top to bottom, directive by directive. This means that certain directives can block some later directives while other later directives still work, depending on what sort of directives they are.

(This is the same as the main Apache configuration file, but it's easy to lose sight of this for various reasons, including that Apache has a complicated evaluation order.)

This sounds abstract, so let me tell you the practical story. Wandering Thoughts sits behind an Apache .htaccess file, which originally was for rewriting the directory hierarchy to a CGI-BIN but then grew to also be used for blocking various sorts of significantly undesirable things. I also have some Apache redirects to fix a few terrible mistakes in URLs that I accidentally made.

(All of this place is indeed run through a CGI-BIN in a complicated setup.)

Over time, my .htaccess grew bigger and bigger as I added new rules, almost always at the bottom of the file (more or less). Things like bad web spiders are mostly recognized and blocked through Apache rewriting rules, but I've also got a bunch of 'Deny from ..' rules because that's the easy way to block IP addresses and IP ranges.

Recently I discovered that a new rewrite-based block that I had added wasn't working. At first I thought I had some aspect of the syntax wrong, but in the process of testing I discovered that some other (rewrite-based) blocks also weren't working, although some definitely were. Specifically, early blocks in my .htaccess were working but not later ones. So I started testing block rules from top to bottom, reading through the file in the process, and came to a line in the middle:

RewriteRule ^(.*)?$ /path/to/cwiki/$1 [PT]

This is my main CGI-BIN rewrite rule, which matches everything. So of course no rewrite-based rules after it were working because the rewriting process never got to them.

You might ask why I didn't notice this earlier. Part of the answer is that not everything in my .htaccess after this line failed to take effect. I had both 'Deny from ...' and 'RedirectMatch' rules after this line, and all of those were working fine; it was only the rewrite-based rules that were failing. So every so often I had the reassuring experience of adding a new block and looking at the access logs to see it immediately rejecting an active bad source of traffic or the like.

(My fix was to move my general rewrite rule to the bottom and then put in a big comment about it, so that hopefully I don't accidentally start adding blocking rules below it again in the future.)
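In outline, the fixed ordering looks something like this sketch, with illustrative rules standing in for the real ones:

# Blocking rules go first.
RewriteCond %{HTTP_USER_AGENT} some-bad-spider [NC]
RewriteRule ^ - [F]
Deny from 192.0.2.0/24

# The catch-all CGI-BIN rewrite stays last; any rewrite-based
# blocks added below it will never be reached.
RewriteRule ^(.*)?$ /path/to/cwiki/$1 [PT]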

PS: It looks like for a while the only blocks I added below my CGI-BIN rewrite rule were 'Deny from' blocks. Then at some point I blocked a bad source by both its IP address and its (bogus) HTTP referer in a rewrite rule, and at that point the gun was pointed at my foot.

HtaccessOrderingMistake written at 01:07:37

