Wandering Thoughts


It's possible for Firefox to forget about:config preferences you've set

Firefox has a user preferences system, exposed through its 'Settings' or 'Preferences' system (also known as about:preferences) and also through the more low-level configuration editor (aka about:config). As is mentioned there and covered in somewhat more detail in what information is in your profile, these configuration settings (and also your preferences settings) are stored in your profile's prefs.js file.

You might think that once you manually set something in about:config, your setting will be in prefs.js for all time until you go back into about:config and change or reset it. However, there's a way that Firefox can quietly drop your setting. If you've set something in about:config and your setting later becomes Firefox's default, Firefox will normally omit your manual setting from your prefs.js at some point. For example, if you manually enable HTTP/3 by setting network.http.http3.enabled to true and then Firefox later makes enabling HTTP/3 the default (as it plans to), your prefs.js will wind up with no setting for it.

(You can guess that this is going to happen because Firefox will un-bold an about:config value that you manually change back to its initial default value; about:config has no UI state for a preference that you've manually set to the same value as the default.)

For the most part this is what you want. It cleans up old settings that are no longer necessary, so your prefs.js doesn't grow without bound. However, it can be confusing in one situation: if Firefox later changes its mind about the default. Going back to the HTTP/3 situation, if Mozilla decides that turning on HTTP/3 was actually a mistake and defaults it to off again, your Firefox will wind up with HTTP/3 off even though you explicitly enabled it. You may remember that you explicitly turned HTTP/3 on, and be left wondering why it's off now.

HTTP/3 is a big ticket item so you might have heard about Mozilla going back and forth, but Mozilla also changes the defaults for lots of other preferences over time. For instance, I've tweaked my media autoplay preferences repeatedly over time, and I suspect that some Firefox updates have changed the defaults to match my settings (removing them from my prefs.js) and then possibly changed them again later.

If you have any settings that are really important to always be there, I think you may be able to manually create a user.js with them. Otherwise, this is mostly something to remember if you ever wind up wondering how something you remember explicitly setting has changed.
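To illustrate what that would look like: a user.js file lives in the same profile directory as prefs.js and uses the same user_pref() syntax, and Firefox re-applies it on every startup, so settings in it survive even if Firefox drops them from prefs.js. A minimal sketch, using the HTTP/3 preference from above as the example:

```javascript
// user.js in your Firefox profile directory.
// Firefox reads this at every startup and re-applies these settings,
// so they persist even if Firefox removes them from prefs.js.
user_pref("network.http.http3.enabled", true);
```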

PS: To be clear, I think that Firefox is making a sensible decision (and probably the right decision) in not having a special state for 'manually set but to the same value as the default'. That would need a more complicated UI and more code for something that we almost never care about.

FirefoxVanishingPrefs written at 00:07:30


Firefox's slow takeover of the address bar's space

In the current Firefox 88, and I believe in the next version as well (currently Firefox Beta), part of the address bar is a '...' menu for "Page actions". By using the right mouse button on items in this menu, or on the icons on the right side of the address bar, you can add or remove certain icons from the right side, such as the "Bookmark this page" star. If you start up a current Firefox Nightly, you will discover that the three dots of the Page Actions menu are gone, as is your ability to remove any icons from the address bar, including both the "Bookmark this page" star and any that may be put there by some of your addons.

Some of the address bar icons have always been non-removable. You can't get rid of "Toggle reader mode" on the right or any of the left side icons (usually the trackers information icon and the site information icon, sometimes with others). But the total removal of "Page actions" is new, and I believe it leaves you with no access to actions like "Copy link" (although you can achieve the same results through multi-step actions), and of course it leaves you with no way to quiet down addons that normally put informational icons in the address bar. Perhaps the Mozilla developers intend the current state of affairs as an intermediate step on the way to a future state, but I can't help but think that this is actually the final state. It's not surprising and in a way it's inevitable.

The space for URLs in the address bar has always consumed a significant amount of valuable horizontal space in the toolbar. If you want to add more icons to the general toolbar, they're going to wind up stealing space from the URL in one way or another. At one point, this might have been considered unacceptable, but those days are long gone. I can think of several reasons for the fall of saving space for the URL:

  • Many URLs themselves got longer and longer, so that you couldn't see anything like the full URL in any reasonable width browser window. Much of a typical URL also became meaningless to a typical person.

  • Most people don't (and didn't) use the URLs even when they're visible, as demonstrated by various things. I don't think that this started with smartphones, but they certainly demonstrated that you didn't have to see the URL (because they just couldn't show more than a tiny scrap of it to you).

  • On desktops there's been a move toward using applications in full screen mode, which appears to leave lots of room for URLs in the address bar even if the browser developers add icons to it.

The increasing trend of websites to require wider and wider windows to look reasonable on desktops strongly suggests to me that I now use an unusually narrow browser window by modern tacit standards. This narrow window leaves me with not very much room for the actual URL after Firefox Nightly forces all of these icons down my throat; even with the "Page actions" menu, I don't have enough room for the full URLs of Wandering Thoughts articles, which is sort of ironic.

(Since I literally never use Firefox bookmarks I feel rather disgruntled that I'm forced to make space for what is effectively a land mine in my browsers. I can take the button out in my personal build, but some of my Firefox usage is of the standard version.)

My impression is also that extensive customization of Firefox is now out of fashion within Mozilla. The "Page actions" menu and the related ability to customize what did and didn't appear in the address bar are what I would call old-style Firefox, and don't fit in the new approach, where the Firefox UI designers seem to want to give everyone the same experience.

(This elaborates on some tweets of mine.)

FirefoxUrlbarTakeover written at 00:28:56


A Firefox surprise from disabling dom.event.clipboardevents.enabled

One of the things that browsers allow sites to do with JavaScript is to intercept and manipulate attempts to copy text out of them or paste text into them. Websites use this to do obnoxious things like stopping you from pasting email addresses and passwords into their forms or mangling the text you copy out (which is potentially a security risk, since what you think you're going to paste is not necessarily what you actually get). When I heard about this at some point (I'm not sure when, but it was no later than mid 2016), I went into about:config in all of my Firefox instances and disabled the dom.event.clipboardevents.enabled preference, which makes Firefox ignore Javascript attempts to interfere with cut and paste. Everything went along fine for years and years, with no visible downside, and I completely forgot about it.

(According to some searching of MDN, this controls HTMLElement.oncopy, HTMLElement.onpaste, and HTMLElement.oncut.)
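For concreteness, the kind of interception this preference disables looks something like the following sketch (browser-side JavaScript; the replacement text is invented for illustration, and in a real page 'document' is supplied by the browser):

```javascript
// Sketch of site JavaScript that rewrites what lands on the clipboard.
// With dom.event.clipboardevents.enabled set to false, Firefox simply
// never delivers the 'copy' event to this handler.
function installCopyMangler(doc) {
  doc.addEventListener("copy", function (event) {
    // Replace whatever the user actually selected with the site's text.
    event.clipboardData.setData("text/plain", "Read more at our site!");
    event.preventDefault(); // stop the browser's normal copy
  });
}
```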

Recently I wanted to copy some Grafana panels from one Grafana server to another, which in the modern web application way you do by viewing the JSON that defines the panel, copying it to the clipboard, going to the tab with your dashboard on the other server, and pasting in the JSON to overwrite the configuration of some victim panel. Old versions of Grafana had a handy 'copy to clipboard' button that did this for you, but in 7.5 you have to do it by hand (via the menu item). Whenever I did this, I got mangled text, with most of the JSON elided and replaced with a '…' Unicode character.

After help from several people and experimentation with a completely clean Firefox profile, I narrowed this down to having dom.event.clipboardevents.enabled disabled. I don't know exactly what Grafana is doing here with its HTML, DOM, CSS, and JavaScript, but apparently it absolutely has to post-process the text when it's copied or you get something that isn't even the text being currently displayed on the screen, much less the full JSON that you need.

In light of this glitch (and because working with Grafana is somewhat important for me), I've reverted this preference to its default enabled state. In my main Firefox, this is pretty harmless because I have JavaScript almost entirely disabled through uMatrix, so websites can't intercept my cut and paste in the first place. In the Firefox profile I use to run all the sites that need JavaScript, I will have to hope that I don't run into any that refuse to let me paste or copy text; if I do, I will have to temporarily toggle the preference again. Hopefully the rise of password managers has made websites less silly about pasting things into form fields.

(Via this article I found a suggestion of the Luminous addon, but brief experimentation suggests that something like Grafana is completely beyond the ability of an addon like this to do anything sensible with.)

(This elaborates on some tweets. I'm not sure I knew about Ctrl-A and Ctrl-C before, although I really should have; they're right there in Firefox's 'Edit' menu. Which I almost never look at.)

FirefoxClipboardeventsIssue written at 23:42:42


Learning about the idea of the HTTP self-post

Suppose that you're on website B and it wants to send a large chunk of information to some endpoint on website A, through your browser. If this was a small amount of information, website B could use a HTTP redirect with the information stuffed into one or more query parameters. However, HTTP GET query parameters have a size limit and sometimes you have an especially verbose chunk of information to transfer. Enter what I've now seen called the "HTTP self-post". In a HTTP self-post, website B serves your browser a HTML page with a pre-filled POST form pointing to the endpoint on website A, and adds some Javascript to the page to immediately submit the form when the page loads.

(The page's HTML can have a little message telling you to 'Submit' things in case you have Javascript off.)
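A sketch of the page that website B might serve (the host names, endpoint, and field name here are all invented for illustration):

```html
<!-- Served by website B; submits itself to website A's endpoint
     as soon as the page loads. -->
<html>
  <body onload="document.forms[0].submit()">
    <form method="POST" action="https://website-a.example/endpoint">
      <input type="hidden" name="payload"
             value="...the large chunk of information...">
      <!-- Fallback for people with Javascript off: -->
      <noscript><input type="submit" value="Submit"></noscript>
    </form>
  </body>
</html>
```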

In one view, this is pretty close to cross-site request forgery and can only be told apart from it based on intent, which the browser can't really observe. In another view, this is essentially a POST based HTTP redirect, done through HTML and Javascript only because HTTP explicitly has no such thing (for good reasons). Of course, both views can be true at once.

You might ask, as I did when I read about this today, why website B doesn't make its own POST request to website A's endpoint instead of relaying its block of information through your browser. One reason is that website B may not be able to communicate directly with website A. Website B might be your corporate single sign on portal while website A is an internal web server inside your group's internal networks.

Another reason is that your browser may have to go back to website A anyway and coordinate with the information that website B is delivering to A. Again, consider SSO. Your browser has to return to website A to use it (and perhaps to be authorized), but website A needs to have the information from website B before it can do anything with you. It's simpler and more reliable to bolt these two things together, so that either you wind up on website A with it having all of the information necessary to move forward (in theory) or you go nowhere.

(In this scenario you visited website A as an unidentified and unauthenticated person and it sent you to website B to get identified. Your browser is the only thing guaranteed to be able to reach both websites, because if you can't reach one or the other nothing works.)

Hopefully I will never need to build a web system that uses HTTP self-post(ing), but at least now my ideas of how things can be done with HTTP have been expanded a bit. And if I do build such a system, I will have to carefully consider how to shield it from CSRF, since the two are so close to each other.

(This elaborates on a tweet of mine, which was written while I was slowly reading my way through the mod_auth_mellon user guide.)

HTTPSelfPostWhatIs written at 23:45:13


HTTPS is really multiple protocols these days

For all of its warts, HTTP is essentially a single protocol; if you see 'http://...', you know what pretty much anything will do with it, and they're all going to do about the same thing. This is not the case for HTTPS. HTTPS looks like a single protocol, invoked by 'https://....', but it's really a bunch of protocols all dumped in a big sack labeled 'HTTPS' on the outside. The actual protocol that clients will use for HTTPS URLs can vary widely.

Some of these protocols are closely related to each other, like the HTTPS protocols that are all some version of TLS over TCP with plain HTTP/1.1 spoken over the encrypted channel. Even there, there's divergence between clients; your old program may talk TLS 1.1, my somewhat more recent one TLS 1.2, and current web browsers TLS 1.3. Others are not as related; there is HTTP/2, which still uses TLS over TCP as a transport protocol but has a quite different thing inside the encrypted channel. And lately there is the pretty divergent HTTP/3 (also known as HTTP3), which changes the low level transport protocol to QUIC, which means that its traffic uses UDP instead of TCP.

(Using UDP instead of TCP matters because there's a host of differences in how IP networks handle them.)
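As an illustration of how these protocols coexist behind one URL scheme: a server that also speaks HTTP/3 typically advertises this to clients through an Alt-Svc response header sent over a regular TLS-over-TCP connection, something like:

```
Alt-Svc: h3=":443"; ma=86400
```

A client that understands the header may then switch to QUIC for later requests; one that doesn't just keeps using TCP, and both are talking to 'the same' https URL.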

What this means in concrete terms is that if you put a https: URL into two browsers or more generally two clients, how they talk to the server may be completely different. Since they're different, especially HTTP/3, one way may work and the other way may fail. Naturally this complicates troubleshooting, especially since most browsers and programs don't tell you what sort of HTTPS they're actually using.

(If you're not up on the latest web developments, it may not even occur to you that there are multiple types of 'HTTPS' out there.)

There's a straightforward but unappetizing reason for this, which is that there's essentially no chance of the web gaining an additional URL scheme such as 'http3'. Since we're stuck in a world where the only two URL schemes are 'http' and 'https', everything must be called 'https' regardless of what its actual protocol is (provided that it's encrypted with something that's at least called TLS).

(For similar reasons, all future server certificate and public key based encryption schemes used for web traffic are likely to be called 'TLS'. Sadly, they will probably have to look like it from the outside because of an excess of middleware boxes that think they know what TLS handshakes look like and reject anything that's too divergent.)

HTTPSMultipleProtocols written at 22:02:27


Safari is now probably the influential wild card browser for user privacy

Today, Chrome is by far the dominant web browser, which gives it significant influence and weight, including in the area of user privacy. But Chrome is beholden to Google and Google is beholden to the torrents of money that pour in from intrusive Internet advertising and the associated consumer surveillance business. This means that there are limits on what Chrome will do; for instance, it's not likely to be aggressive about not sending Referer headers. In general, Chrome's support of user privacy will always be limited and conditional.

Firefox isn't beholden to Google (at least not in the same way), but sadly its overall usage is relatively low. Firefox still matters, for various reasons, but its influence is probably more moral than concrete at this point. People may be swayed by what Firefox does, including in the area of user privacy, but with low usage they're probably not directly affected by it. Inevitably Firefox generally has to wield its remaining influence carefully, and radical moves to help user privacy don't actually help all that many people; not all that many people use Firefox, and websites probably won't change much to accommodate them.

(Such moves help me to some extent, but I'm already taking extensive steps there that go well beyond any browser's normal behavior.)

Safari definitely isn't beholden to Google, and it has enough usage to matter. Partly this is because of absolute numbers, but partly it's because Safari is the browser for what is generally an important and valuable market segment, namely iPhone users (sure, and iPads). If Safari does something and your website doesn't go along, you may have just entirely lost the iPhone market, which is generally seen as more willing to spend money (and more upscale) than Android users. This is true in general but especially true in user privacy; Apple has a brand somewhat built on that and it has less business that's affected by being strict on it (especially in the browser as opposed to apps).

If Apple decides to have Safari do something significant for user privacy, it will affect a significant number of people in a valuable market segment. My guess is that this gives it outsized influence and makes it the wild card in this area. If Safari became aggressive about not sending Referer headers, for example, it probably becomes much more likely that Chrome will grumble and follow along in some way.

(Conversely, if Safari refuses to implement some alleged 'feature', it becomes much less useful even if Chrome does implement it.)

SafariUserPrivacyWildcard written at 00:16:59


The fading HTTP Referer header and (Google) Search paywall bypasses

There are a lot of newspaper and other media places that have a general paywall (where you must be a subscriber to see their content), but also make an exception to this paywall if the visitor is coming directly from an Internet search (or at least a Google search; I don't know if these places will let visitors from other search engines in). Today, how these places generally know that you're coming from an Internet search is the HTTP Referer header that your browser puts on your request when you follow the link from the search page. That would be the same HTTP Referer header that's fading away, fundamentally because browsers don't like it.

Today it occurred to me that this creates some interesting issues both for Internet search engines and for places currently using a permeable paywall. For Internet search engines, it probably makes them willing to set an explicit Referrer-Policy header when browsers start defaulting to something inconvenient for them (Google Search and Bing don't appear to do this today). Search engines get a variety of things out of visibly sending traffic to media sites, or really out of visibly sending traffic to anywhere, so they have an incentive to make this traffic visible.
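Concretely, a search engine that wanted the origin to keep flowing even under stricter browser defaults could send a response header like this sketch:

```
Referrer-Policy: origin
```

With that policy, browsers following a link from the results page send just the site's origin (eg 'https://google.com/') as the Referer: enough to identify the traffic source, without the search terms.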

(I'd say that search engines have additional options if browsers refuse to cooperate, but the company behind the dominant Internet search engine is also the company behind the dominant Internet browser, so there's no risk of that. Chrome is going to do what is useful for Google, including sending Referer headers.)

If Internet search engines keep sending Referer headers, then places with permeable paywalls may not need to do anything. In theory, as the default Referrer-Policy changes, such places will need to live without knowing the terms that their visitors searched for. In practice I suspect that they already see a lot of visits with only the origin (the website, eg 'https://google.com') and no search terms, because that's what I do here. If for some reason the Internet search engines stop sending Referer entirely and won't do anything else to signal the origin, such as adding special query parameters to the URL, then who knows. Perhaps the paywalls will get less permeable than they are today. If nothing else, such a development would be interesting to see.

(Before I started thinking through what search engines would likely do, I guessed that publishers would wind up with a real problem. Now I think they probably will be able to just keep on as usual, because search engines are likely to take steps to keep things running as normal.)

RefererAndSearchPaywallBypass written at 00:11:54


Wrangling HTTP and HTTPS versions of the same Apache virtual host

I recently tweeted:

It has been '0' days since I have been bitten by the fact that in Apache you usually have separate configurations for the HTTP and the HTTPS version of a site even if you want them the same, so you can change the wrong one when testing and be confused when a fix doesn't work.

(This was part of discovering how not to use RewriteRule in a reverse proxy.)

We have a bunch of named virtual hosts that we have set up for people; many although not all of them are reverse proxies to user run web servers. Increasingly people ask for (and we provide) both a HTTP and a HTTPS version of the site. The most natural and least complicated way to do this is to have a big Apache configuration file with all of our named vhosts and to repeat the configuration for a host between the HTTP and HTTPS versions.

This tends to look something like:

<VirtualHost 128.100.X.XX:80>
  ServerName ahost.cs.<etc>

  [all of ahost's configuration]
</VirtualHost>

<VirtualHost 128.100.X.XX:443>
  ServerName ahost.cs.<etc>
  SSLEngine on
  [TLS certificate settings]

  [all of ahost's configuration]
</VirtualHost>

When you do this and you have some RewriteRule settings or something complex that you're trying to fix, it's possible to accidentally change the HTTPS version of the site but not the HTTP version and then test against the HTTP version. Or vice versa. This is made easier when the site's configuration is big enough to push the other version of the site off your editor screen.

One answer is that we're shooting both ourselves and other people in the foot by even having a meaningful HTTP site here. If people ask for a HTTPS site, we should default to making their HTTP site redirect everything to the HTTPS one (and probably the HTTPS one should set a HSTS header). This would make the mistake impossible to commit, because the only site with real configuration would be the HTTPS one.

Failing that, what I would like is some way to have the same block apply to both the HTTP and HTTPS virtual host. In my imagination, it would look something like:

<VirtualHost 128.100.X.XX:80 128.100.X.XX:443>
  ServerName ahost.cs.<etc>

  <IfPort :443>
    SSLEngine on
    [TLS certificate settings]
  </IfPort>

  [all of ahost's configuration]
</VirtualHost>

(Apache 2.4 has an <If> directive, but my understanding is that it takes effect too late to do this.)

One possible answer is to put the vhost's real configuration in a separate file that is Include'd in both. Unfortunately I don't think this scales for us; we don't like splitting things up this way in general, and we have enough virtual hosts where the HTTP and HTTPS versions are different that they would be exceptions in this scheme in one way or another. We'd prefer to keep everything in one file where it's at least readily visible all at once.
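For reference, the Include approach we're not fond of would look something like this (the shared file's path here is invented for illustration):

```apache
<VirtualHost 128.100.X.XX:80>
  ServerName ahost.cs.<etc>
  # All of ahost's actual configuration lives in one shared file.
  Include /etc/apache2/vhost-common/ahost.conf
</VirtualHost>

<VirtualHost 128.100.X.XX:443>
  ServerName ahost.cs.<etc>
  SSLEngine on
  [TLS certificate settings]
  Include /etc/apache2/vhost-common/ahost.conf
</VirtualHost>
```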

Another possible answer is mod_macro (first brought to my attention by @ch2500). Unfortunately as far as I can see this involves adding an extra configuration block for each such virtual host (for the <Macro> definition), and makes the actual HTTP and HTTPS <VirtualHost> blocks more magical and harder to read. I'm not all that enthused and my co-workers would almost certainly reject this as too much magic even if I proposed it.

(Some but not all of our virtual hosts could be done with a common <Macro> template, but that would create even more magic.)

ApacheVhostHTTPAndHTTPS written at 23:24:38


How not to use Apache's RewriteRule directive in a reverse proxy

Recently we needed to set up a reverse proxy (to one of our user run web servers) that supported WebSocket for a socket.io based user application. Modern versions of Apache have a mod_proxy_wstunnel module for this, and you can find various Apache configuration instructions for how to use it on places like Stackoverflow. The other day I shot my foot off by not following these instructions exactly.

What I wrote was a configuration stanza that looked like this:

RewriteCond %{REQUEST_URI} ^/socket.io [NC]
RewriteCond %{QUERY_STRING} transport=websocket [NC]
# This is where my mistake is:
RewriteRule (.*) "ws://ourhost:port/$1" [P,L]

During some debugging I discovered that this was causing our main Apache server to make requests to the backend server that looked like 'GET //socket.io/.... HTTP/1.1'. The user's application was very unhappy with the leading double slash, as well it might be.

This is my old friend 'how not to use ProxyPass', back in another form. The problem is that we aren't matching the leading slashes between the original path and the proxied path; we're taking the entire path of the request (with its leading /) and putting it on after another slash. The correct version of the RewriteRule, as the Apache documentation will show you, is:

RewriteRule ^/?(.*) "ws://ourhost:port/$1" [P,L]

In my example the '?' in the regular expression pattern is unnecessary since this rewrite rule can't trigger unless the request has a leading slash, but the mod_proxy_wstunnel version doesn't require such a match in its rewrite conditions. On the other hand, I'm not sure I want to enable 'GET socket.io/...' to actually work; all paths in GET requests should start with a slash.
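You can see the difference mechanically in a quick sketch (JavaScript regexes here; Apache's patterns are PCRE, but they behave the same way for this case):

```javascript
// What mod_rewrite would capture for a request for /socket.io/abc:
const path = "/socket.io/abc";

// My mistaken pattern keeps the leading slash in the capture, so
// gluing it onto "ws://ourhost:port/" produces a double slash.
const bad  = "ws://ourhost:port/" + path.match(/(.*)/)[1];
// The correct pattern consumes an optional leading slash first.
const good = "ws://ourhost:port/" + path.match(/^\/?(.*)/)[1];

console.log(bad);  // ws://ourhost:port//socket.io/abc
console.log(good); // ws://ourhost:port/socket.io/abc
```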

PS: This is a ws: reverse proxy instead of a wss: reverse proxy because we don't support TLS certificates for people running user run web servers (they would be quite difficult to provide and manage). The virtual host that is reverse proxied to a user run web server can support HTTPS, and the communication between the main web server and the user run web server happens over our secure server room network.

Sidebar: How I think I made this mistake

We initially tried to get this person's reverse proxied environment working inside a <Location> block for their personal home page on our main server, where the public path was something like '/~user/thing/'. In this situation I believe that what would be the extra leading slash has already been removed by Apache's general matching, and so the first pattern would have worked. For various reasons we then shifted them over to a dedicated virtual host, with no <Location> block, and so suddenly the '(.*)' pattern was now scooping up the leading / after all.

ApacheProxyRewriteRule written at 23:57:44


My Firefox addons as of Firefox 86 (and the current development version)

I was recently reminded that my most recent entry on what Firefox addons I use is now a bit over a year old. Firefox has had 14 releases since then, and it feels like the start of January 2020 was an entirely different age, but my Firefox addons have barely changed in the year and a bit since that entry. Since they have changed a very small amount, I'll repeat the whole list just so I have it in one spot for the next time around.

My core addons, things that I consider more or less essential for my experience of Firefox, are:

  • Foxy Gestures (Github) is probably still the best gestures extension for me for modern versions of Firefox (but I can't say for sure, because I no longer investigate alternatives).

    (I use some custom gestures in my Foxy Gestures configuration that go with some custom hacks to my Firefox to add support for things like 'view page in no style' as part of the WebExtensions API.)

  • uBlock Origin (Github) is my standard 'block ads and other bad stuff' extension, and also what I use for selectively removing annoying elements of pages (like floating headers and footers).

  • uMatrix (Github) is my primary tool for blocking Javascript and cookies. uBlock Origin could handle the Javascript, but not really the cookies as far as I know, and in any case uMatrix gives me finer control over Javascript which I think is a better fit with how the web does Javascript today.

  • Cookie AutoDelete (Github) deals with the small issue that uMatrix doesn't actually block cookies, it just doesn't hand them back to websites. This is probably what you want in uMatrix's model of the world (see my entry on this for more details), but I don't want a clutter of cookies lingering around, so I use Cookie AutoDelete to get rid of them under controlled circumstances.

    (However unaesthetic it is, I think that the combination of uMatrix and Cookie AutoDelete is necessary to deal with cookies on the modern web. You need something to patrol around and delete any cookies that people have somehow managed to sneak in.)

  • Stylus (Github) has become necessary for me after Google changed their non-Javascript search results page to basically be their Javascript search results without Javascript, instead of the much nicer and more useful old version. I use Stylus to stop search results escaping off the right side of my browser window.

Additional fairly important addons that would change my experience if they weren't there:

  • Textern (Github) gives me the ability to edit textareas in a real editor. I use it all the time when writing comments here on Wandering Thoughts, but not as much as I expected on other places, partly because increasingly people want you to write things with all of the text of a paragraph run together in one line. Textern only works on Unix (or maybe just Linux) and setting it up takes a bit of work because of how it starts an editor (see this entry), but it works pretty smoothly for me.

    (I've changed its key sequence to Ctrl+Alt+E, because the original Ctrl+Shift+E no longer works great on Linux Firefox; see issue #30. Textern itself shifted to Ctrl+Shift+D in recent versions.)

  • Cookie Quick Manager (Github) allows me to inspect, manipulate, save, and reload cookies and sets of cookies. This is kind of handy every so often, especially saving and reloading cookies.

The remaining addons I use I consider useful or nice, but not all that important on the large scale of things. I could lose them without entirely noticing the difference in my Firefox:

  • Open in Browser (Github) allows me to (sometimes) override Firefox's decision to save files so that I see them in the browser instead. I mostly use this for some PDFs and some text files. Sadly its UI isn't as good and smooth as it was in pre-Quantum Firefox.

    (I think my use of Open in Browser is fading away. Most PDFs and other things naturally open in the browser these days, perhaps because web sites have gotten grumpy feedback over forcing you to download them.)

  • Certainly Something (Github) is my TLS certificate viewer of choice. I occasionally want to know the information it shows me, especially for our own sites. The current Firefox certificate information display is almost as good as Certainly Something, but it's much less convenient to get to.

  • HTTP/2 Indicator (Github) does what it says; it provides a little indicator as to whether HTTP/2 was active for the top-level page.

  • ClearURLs (GitLab) is my current replacement for Link Cleaner after the latter stopped being updated. It cleans various tracking elements from URLs, like those 'utm_*' query parameters that you see in various places. These things are a plague on the web so I'm glad to do my little bit to get rid of them.

  • HTTPS Everywhere, basically just because. But in a web world where more and more sites are moving to using things like HSTS, I'm not sure HTTPS Everywhere is all that important any more.

As I've done for a long time now, I actually use the latest beta versions of uBlock Origin and uMatrix. I didn't have any specific reason for switching to them way back when; I think I wanted to give back a bit by theoretically testing beta versions. In practice I've never noticed any problems or issues.

I have some Firefox profiles that are for when I want to use Javascript (they actually use the official Mozilla Linux Firefox release these days, which I recently updated to Firefox 86). In these profiles, I also use Decentraleyes (also), which is a local CDN emulation so that less of my traffic is visible to CDN operators. I don't use it in my main Firefox because I'm not certain how it interacts with my setup of blocking (most) Javascript, and also much of what's fetched from CDNs is Javascript, which obviously isn't applicable to me.

(There are somewhat scary directions in the Decentraleyes wiki on making it work with uMatrix. I opted to skip them entirely.)

Firefox86Addons written at 23:27:03
