Firefox, WebExtensions, and Content Security Policies
Today, Grant Taylor left some comments on entries here (eg yesterday's). As I often do for people who leave comments here who include their home page, I went and visited Taylor's home page in my usual Firefox browser, then when I was done I made my usual and automatic Foxygestures gesture to close the page. Except that this time, nothing happened (well, Firefox's right mouse button popup menu eventually came up when I released the mouse button).
(For instance, WebExtensions are not allowed to inject code into some web pages or otherwise touch them.)
It's definitely clear that Foxygestures not working on Taylor's site is because of the site's Content Security Policy headers (I can make it work and not work by toggling those headers on and off), but it's not clear why. Foxygestures is at least partly using content scripts, which are supposed to not be subject to this issue if I'm reading the Firefox bug correctly, but perhaps there's something peculiar going on in what Foxygestures does in them. Firefox objects to one part of the current Content-Security-Policy header, which perhaps switches it to some extra-paranoid mode.
(I filed Foxygestures issue 283, if only so perhaps similar cases in the future have something to search for. There is Foxygestures issue 230, but in that the gestures still worked, the UI just had limitations.)
PS: This is where I wish for a Firefox addon that allows me to set or modify the CSP of web page(s) for debugging purposes, which I believe is at least possible in the WebExtensions API. Laboratory will do part of this but it doesn't seem to start from any existing site CSP, so the 'modifying' bit of my desires is at least awkward. mHeaderControl would let me block the CSPs of selected sites, at least in theory (I haven't tried it). It's a little bit surprising to me that you don't seem to be able to do this from within the Firefox developer tools, but perhaps the Firefox people thought this was a little bit too dangerous to provide.
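As a sketch of how such a debugging addon could work with the WebExtensions API (the listener shape is real; the site URL and policy string here are made-up examples, and no existing addon is implied), a blocking webRequest.onHeadersReceived listener can rewrite or drop a site's Content-Security-Policy response header before Firefox acts on it:

```javascript
// Sketch of CSP rewriting in a WebExtensions background script.
// rewriteCSP is a plain helper; only the registration at the bottom
// needs the real browser.webRequest API.

// Replace (or drop, if newPolicy is null) any Content-Security-Policy
// header in a response's header list.
function rewriteCSP(responseHeaders, newPolicy) {
  const headers = responseHeaders.filter(
    (h) => h.name.toLowerCase() !== "content-security-policy"
  );
  if (newPolicy !== null) {
    headers.push({ name: "Content-Security-Policy", value: newPolicy });
  }
  return headers;
}

// Register the listener when actually running as an addon.
if (typeof browser !== "undefined") {
  browser.webRequest.onHeadersReceived.addListener(
    (details) => ({
      responseHeaders: rewriteCSP(details.responseHeaders,
                                  "default-src 'self'"),
    }),
    { urls: ["*://example.org/*"] },  // hypothetical site under test
    ["blocking", "responseHeaders"]
  );
}
```

Starting from the site's existing CSP (the 'modifying' part of my wish) would just mean reading the old header's value out of `details.responseHeaders` before building the replacement.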
Why we like HTTP Basic Authentication in Apache so much
Our web server of choice here is Apache, and when we need some sort of access control for it for people, our usual choice of method is HTTP Basic Authentication (also MDN). This is an unusual choice these days; most people use much more sophisticated and user-friendly schemes, usually based on cookies and login forms and so on. We persist with HTTP Basic Authentication in Apache despite this because, from our perspective, it has three great advantages.
The first advantage is that it uses a username and a password that people already have, because we invariably reuse our existing Unix logins and passwords. This gets us out of more than making people remember (or note down) another login and password; it also means that we don't have to build and operate another account system (with creation, management, removal, tracking which Unix login has which web account, and so on). The follow on benefit from this is that it is very easy to put authentication restrictions on something, because we need basically no new infrastructure.
The second advantage is that because we use HTTP Basic Authentication in Apache itself, we can use it to protect anything. Apache is perfectly happy to impose authentication in front of static files, entire directory hierarchies, CGIs, or full scale web applications, whatever you want. For CGIs and full scale web applications, you can generally pass on the authenticated user name, which comes in handy for things that want that sort of information. This makes it quite easy to build a new service that needs authentication, since all of the work is done for you.
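As a concrete sketch of this (the path, realm name, and authentication provider here are made up for illustration; our real setup is backed by our existing Unix logins), protecting an entire URL hierarchy is only a few lines of Apache configuration:

```apache
# Hypothetical example: require a valid login for everything
# under /internal/, whether it's static files, CGIs, or a
# proxied web application.
<Location "/internal/">
    AuthType Basic
    AuthName "Staff only"
    # The provider details depend on your setup; 'file' with an
    # htpasswd file is the simplest stock option.
    AuthBasicProvider file
    AuthUserFile "/etc/apache2/internal.htpasswd"
    Require valid-user
</Location>
```

CGIs behind this see the authenticated login as the REMOTE_USER environment variable, which is the 'pass on the authenticated user name' part.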
The third advantage is that when we put HTTP Basic Authentication in front of something, we don't have to trust that thing as much. This isn't just an issue of whether we trust its own authentication system (when it has one); it's also how much we want to have to trust the attack surface it exposes to unauthenticated people. When Apache requires HTTP Basic Authentication up front, there is no attack surface exposed to unauthenticated people; to even start talking to the real web app, you have to have valid login credentials. We have to trust Apache, but we were doing that already.
(Of course this does nothing to protect us from someone who can get the login credentials of a user who has access to whatever it is, but that exposure is always there.)
In an environment of sophisticated web services and web setups, there are probably ways to get all of this with something other than HTTP Basic Authentication. However, we don't have such an environment. We do not do a lot with web servers and web services, and our need for authentication is confined to things like our account request handling system, our self-serve DHCP registration portals, small CGI frontends to let people avoid the Unix command line, and various internal sysadmin services. At this modest level, the ease of Apache's HTTP Basic Authentication is very much appreciated.
Wget is not welcome here any more (sort of)
Today, someone at a large chipmaker that will go unnamed decided (or apparently decided) that they would like their own archived copy of Wandering Thoughts. So they did what one does in this situation; they got out wget, pointed it at the front page of the blog, and let it go. I was lucky in a way; they started this at 18:05 EST and I coincidentally looked at my logs around 19:25, at which point they had already made around 3,000 requests, because that's what wget does when you turn it loose.
This is not the first time that people have had the bright idea to just turn to wget to copy part or all of Wandering Thoughts (someone else did it in early October, for example), and it will not be the last time. However, it will be the last time they're going to be even partially successful, because I've now blocked wget's default user agent.
I'm not doing this because I'm under any illusions that this will stop people from grabbing a copy of Wandering Thoughts, and in fact I don't care if people do that; if nothing else, there are plenty of alternatives to wget (starting with, say, curl). I'm doing this because wget's spidering options are dangerous by default. If you do the most simple, most obvious thing with wget, you flood your target site and perhaps even spill over from it to other sites. And, to be clear and in line with my general views, these unfortunate results aren't the fault of the people using wget. The people using wget to copy Wandering Thoughts are following the obvious path of least resistance, and it is not their fault that this is actually a bad idea.
(I could hope that someday wget will change its defaults so that they're not dangerous, but given the discussion in its manual about options like --random-wait, I am not going to hold my breath on that one.)
Wget is a power tool without adequate safeguards for today's web, so if you are going to use it on Wandering Thoughts, all I can do is force you to at least slow down, go out of your way a little bit, and perhaps think about what you're doing. This doesn't guarantee that people who want to use wget on Wandering Thoughts will actually set it up right so that it behaves well, but there is now at least a chance. And if they configure wget so that it works but don't make it behave well, I'm going to feel much less charitable about the situation; these people will have chosen to deliberately climb over a fence, even if it is a low fence.
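For what it's worth, making wget behave reasonably when mirroring a site takes explicit options; a sketch of a more polite invocation might look like this (the exact flags, rates, and depths you want will vary, and the URL and contact address are placeholders):

```shell
# Hypothetical example of a less impolite recursive fetch:
# pause between requests, rate-limit, stay within the one
# hierarchy, don't recurse forever, and identify yourself.
wget --wait=2 --random-wait --limit-rate=100k \
     --recursive --level=5 --no-parent \
     --user-agent="my-archiver/0.1 (contact: me@example.org)" \
     https://example.org/blog/
```

None of this is the default, which is exactly the problem.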
As a side note, one reason that I'm willing to do this at all is that I've checked the logs here going back a reasonable amount of time and found basically no non-spidering use of wget. There is a trace amount of it and I am sorry for the people behind that trace amount, but. Please just switch to curl.
(I've considered making my wget block send a redirect to a page that explains the situation, but that would take more energy and more wrestling with Apache .htaccess than I currently have. Perhaps if it comes up a lot.)
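As an illustration, the simpler version of such a block (just a 403, no explanatory redirect) is only a couple of mod_rewrite lines in a .htaccess file; this pattern is a sketch, not necessarily what Wandering Thoughts actually uses:

```apache
# Refuse anything identifying itself as stock wget.
RewriteEngine On
RewriteCond "%{HTTP_USER_AGENT}" "^Wget" [NC]
RewriteRule ^ - [F]
```

This matches wget's default 'Wget/<version>' User-Agent, which is exactly what anyone gets if they don't go out of their way to change it.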
PS: The people responsible for the October incident actually emailed me and were quite apologetic about how their wget usage had gotten away from them. That it did get away from them despite them trying to do a reasonable job shows just how sharp-edged a tool wget can be.
PPS: I'm somewhat goring my own ox with this, because I have a set of little wget-based tools and now I'm going to have to figure out what I want to do with them to keep them working on here.
Firefox's middle-click behavior on HTML links on Linux
When I wrote about my unusual use for Firefox's Private Browsing mode, I lamented in an aside that you couldn't attach custom behavior to middle-clicking links with modifier keys held down, at least on Linux. This raised an obvious question, namely what are the various behaviors of middle-clicking links on Linux with various modifier keys held down.
So here they are, for posterity, as of Firefox 63 or so:
- Middle click or Shift + middle click: your default 'open link in' behavior, either a new tab or a new window. For me, a new window.
- Ctrl + middle click: the alternate to your plain middle click behavior (so opening a new tab in the background, for me).
- Shift + Ctrl + middle click: open the link in a new tab and then do the reverse of your 'when you open a link in a new tab, switch to it immediately' preference.
If you have Firefox in its default preferences, where opening links in a new tab doesn't switch to it immediately, shift + ctrl + middle click will immediately switch to the new tab. If you have Firefox set to switch to new tabs immediately, shift + ctrl + middle click opens new tabs in the background.
Firefox on Linux appears to entirely ignore both Alt and Meta (aka Super) when handling middle clicks. It probably ignores other modifiers too, but I don't have any way of generating either CapsLock or NumLock in my X setup for testing. Note that your window manager setup may attach special meaning to Alt + middle clicks in windows (or Alt + the middle mouse button in general) that preempt the click from getting to Firefox; this was the case for me until I realized and turned it off temporarily for testing.
You might also wonder about modifiers on left clicks on links. In general, it turns out that adding modifiers to a left click turns it into a middle click. There is one interesting exception, which is that Alt plus left click ignores the link and turns your click into a regular mouse click on text; this is convenient for double-clicking words in links, or single-clicking to select sub-word portions of things.
(Perhaps I knew this at one point but forgot it or demoted it to reflexive memory. There's a fair amount about basic Firefox usage that I don't really think about and don't know consciously any more.)
Sadly, I suspect that the Firefox people wouldn't be interested in letting extensions attach custom behavior to Alt + middle clicks on links (with or without other modifiers), or Meta + middle clicks. These are really the only two modifiers that could sensibly have their behavior altered or modified, but since they're already ignored, allowing extensions to interpret them might cause disruption to users who've gotten used to Firefox not caring about either when middle-clicking.
As a side note, Shift plus the scroll wheel buttons changes the scroll wheel from scrolling up and down to scrolling left and right. Ctrl plus the scroll wheel buttons is text zoom, which is probably well known (certainly I knew it). Alt plus the scroll wheel is 'go forward/back one page', which I didn't know. Shift or Meta plus any other modifiers reverts the scroll wheel to its default 'scroll up/down' behavior, and Meta plus the scroll wheel also gives you the default behavior.
PS: Modifiers don't appear to change the behavior of right clicking at all; I always get the popup menu. The same is true if your mouse has physical rocker buttons, which Firefox automatically interprets as 'go forward one page' and 'go back one page'.
Update: There's a bunch of great additional information in the comments from James, including a useful piece of information about Shift plus right click. If you're interested in this stuff, you want to read them too.
Metadata that you can't commit into a VCS is a mistake (for file based websites)
I'll start with my tweet and @rt2800pci1's (first) reply:
@thatcks: I like having a file-based blog engine, but mine does make changing the 'category' of a post somewhat painful and a bit disruptive (it re-appears in syndication feeds). Still, I'm too annoyed by my own mistakes to not do it.
@rt2800pci1: Have you thought of using extended attributes on those files to tag them categorically?
In a file based website engine, any form of metadata that you can't usefully commit into common version control systems is a mistake.
(Some people would go further and say 'for any website', but I'll stick to file based websites for now.)
Using a file's modification time as the creation date of an entry? A mistake (that I've made). Using extended attributes to store tags or categories or other information? Again, a mistake. Having a SQLite database be the master source of information for anything? A mistake. Putting important entry information into essentially opaque JSON blobs that you can't read or edit by hand? A mistake (you can commit them, but you can't do many useful things in the VCS with them, such as diffing two copies).
Basically, the master version of everything should be in human readable plain text. I will somewhat reluctantly accept YAML as sufficiently close, and probably also nicely formatted JSON, but that's about it. You can compile all of this master information into efficient binary forms (as an SQLite database or whatever), but the compiled binary form should not be the canonical master form; it should be an optimization that you can recreate on demand. Similarly, if you use any filesystem metadata (either because it's convenient or because it's necessary), it should be created or set from text-based versions of the same information.
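As a purely hypothetical illustration of 'master version in plain text', an entry's metadata can live in a simple header block at the top of its file, which commits, diffs, and hand-edits cleanly:

```
Title: Some entry title
Written: 2018-11-25 22:10
Category: sysadmin
Atom-ID: tag:example.org,2018:/blog/some-entry

The entry text follows the header block...
```

Anything derived from this (an SQLite index, rendered HTML, even file modification times) can then be rebuilt from the committed text on demand.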
I come to this view the hard way. DWiki uses a bunch of file metadata for various things, and this has caused me any number of problems. The specific one that led to my tweet is that both the 'category' of an entry and its Atom syndication ID are based on its path in the filesystem; if I realize I made a mistake in the category of an entry and fix it, the entry's going to appear as a new entry in my syndication feed (under its new name). However the long standing one is that DWiki uses the file modification time for an entry as when it was written, which means in practice that I can't keep DWiki pages in a VCS and leads to various other hacks (since I sometimes need to update entries).
(This issue of Atom syndication IDs has come up before.)
Using file and filesystem metadata in your file based blog or website engine has an obvious and immediate attraction; it feels neat, clever, and appropriately Unixy. It's just that, in practice, it's a mistake (for several reasons) and over the long term it will bite you on the rear.
(Not using file metadata is one of the things I would now do differently in a file based blog engine.)
PS: Using the filesystem as a database is also a mistake in my opinion. It doesn't entirely violate the 'everything can be committed' principle, because VCSes will capture directory hierarchy state, but it's not really in a format that they like and deal well with.
Shooting myself in the foot by cargo-culting Apache configuration bits
I spent part of today working to put Prometheus's Blackbox prober behind a reverse proxy in Apache (to add TLS and some security to it). Unlike some other pieces of Prometheus, the Blackbox exporter is not designed for this and so its little web server generates HTML pages with absolute URLs like /config, which doesn't work too well when you've relocated it to be under /blackbox/ with your reverse proxying rules. Years and years ago I would have just been out of luck, but these days Apache has mod_proxy_html, which can rewrite HTML to fix URLs as it flows back through your Apache reverse proxy.
I've never used mod_proxy_html before so I did my usual thing with Apache modules when I just want to hack something together; I skimmed the official Apache documentation, decided it was confusing me, did some Internet searches for examples and discussions, and used them to put together a configuration. The result behaved weirdly. I had the apparently obvious rewrite rule of:
<Location /blackbox>
    ProxyHTMLEnable On
    [...]
    ProxyHTMLURLMap / /blackbox/
    [...]
</Location>
As I understood it, this was supposed to transform a Blackbox HTML link of '/config' to '/blackbox/config', by mapping '/' to '/blackbox/'. Instead, what I got out was '/blackbox/blackbox/config'.
I flailed around with various alternatives and got any of the three following variants to work:

# Match only what's supposed to be there
ProxyHTMLURLMap "^/(metrics|config|probe|logs)(.*)" "/blackbox/$1$2" [R]

# Terrible hack, convert to relative URLs
ProxyHTMLURLMap / ./

# This works but I don't understand why
# and it has to be in *this order*
ProxyHTMLURLMap /blackbox /blackbox
ProxyHTMLURLMap / /blackbox/
Eventually I discovered the magic LogLevel setting 'proxy_html:trace3', which gave me a report of what was theoretically happening in the HTML rewriting process. What the logs said was that HTML rewriting appeared to be happening twice, which at least explained why I had wound up with a doubled /blackbox in the URL and why the last variant worked around it (on the second pass, mod_proxy_html matched the do-nothing rule and stopped).
I read the official documentation again to see if I could figure out why the module was doing two passes, but it didn't have any enlightenment for me. Then, suddenly, I had a terrible suspicion. You see, I left out a little bit of my Apache configuration, a bit that I had just blindly copied from Internet sources (possibly here, but there are lots of mentions of it):

SetOutputFilter proxy-html
It turns out that in Apache 2.4, you don't want to set an output filter for mod_proxy_html. Just setting 'ProxyHTMLEnable On' is enough to get the module rewriting your HTML (presumably it internally hooks into Apache's filtering system). If you do go ahead and set mod_proxy_html as an output filter as well, you get the obvious thing happening; it acts twice, and then like me you will probably be fairly confused. Removing this setting made everything work properly.
I know that superstition is a dangerous but attractive thing, and I still fell victim to blindly copying things from the Internet rather than slowing down to try to build a configuration from the documentation itself. Next time, perhaps I'll remember to be patient.
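Putting the pieces together, a minimal sketch of the kind of configuration that ends up working might look like this (the path prefix and URL map are from this entry; the exporter's default port of 9115 and the ProxyPass lines are my assumptions for illustration):

```apache
# Reverse proxy the Blackbox exporter under /blackbox/ and
# rewrite its absolute URLs on the way back out.
# Note: no SetOutputFilter line. In Apache 2.4, ProxyHTMLEnable
# is sufficient, and adding the filter makes rewriting run twice.
<Location "/blackbox/">
    ProxyPass "http://localhost:9115/"
    ProxyPassReverse "http://localhost:9115/"

    ProxyHTMLEnable On
    ProxyHTMLURLMap / /blackbox/

    # Avoid compressed back-end responses that confuse things
    # (see the sidebar below).
    RequestHeader unset Accept-Encoding
</Location>
```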
Sidebar: The one thing that still didn't work
Fetching a URL that returns a fairly large text/plain result works in curl but fails in browsers, with various errors about corrupted content or an inability to uncompress things. Various Internet searches suggest that perhaps this is a problem with the back-end web server returning compressed content and Apache being unhappy. I followed the suggested approach of stopping that with:
RequestHeader unset Accept-Encoding
This seems to have worked. For our usage I don't care if all of the content here is served without compression; it's not very big and I don't actually expect us to use the Blackbox probe exporter's web thing very often.
My unusual use for Firefox's Private Browsing mode
I've written before about how I use my browsing history to keep track of what I've read, although that only partly works in this era of websites making visited links look the same as unvisited ones. However, there is a little problem with using the 'visited' status to keep track of what I've read, and that is that visiting a web page doesn't necessarily correspond with actually reading it. Specifically, sometimes I run across a potentially interesting link but I'm not sure I have the time to read it right now (or if it's actually going to be interesting). If I follow the link to check it out, it's now a visited link that's saved in my browser history, and if I close the window I've lost track of the fact that I haven't actually read the page.
For years I used various hacky workarounds to take quick peeks at pages to see if I wanted to actually read them. Recently it occurred to me that I could use Firefox's Private Browsing mode to conveniently deal with this issue. If I'm reading a page or an aggregator site in my main Firefox instance and I'm not sure about a link, I can use the context menu's 'Open Link in New Private Window' option and there I go; I can check it out without it being marked as 'visited' and thus 'read' in my history. If I decide that I want to read the page for real, I can re-open the link normally.
(Reading the whole thing in Private Browsing mode is somewhat dangerous, because any links that I follow will themselves be in Private Browsing mode and thus my 'I have read these' history status for them will be lost when I close things down. So I usually switch once I've read enough to know that I want to read this for real.)
In retrospect it's a bit amusing that it took me this long to wake up to this use of Private Browsing. Until now I've spent years ignoring the feature on the grounds that I had other browsers that I used for actual private browsing (ones with far less risk of an information leak in either direction). Even though I was carefully doing things to not record certain web page visits in my main browser's history, using Private Browsing for this never occurred to me, perhaps because I wasn't doing this because I wanted private browsing.
(I think I woke up partly as part of walking away from Chrome, which I sometimes used for such quick peeks.)
One of the gesture actions that Foxy Gestures can do is open links in new private browsing windows, although I don't have a gesture assigned to that action. However I'm not sure I use this enough to make coming up with (and memorizing) a new gesture worth it. And on that note, I wish that you could attach meanings to 'middle-clicking' links with modifier keys held down, so I could make a special variant of middle-clicking do 'open in new private browser window' instead of 'open in new regular browser window'.
(Firefox turns out to have a number of behaviors here, but I'm not sure any of them are clearly documented.)
An irritating limitation or two of addons in Firefox Quantum
It's reasonably well known that Firefox addons in Firefox Quantum (ie, WebExtensions addons) are more limited than pre-Quantum addons were. One of these limitations is the places where addons work at all. Some addons are not deeply affected by these limitations, but ones that deeply modify Firefox's UI, such as a gestures addon or an addon that adds a Vim style interface (via), are strongly affected, because the limitations restrict where they can be used and thus where the UI works as you expect. In other words, where gestures work for me.
One limitation is explained directly in Foxy Gesture's Github README, so I'll just quote it:
More importantly, the mouse gestures will not work until the document body of the website you are visiting has parsed. In other words, the DOM must be at least partially parsed but content does not have to be loaded. [...] This is an inherent limitation of web extensions at the moment, because there is no API to get mouse events from browser chrome. In practice this is rarely an issue as mouse events are typically available very quickly.
This is almost always true in practice, because Firefox Quantum loads web pages very fast. Well, it loads them very fast when their web site is responding. When their web site isn't really responding, when you're sitting there with a blank page as Firefox tries to load things and you decide that you're going to give up and close the tab, then you run into this issue. I close most tabs through a mouse gesture, or at least I would like to, but when a new tab hangs during the initial page load (or sometimes during subsequent ones), my mouse gesture doesn't work and I have to turn to Ctrl-W on the keyboard or clicking the appropriate tab control.
The other big limitation of addons is that they can't act on pages that Firefox considers sensitive pages, especially including internal chrome pages. Unfortunately it turns out that a number of pages that you wouldn't expect are considered chrome pages, and these are pages that you may use all the time. Specifically, pages in Firefox's Reader mode are all considered chrome pages and off limits to addons, as are all pages that are showing PDFs using Firefox's internal PDF viewer. The Reader mode limitation is especially irritating and makes Reader mode quite a bit less attractive to me; if you're going to break my UI and not always work, I wonder what I'm really getting out of it.
(With both Reader mode and PDFs, there's no indication in the displayed URL itself that you're in some special internal Firefox chrome page context, since they display the normal URL. This is especially striking and irritating in Reader mode, at least to me.)
Two more important cases of chrome pages are Firefox's network errors page (what you get if you leave one of those slow-loading web pages to actually time out) and about:blank, the completely blank page that shows up under some circumstances. For instance, if you open a URL in a new window or tab except that Firefox decides that the URL should be downloaded instead of shown, you're left with an about:blank page where your gestures don't work.
(A small but irritating additional case is 'view source', which is of course another internal chrome page these days.)
I'm sure that Firefox has good internal reasons for preventing addons from injecting things into these pages, but the resulting UI glitches (where gestures suddenly stop working on some page and I have to remember that oh yeah, it's a PDF or whatever) are reasonably painful. I really wish there was some way to tell Firefox that no, really, I actually do trust Foxy Gestures that much.
(The gestures that I would like to use on all pages include general window functions like 'close tab' and 'iconify'; on PDFs, I would also like things like 'increase/decrease font size'. None of these are specific to HTML content, and the window manipulation ones are basically global.)
My Firefox addons as of Firefox '64' (the current development version)
As I write this, Firefox 62 is the current released version of Firefox and Firefox 63 is the beta version, but my primary Firefox is still a custom hacked version that I build from the development tree, so it most closely corresponds to what will be released as Firefox 64 in a couple of months. At this point I feel that I'm far enough into my Firefox Quantum era that my set of addons has more or less stabilized, especially what I consider my core set, so it's time to write them down (if only for my future reference).
On the whole I've been pleased by how well Firefox Quantum handles addons, and in particular it doesn't seem to have addon memory leaks. As I mentioned in my earlier entry on some addons I was experimenting with, this has made me much more willing to give potentially interesting addons a go. It's also made me much happier with Firefox overall, since I no longer feel like I need to restart it on a regular basis; I'm back to where I can just leave it running and running for as long as my machine is up.
My core addons, things that I consider more or less essential for my experience of Firefox, are:
- Foxy Gestures (Github) is the best gestures extension I've found for Quantum. It's better than the usually recommended Gesturefy for reasons that I covered in my early entry on Quantum addons. Gestures have become a pretty crucial part of my Firefox experience and I really notice the places in Quantum where they don't work, which is more places than I expected. But that's another entry.
(I use some custom gestures in my Foxy Gestures configuration that go with some custom hacks to my Firefox to add support for things like 'view page in no style' as part of the WebExtensions API.)
- uBlock Origin (Github) is my standard 'block ads and other bad stuff' extension, and also what I use for selectively removing annoying elements of pages (like floating headers and so on).
- uMatrix (Github) is my primary tool for controlling Javascript and cookies. uBlock Origin can block Javascript too, but it doesn't block cookies as far as I know, and in any case uMatrix gives me finer control.
- Cookie AutoDelete deals with the small issue that uMatrix doesn't actually block cookies, it just doesn't hand them back to websites. This is probably what you want in uMatrix's model of the world (see my entry on this for more details), but I don't want a clutter of cookies lingering around, so I use Cookie AutoDelete to get rid of them under controlled circumstances.
(However unaesthetic it is, I think that the combination of uMatrix and Cookie AutoDelete is necessary to deal with cookies on the modern web. You need something to patrol around and delete any cookies that people have somehow managed to sneak in.)
- My Google Search URL Fixup for reasons covered in my writeup of creating it.
Additional fairly important addons that would change my experience if they weren't there:
- Textern (Github) gives me the ability to edit textareas in a real editor. I use it all the time when writing comments here on Wandering Thoughts, but not as much as I expected on other places, partly because increasingly people want you to write things with all of the text of a paragraph run together in one line. Textern only works on Unix (or maybe just Linux) and setting it up takes a bit of work because of how it starts an editor (see this entry), but it works pretty smoothly for me.
(I've changed its key sequence to Ctrl+Alt+E, because the original Ctrl+Shift+E no longer works great on Linux Firefox; see issue #30. Textern itself shifted to Ctrl+Shift+D in recent versions.)
- Open in Browser (Github) allows me to (sometimes) override Firefox's decision to save files so that I see them in the browser instead. I mostly use this for some PDFs and some text files. Sadly its UI isn't as good and smooth as it was in pre-Quantum Firefox.
- Cookie Quick Manager (Github) allows me to inspect, manipulate, save, and reload cookies and sets of cookies. This is kind of handy every so often, especially saving and reloading cookies.
The remaining addons I use I consider useful or nice, but not all that important on the large scale of things. I could lose them without entirely noticing the difference in my Firefox:
- Certainly Something (Github) is my TLS certificate viewer of choice. I occasionally want to know the information it shows me, especially for our own sites.
- Make Medium Readable Again (also, Github) handles a bunch of annoyances for Medium-hosted stuff. Some of these just automate things that I could zap by hand with uBlock Origin and some of these only apply when I turn Javascript on for Medium.
- Link Cleaner cleans the utm_ fragments and so on out of URLs when I follow links. It's okay; I mostly don't notice it and I appreciate the cleaner URLs. (It also prevents some degree of information leakage to the target website about where I found their link, but I don't really care about that. I'm still sending Referer headers, after all.)
- HTTPS Everywhere, basically just because. But in a web world where more and more sites are moving to using things like HSTS, I'm not sure HTTPS Everywhere is all that important any more.
I'm no longer using any sort of addon to stop Youtube and other media from autoplaying. These days, that's mostly covered by Firefox's native media autoplay settings, although I have to add a hack to my personal build so that isolated video documents with no audio don't get to autoplay on their own. I'm happy with this shift for various reasons.
Twelve addons is a significant increase on what I've historically used, but everything seems to go okay so far. At the moment I'm not tempted to add any more additional addons, although some people would throw in major ones like Greasemonkey or Stylus. I've used Stylish in the past, but these days uBlock Origin's element zapping covers basically everything I care about there.
(More commentary on these addons and alternatives is in this early entry on Quantum addons and then this entry on more addons that I was experimenting with. All of those then-experimental addons have been promoted to ones that I'm keeping, especially Certainly Something.)
PS: These days I keep copies of the Github or other repos of all of the important addons that I use for various reasons, including as a guard against what could euphemistically be called 'supply chain attacks' on addons.
Walking away from Google Chrome
In the recently released Chrome 69, Google made a significant change to Chrome's behavior; logging into a Google site automatically logs you into Chrome itself under that identity, leaving you very close to having Chrome sync your local Chrome data to Google whether or not you really want it to. A number of people are very unhappy about this; see, for example, Chrome is a Google Service that happens to include a Browser Engine (via) and Why I’m done with Chrome (via).
So I'm walking away from Chrome: the browsing I used to do in incognito Chrome windows is moving to a separate Firefox setup, with new scripts to make invoking it as convenient as my existing incognito Chrome script. My early experience is positive, and in fact the experience is clearly better than Chrome in two respects. First, I don't have my Chrome cut and paste irritation. Second, Firefox will offer to save website passwords for me in this profile; incognito Chrome quite reasonably never saves passwords on its own, so I always had to set them up by logging in once in regular Chrome.
(If I was really determined about this shift, I would change my existing scripts to run this Firefox setup instead of incognito Chrome. I'm not quite there yet.)
I'm under no illusions that Google will even notice my departure from the Chrome fold, especially since I use Chrome on Linux (which is already a tiny OS for Chrome usage). But it makes me happier to walk away from Chrome here, and I even seem to be improving my browsing life in various small ways.
(This elaborates on some tweets of mine.)
Sidebar: How I want to set up Firefox to discard cookies and history
(Perhaps Firefox's private browsing would remember passwords if I set a master password, because that option is not greyed out, but in practice I don't do that for reasons beyond the scope of this entry.)