Wandering Thoughts

2018-04-05

Switching over to Firefox Quantum was relatively painless

As you might have guessed from my very weak excuse in a recent entry, I've been increasingly tempted to switch my primary browser over to Firefox Quantum (from Firefox 56). Not because I knew I had to do it sometime (although that was true), but because I genuinely wanted to be running Quantum; the more I used it in various secondary environments, the more I was okay with it, and I have a tropism towards the new and shiny. Today I gave in to that temptation and switched over both at work and at home. The short summary is that it went reasonably painlessly.

There are things that aren't as good as Firefox 56; the most glaring is that there are any number of annoying places where gestures don't work any more, such as a new blank tab or the error page you get when a network connection times out (I'm used to gesturing up-down to cause a refresh in order to retry the connection). I'm also having the usual issues when Firefox's GUI moves controls that I'm extremely used to (I expect 'refresh' to be at the right side of the URL box, for example). But these are reasonably minor and tolerable (and I'll probably get used to the UI switch in time).

(Perhaps someday Mozilla will figure out a way of letting people very selectively grant more permissions to certain addons, so we can have gestures in more places.)

I don't know if I'm imagining things or not, but Firefox Quantum at least feels faster and more responsive than Firefox 56 did. Of course this is what Mozilla said people would experience, but I browse in an atypical environment that isn't bogged down by all of that JavaScript so I wasn't sure how much of the Quantum speedups would apply to me. Since some of the Firefox improvements are in things like processing CSS, I'm willing to believe that I'm seeing something real here.

(There's also that Firefox Quantum is inherently multiprocess now, whereas I was running Firefox 56 in single-process mode because not all of my addons were e10s compatible.)

While I'm glad that I finally made the switch, I'm also glad that I took so long to make it. Getting to the point where this switch was relatively painless took a bunch of experimentation, testing, research, and a certain amount of hacking. I've also benefited from all of the work that other people have done to develop and improve new Firefox Quantum addons, and the improvements in the WebExtensions API itself that have happened since Firefox 57.

(I've been building Firefox Nightly and trying out things in it for months now, and more recently I've switched various other Firefox instances and used them, starting when I accidentally let my Fedora laptop force-upgrade Firefox despite me nominally having held the Fedora package at Firefox 56.)

PS: I'll admit that I knew I was going to have to do this before too long, as uBlock Origin will be dropping support for its legacy version in early May.

PPS: The one difference from my set of Quantum addons is that I'm experimenting with just turning off media.autoplay.enabled and not installing a 'disable autoplay on Youtube' addon. This seems to work so far.
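For reference, this preference can be flipped in about:config or pinned through a user.js file in the profile directory; a minimal sketch, assuming the current boolean form of the preference:

  // user.js sketch (lives in the Firefox profile directory; the path varies):
  // turn off media autoplay globally, not just on YouTube.
  user_pref("media.autoplay.enabled", false);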

FirefoxQuantumSwitch written at 01:16:41

2018-04-02

I've retired my filtering HTTP proxy

I've been using a filtering HTTP proxy for a very long time; the last time I looked suggested that I'd been using one for almost as long as they've existed. A couple of years ago, I wrote that it was time for me to upgrade the proxy I was using, because it had last been updated in 1998 and was stuck with only HTTP/1.0 and IPv4. In my usual way of not dealing with pending issues as long as nothing explodes, I did nothing from that mid-2016 entry until very recently. When I did start to think about it this January, I decided to take a different course entirely, and I've now retired my filtering HTTP proxy and rely purely on in-browser protections.

Two things pushed me into realizing that this was the only sensible position. The first was realizing that any useful filter on the modern Internet was (and is) going to require frequent updates to filter rules. You can do this with a filtering proxy, but you need to find one that uses trustworthy external filtering rules, imports them regularly, and so on. This can be done, in theory, but I don't think anyone is doing it in practice as a canned thing today, and I believe that all of the good filtering rulesets are designed for in-browser usage these days (for the obvious reason that this is by far the biggest pool of users).

The second is the rapid increase in HTTPS. Back in mid 2016 I saw plenty of HTTP usage living on for a great deal of time to come, but that seems like a much less certain bet today for various reasons. HTTPS usage is certainly way up and there's no filtering HTTP proxy in existence that I would even think about allowing to do HTTPS interception. Browsers have a hard enough time doing HTTPS securely, and they have far more people working to make everything work well and safely than proxy authors ever will. If I want to do filtering for HTTPS traffic, and I do, I have to rely on my browser addons to do it. As more and more sites move to HTTPS, I'm going to have to rely on my browser addons more and more for protection.

In summary, any proxy I used would clearly only be a secondary backup for the real protection of my addons (since it wouldn't protect me from HTTPS and probably wouldn't have rules as good as my addons do). Once I realized all of this, I decided to simplify my life by not using any sort of filtering HTTP proxy, and back at the end of January I turned my old faithful Junkbuster daemon off and de-configured it from my primary Firefox. I don't think I've noticed any particular difference in my browsing, which is probably not a surprise since its filtering rules were probably last updated 20 years ago, like the rest of my Junkbuster install.

(It was throwing away HTTP cookies, but I have other solutions for that now.)

More broadly, it seems clear that the future and even present of filtering is inside the browser, primarily (for now) in browser addons. Filtering proxies are yesterday's technology, used before browsers could do this sort of thing natively. Browser addons are where all the development effort is going, which is why filtering proxy software sees less and less frequent updates (Privoxy was last updated in 2016, for example).

I expected to feel a little sad about this simply because I've run a filtering proxy for so long, but if anything I wound up feeling relieved. Junkbuster's various limitations are things I inflicted on myself voluntarily in exchange for its benefits, but I'm unsentimental about being able to do better now. Still, thanks, little program; I suspect you vastly outlived what your authors expected of you.

(I guess I am just a tiny bit sentimental about it.)

NoMoreProxy written at 00:32:58

2018-03-30

My current set of Firefox Quantum (57+) addons

It turns out that I use way more instances of Firefox than I really expected, between my work laptop (in Linux and Windows), the ones I maintain for Twitter (on two machines), test builds to track Firefox development, and so on. Although I'm still using Firefox 56 as my primary Firefox, I've upgraded all of these other instances to Firefox Quantum. This has caused me to converge on a more or less final set of addons that I'm going to use when I switch my primary Firefox over, which is getting increasingly tempting for various reasons (but not for the last reason; switching from NoScript to uMatrix has basically eliminated my memory issues).

(My current excuse for not switching over is that I'm waiting for this bug to be fixed.)

Partly because I keep setting up Firefox Quantum instances and I want a central reference, here's my list of current addons along with some notes on my experience with alternatives and how I configure them. I have more extensive notes on some of these addons in my previous entry on likely Quantum addons.

  • uBlock Origin is my standard block-bad-stuff extension. I turn on advanced mode, disable WebRTC, and enable uBlock's 'Annoyances' filter list.

    (I don't use the advanced mode so far, but turning it on makes it available and gives me easily available information on what the page uses and what's blocked.)

  • uMatrix is what I now use to block JavaScript and cookies (and other bad stuff). I disable showing the number of blocked resources on the icon because it tends to be too noisy (and uBlock Origin basically does that too) and I turn off spoofing <noscript> tags, spoofing the HTTP referer for third party requests, and regularly clearing the browser cache.

    (Possibly I should allow uMatrix to spoof the HTTP referer, but I have complicated feelings about this in general because of how the HTTP referer is useful to site operators.)

  • Foxy Gestures is the best replacement for FireGestures that I've found. Mozilla's 'find a replacement for your old addon' stuff recommends Gesturefy, but for me it's an inferior replacement; I don't like parts of its UI, it doesn't appear to have export and import of your changes in gesture bindings, and it doesn't allow for user custom gestures, which matters because I hack some new WebExtensions APIs into my personal Firefox build in order to add gestures that are important to me.

  • Disable Autoplay for Youtube is the best addon I've found for this purpose; it's very close to how FlashStopper works. The one flaw I've found with it, which I suspect is generic to how WebExtensions work, is that if I restart the browser with one or more YouTube windows active, one of them will start to play for a bit as the browser starts, before this addon activates and stops it. I'm going to experiment with setting Firefox's media.autoplay.enabled preference to false to see if that's a tolerable solution that doesn't stop too many things or have other undesirable side effects; it's possible that in the end this preference will be all that I need and I won't need (or want) an addon for this.

    (I can imagine some people wanting to stop autoplay only on YT, but this isn't my situation; I don't want video to autoplay anywhere. It's just that YT is one of the few places that I have configured to play video at all.)

    I configure the addon to also stop autoplay of Youtube playlists; basically I never want Youtube to autoplay things. Sometimes the video or piece of music that I want to play on YT is part of a playlist, which makes it very irritating when YT autoplays the next one on me. I didn't come to YT to listen to the playlist, I came for one thing.

  • Cookie AutoDelete is my current replacement for Self-Destructing Cookies (which I adopted in my primary Firefox due to switching to uMatrix). I enable autocleaning, turn off both showing the number of cookies for the domain and the notifications, and turn on cleaning localStorage.

    (I wish Cookie AutoDelete had something similar to SDC's 'recently self-destructed cookies' information because it's reassuring to know, but genuine notifications are too obtrusive.)

  • Cookie Quick Manager is a great addon for checking in on what cookies the browser is hanging on to and to peer inside them. I installed it basically to keep an eye on Cookie AutoDelete, but I feel it's handy in general. Because of how my window manager is set up, I configure it to start in a tab.

    (I've looked at Cookie Manager but I didn't like its interface as much.)

  • Textern is my replacement for It's All Text and I like it. In my primary Firefox, I'll be sideloading a hacked version that adds a context menu item for it.

  • Open in Browser is a traditional extension that I use because some websites try to have you download things that I can perfectly well view in the browser instead (for example, some bug trackers want you to download attachments to bug reports even when they're things like patches or logs).

  • My Google Search URL Fixup addon, for the obvious reason. It turns out that Don't track me Google (written by the author of Open in Browser) will also do this (and for more Google search domains), but it's a lot more heavyweight so I'm sticking with my own addon.

  • HTTPS Everywhere, basically just because.

(Some of these addons work best on the most recent version of Firefox that you can get, because they use WebExtensions APIs and the like that weren't in Firefox 57. This is especially important for Foxy Gestures, due to issues with the middle mouse button on Linux in Firefox 57. Fortunately you shouldn't be running Firefox 57 anyway. I expect and hope that Firefox's WebExtensions APIs keep improving in new releases (and I have at least one bug that I should file sometime, because about:home currently doesn't work too well in my setup).)

In general there are some limitations and irritations in the new WebExtensions world but I can basically get something equivalent to my current Firefox environment, Firefox Quantum appears to have real performance improvements, and like it or not Quantum is my future. I know I don't sound too enthused here, but I kind of am. At this point I've put Firefox Quantum through a reasonable amount of use (primarily due to Twitter) and it's left me reasonably enthused about eventually switching.

I don't bother to use all of these extensions in every Firefox instance I have (and I can't sideload my hacked Textern version in anything except my own builds, since only 'developer' versions of Firefox can load unsigned addons), but this is the full set. Possibly I should use uMatrix more widely than I currently do, since it's not too annoying to set it up to allow only Twitter to use JavaScript and cookies (for example).

FirefoxQuantumAddons written at 23:28:57

2018-03-21

You probably don't want to run Firefox Nightly any more

Some people like to run Firefox Nightly for various reasons; you can like seeing what's coming, or want to help Mozilla out by testing the bleeding edge, or various other things. I myself have in the past run a Firefox compiled from the development tree (although at the moment I'm still using Firefox 56). Unfortunately and sadly I must suggest that you not do that any more, and only run Firefox Nightly if you absolutely have to (for example to test some bleeding edge web feature that's available only in Nightly).

Let's start with @doublec's tweet:

Even if it is only Mozilla's nightly browser and for a short period of time I'm a bit disturbed about the possibility of an opt-out only "send all visited hostnames to a third party US company" study.
FYI: Short Nightly Shield Study involving DNS over HTTPs (DoH)

(via Davor Cubranic)

In the mozilla.dev.platform thread, it's revealed that Mozilla is planning an opt-out DNS over HTTPS study for Firefox Nightly users that will send DoH queries for all hostnames to a server implementation at Cloudflare (with some legal protections for the privacy of this information).

This by itself is not why I think you should stop running Firefox Nightly now. Instead, the reason why comes up further down the thread, in a statement by a Mozilla person which I'm going to quote from directly:

It isn't explicit right now that using nightly means opting in to participating in studies like this, and I think the text of the download page antedates our ability to do those studies. The text of the Firefox privacy page says that prerelease products "may contain different privacy characteristics" than release, but doesn't enumerate them. [...]

Let me translate this: people using Firefox Nightly get fewer privacy protections and less respect for user choice from Mozilla than people using Firefox releases. Mozilla feels free to do things to your browsing that they wouldn't do to users of regular Firefox (well, theoretically wouldn't do), and you're implicitly consenting to all of this just by using Nightly.

That's why you shouldn't use Nightly; you shouldn't agree to this. Using Nightly now is pasting a 'kick me' sign on your back. You can hope that Mozilla will kick carefully and for worthwhile things and that it won't hurt, but Mozilla is going to kick you. They've said so explicitly.

Unfortunately, Mozilla's wording about this on the current privacy page says that these 'different privacy characteristics' apply to all pre-release versions, not just Nightly. It's not clear to me if the 'Developer Edition' is considered a pre-release version for what Mozilla can do to it, but it probably is. Your only reasonably safe option appears to be to run a release version of Firefox.

(Perhaps Mozilla will clarify that, but I'm not holding my breath for Mozilla to take their hands out of the cookie jar.)

I don't know what this means for people building Firefox from source (especially from the development tree instead of a release). I also don't know what currently happens in any version (built from source or downloaded) if you explicitly turn off SHIELD studies. Regardless of what happens now, I wouldn't count on turning off SHIELD studies working in future Nightly versions; allowing you to opt out of such things runs counter to Mozilla's apparent goal of using Nightly users as a captive pool of test dummies.

(I don't know if I believe or accept Mozilla's views that existing users of Nightly have accepted this tiny print that says that Mozilla can dump them in opt-out privacy invasive studies, but it doesn't matter. It's clear that Mozilla has this view, and it's not like I expect Mozilla to pay any attention to people like me.)

PS: I had a grumpy Twitter reaction to this news, which I stand by. Mozilla knows this is privacy intrusive and questionable, they just don't care when it's Nightly users. There are even people in the discussion thread arguing that the ends justify the means. Whatever, I don't care any more; my expectations keep getting lowered.

PPS: I guess I'll have to periodically check about:studies and the Privacy preference for SHIELD studies, just to make sure.
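If I wanted to pin that preference instead of just checking it periodically, a user.js line would do it; a minimal sketch, with the caveat that the preference name is my assumption based on the current 'allow studies' checkbox in the Privacy preferences and may change:

  // user.js sketch: pin the SHIELD studies opt-out. The preference name is an
  // assumption based on the current Privacy preferences checkbox and may change.
  user_pref("app.shield.optoutstudies.enabled", false);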

FirefoxNoNightly written at 02:18:58

2018-03-18

Why I use Facebook (a story of web development)

The reason I actually use my Facebook account is my bike club. Explaining this reason reveals something sad about the state of the web and putting together web sites, and why Facebook is so attractive. I care about this because in some quarters it's popular to be dismissive of people and groups who use Facebook despite the many problems with it, and my view is that people who have this attitude may not understand the good reasons that push people to Facebook.

(And in turn this matters because if we want to get people off Facebook, we have to understand why they're on there in the first place.)

I'd probably have a Facebook account no matter what, simply because a number of people I know use Facebook heavily and being on Facebook feels necessary to maintain connections to them. But without my bike club I probably wouldn't log on more than once every few months. What drives me to log on more often than that (at least during the biking part of the year) is that my bike club uses Facebook (in the form of a Facebook Group) as its de facto member discussion forum. Sure, our web site has a 'forum' section, but I assure you that it's a pale shadow compared to the Facebook Group. Especially during bike riding season, the Facebook Group is the place you want to look for last minute announcements about rides (whether cancellations forced by weather, route updates, or what have you).

So, why is a Facebook Group so attractive for this? The simple version of the answer is that running your own discussion forum is a pain in the rear and you don't get much for it. Facebook spends huge amounts of effort making what is in effect very good discussion forum software with a ton of advanced features, and if you use it, not only can you take advantage of all of that work, you also get account management for free (often using accounts that people already have). You don't have to sort through various discussion forum software to pick one, figure out if it can go on your web host, integrate it into your website, manage the configuration, worry about security issues, handle people who forget their account password or have it compromised, etc etc etc. Nor do you have to worry about storing photos (or even videos) that members may want to upload, or managing photo galleries, or a huge number of other features that Facebook gives you for free.

(One of those features is spam management, by the way, which is very important if your forum needs to be open to the public. The bike club's forum doesn't have to be (although it's handy if non-members can talk to us), but Facebook is also where various sections of Toronto's cycling activist community are active, and that has to be open to the public.)

If you're a computer savvy person doing things for a computer savvy audience, sure, maybe running your own discussion forum software is something you're willing to tackle. But we're talking about a bike club here. Our goal is not running a website, our goal is to go on bike rides; the website is sort of incidental to the process (although still crucial to it).

In short, the wonder is not that people turn to Facebook for discussion forums and similar things. The wonder is that not everyone does. Facebook is where people go to get together and to organize because it almost always beats the hell out of setting all of that up for yourself.

(I don't have any good ideas for how to avoid this, especially since all of the alternatives that sound plausibly attractive to organizations like my bike club would pretty much have to be central monoliths that you can also outsource the entire forum to, and it's far from clear that they'd be much better than Facebook. Running this sort of infrastructure is hard, especially once you take on the vital additional tasks of account management (which shouldn't just be delegated to Facebook) and anti-spam work.)

(I periodically mention a short version of this on Twitter, so today I felt like writing the long version up.)

FacebookWhyIUse written at 01:23:38

2018-03-11

A bad web scraper operating out of OVH IP address space

I'll start with my tweet:

I've now escalated to blocking entire OVH /16s to deal with the referer-forging web scraper that keeps hitting my techblog from OVH network space; they keep moving around too much for /24s.

I have strong views on forged HTTP referers, largely because I look at my Referer logs regularly and bogus entries destroy the usefulness of those logs. Making my logs noisy or useless is a fast and reliable way to get me to block sources from Wandering Thoughts. This particular web scraper hit a trifecta of things that annoy me about forged referers: the referers were bogus (they were for URLs that don't link to here), they were generated by a robot instead of a person, and they happened at volume.

The specific Referer URLs varied, but when I looked at them they were all for the kind of thing that might plausibly link to here; they were all real sites and often for recent blog entries (for example, one Referer URL used today was this openssl.org entry). Some of the Referers have utm_* query parameters that point to Feedburner, suggesting that they came from mining syndication feeds. This made the forged Referers more irritating, because even in small volume I couldn't dismiss them out of hand as completely implausible.

(Openssl.org is highly unlikely to link to here, but other places used as Referers were more possible.)

The target URLs here varied, but whatever software is doing this appears to be repeatedly scraping only a few pages instead of trying to spider around Wandering Thoughts. At the moment it appears to mostly be trying to scrape my recent entries, although I haven't done particularly extensive analysis. The claimed user agents vary fairly widely and cover a variety of browsers and especially of operating systems; today a single IP address claimed to be a Mac (running two different OS X versions), a Windows machine with Chrome 49, and a Linux machine (with equally implausible Chrome versions).

The specific IP addresses involved vary but they've all come from various portions of OVH network space. Initially there were few enough /24s involved in each particular OVH area that I blocked them by /24, but that stopped being enough earlier this week (when I made my tweet) and I escalated to blocking entire OVH /16s, which I will continue to do as needed. Although this web scraper operates from multiple IP addresses, they appear to add new subnets only somewhat occasionally; my initial set of /24 blocks lasted for a fair while before they started getting through with new sources. So far this web scraper has not appeared anywhere outside of OVH, and with its Referer forging behavior I would definitely notice if it did.

(I've considered trying to block only OVH requests with Referer headers in order to be a little more specific, but doing that with Apache's mod_rewrite appears likely to be annoying and it mostly wouldn't help any actual people, because their web browsers would normally send Referer headers too. If there are other legitimate web spiders operating from OVH network space, well, I suggest that they relocate.)
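For what it's worth, the simple version of that mod_rewrite approach would look something like the following untested sketch for a single example /16 (the annoyance comes from multiplying it across all of the ranges involved):

  # Untested sketch: refuse requests from one example OVH /16, but only when
  # they carry a non-empty Referer header.
  RewriteEngine On
  RewriteCond %{HTTP_REFERER} !^$
  RewriteCond %{REMOTE_ADDR} ^151\.80\.
  RewriteRule ^ - [F]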

I haven't even considered sending any report about this to OVH. Among many other issues, I doubt OVH would consider this a reason to terminate a paying customer (or to pressure a customer to terminate a sub-customer). This web scraper does not appear to be attacking me, merely sending web requests that I happen not to like.

(By 'today' I mean Saturday, which is logical today for me as I write this even if the clock has rolled past midnight.)

Sidebar: Source count information

Today saw 159 requests from 31 different IP addresses spread across 18 different /24s (and 10 different /16s). The most prolific IPs were the following:

 19 151.80.230.8
 15 151.80.109.121
 12 151.80.230.52
 10 94.23.60.110
  9 151.80.230.23

None of these seem to be on any prominent DNS blocklists (not that I really know what's a prominent DNS blocklist any more, but they're certainly not on the SBL, unlike some people who keep trying).

OVHBadWebScraper written at 01:49:40

2018-03-07

Some things I mean when I talk about 'forged HTTP referers'

One of the most reliable and often the fastest ways to get me to block people from Wandering Thoughts is to do something that causes my logs to become noisy or useless. One of those things is persistently making requests with inaccurate Referer headers, because I look at my Referer logs on a regular basis. When I talk about this, I'll often use the term 'forged' here, as in 'forged referers' or 'referer-forging web spider'.

(I've been grumpy about this for a long time.)

I have casually used the term 'inaccurate' up there, as well as the stronger term 'forged'. But given that the Referer header is informational, explicitly comes with no guarantees, and is fully under the control of the client, what does that really mean? As I use it, I tend to have one of three different meanings in mind.

First, let's say what an accurate referer header is: it's when the referer header value is an honest and accurate representation of what happened. Namely, a human being was on the URL in the Referer header and clicked on a link that sent them to my page, or on the site if you only put the site in the Referer. A blank Referer header is always acceptable, as are at least some Referer headers that aren't URLs if they honestly represent what a human did to wind up on my page.

An inaccurate Referer in the broad sense is any Referer that isn't accurate. There are at least two ways for it to be inaccurate (even if it is a human action). The lesser inaccuracy is if the source URL contains a link to my page, but it doesn't actually represent how the human wound up on my page; it's just a (random) plausible value. Such referers are inaccurate now but could be accurate in other circumstances. The greater inaccuracy is if the source URL doesn't even link to my page, so it would never be possible for the Referer to be accurate. Completely bogus referers are usually more irritating than semi-bogus referers, although this is partly a taste issue (both are irritating, honestly, but one shows you're at least trying).

(I'd like better terms for these two sorts of referers; 'bogus' and 'plausible' are the best I've come up with so far.)

As noted, I will generally call both of these cases 'forged', not just 'inaccurate'. Due to my view that Referer is a human-only header, I use 'forged' for basically all referers that are provided by web spiders and the like. I can imagine circumstances where I'd call Referer headers sent by a robot merely 'inaccurate', but they'd be pretty far out and I don't think I've ever run into them.

The third case and the strongest sense of 'forged' for me is when the Referer header has clearly been selected because the web spider is up to no good. One form of this is Referer spamming (which seems to have died out these days, thankfully). Another form is when whatever is behind the requests looks like it's deliberately picking Referer values to try to evade any security precautions that might be there. A third form is when your software uses the Referer field to advertise itself in some way, instead of leaving this to the User-Agent field (which has happened, although I don't think I've seen it recently).

(Checking for appropriate Referer values is a weak security precaution that's easy to bypass and not necessarily a good idea, but like most weak security precautions it does have the virtue of making it pretty clear when people are deliberately trying to get around it.)

PS: Similar things apply when I talk about 'forged' other fields, especially User-Agent. Roughly speaking, I'll definitely call your U-A forged if you aren't human and it misleads about what you are. If you're a real human operating a real browser, I consider it your right to use whatever U-A you want to, including completely misleading ones. Since I'm human and inconsistent, I may still call it 'forged' in casual conversation for convenience.

ForgedRefererMyMeanings written at 23:30:55

2018-02-20

How switching to uMatrix for JavaScript blocking has improved my web experience

I'm a long-term advocate of not running JavaScript. Over the years I've used a number of Firefox (and also Chrome) addons to do this, starting with a relatively simple one and then upgrading to NoScript. Recently I switched over to uMatrix for various reasons, which has generally been going well. When I switched, I didn't expect my experience of the modern web to really change, but to my surprise uMatrix is slowly enticing me into making it a clearly nicer experience. What's going on is that uMatrix's more fine-grained permissions model turns out to be a better fit for how JavaScript exists on the modern web.

NoScript and other similar addons have a simple global site permissions model; either you block JavaScript from site X or you allow JavaScript from site X. There are two problems with this model on the modern web. The first problem is that in practice a great deal of JavaScript is loaded from a few highly used websites, for example Cloudflare's CDN network. If you permit JavaScript from cdnjs.cloudflare.com to run on any site you visit, you could be loading almost anything on any specific site (really).

The second problem is that there are a number of big companies that extend their tendrils all over the web, while at the same time being places that you might want to visit directly (where they may either work better with their own JavaScript or outright require it). Globally permitting JavaScript from Twitter, Google, and so on on all sites opens me up to a lot of things that make me nervous, so in NoScript I never gave them that permission.

uMatrix's scoped permissions defang both versions of this pervasiveness. I can restrict Twitter's JavaScript to only working when I'm visiting Twitter itself, and I can allow JavaScript from Cloudflare's CDN only on sites where I want the effects it creates and I trust the site not to do abusive things (eg, where it's used as part of formatting math equations). Because I can contain the danger it would otherwise represent, uMatrix has been getting me to selectively enable JavaScript in a slowly growing number of places where it does improve my web browsing experience.

(I could more or less do this before in NoScript as a one-off temporary thing, but generally it wasn't quite worth it and I always had lingering concerns. uMatrix lets me set it once and leave it, and then I get to enjoy it afterward.)
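To make the scoping concrete, uMatrix rules are plain text lines of the form 'scope destination type action', so rules along these lines capture 'Twitter's JavaScript only on Twitter', 'Cloudflare's CDN only on one particular site', and the YouTube cookie scoping discussed in the sidebar below (example.org is a placeholder, and this is an illustration of the format rather than my actual ruleset):

  twitter.com twitter.com script allow
  example.org cdnjs.cloudflare.com script allow
  youtube.com youtube.com cookie allow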

PS: I'm not actually allowing JavaScript on Twitter, at least not on a permanent basis, but there are some other places that are both JavaScript-heavy and a little bit too pervasive for my tastes where I'm considering it, especially Medium.

PPS: There are some setting differences that also turn out to matter, to my surprise. If you use NoScript in a default-block setup and almost always use temporary permissions, I suggest that you tell NoScript to only reload the current tab on permission changes so that the effects of temporarily allowing something are much more contained. If I had realized how much of a difference it makes, especially with NoScript's global permissions, I would have done it years ago.

Sidebar: Cookie handling also benefits from scoped permissions

I hate Youtube's behavior of auto-playing the next video when I've watched one, because generally I'm only on YouTube to watch exactly one video. You can turn this off, but to make it stick you need to accept cookies from YouTube, which will then quietly follow you around the web anywhere someone embeds some YouTube content. uMatrix's scoped permissions let me restrict when YouTube can see those cookies to only when I'm actually on YouTube looking at a video. I can (and do) do similar things with cookies from Google Search.

(I also have Self-Destructing Cookies set to throw out YouTube's cookies every time I close down Firefox, somewhat limiting the damage of any tracking cookies. This means I have to reset the 'no auto-play' cookie every time I restart Firefox, but I only do that infrequently.)

UMatrixImprovesWeb written at 22:49:31

2018-02-12

Writing my first addon for Firefox wasn't too hard or annoying

I prefer the non-JavaScript version of Google search results, but they have an annoyance, which is that Google rewrites all the URLs to indirect through themselves (with tracking numbers, but that's a lesser annoyance for me than the loss of knowing what I've already read). The Firefox 56 version of NoScript magically fixed this up, but I've now switched to uMatrix.

(The pre-WebExtensions NoScript does a lot of magic, which has good and bad aspects. uMatrix is a lot more focused and regular.)

To cut a long story short, today I wrote a Firefox WebExtensions-based addon to fix this, which I have imaginatively called gsearch-urlfix. It's a pretty straightforward fix because Google embeds the original URL in their transformed URL as a query parameter, so you just pull it out and rewrite the link to it. Sane people would probably do this as a GreaseMonkey user script, but for various reasons I decided it was simpler and more interesting to write an addon.

The whole process was reasonably easy. Mozilla has very good documentation that will walk you through most of the mechanics of an addon, and it's easy enough to test-load your addon into a suitable Firefox version to work on it. The actual JavaScript to rewrite hrefs was up to me, which made me a bit worried about needing to play around with regular expressions and string manipulation and parsing URLs, but it turns out that modern Firefox-based JavaScript has functions and objects that do all of the hard work; all I had to do was glue them together correctly. I had to do a little bit of debugging because of things that I got wrong, but console.log() worked fine to give me my old standby of print based debugging.

(Credit also goes to various sources of online information, which pointed me to important portions of the MDN JavaScript and DOM documentation, eg 1, 2, 3, and 4. Now you can see why I say that I just had to connect things. Mozilla also provides a number of sample extensions, so I looked at their emoji substitution example to see what I needed to do to transform the web page when it had loaded, which turned out to be a pleasantly simple process.)
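To give a rough idea of what that gluing together looks like, here is a simplified sketch of the approach; it's not the actual gsearch-urlfix code, and the query parameter names and the link selector are my assumptions about Google's current link format:

  // Content script sketch: rewrite Google's '/url?...' indirection links back
  // to their real targets. The parameter names ('q', 'url') are assumptions.
  for (const link of document.querySelectorAll("a[href*='/url?']")) {
    let u;
    try {
      u = new URL(link.href);
    } catch (e) {
      continue;              // skip anything that doesn't parse as a URL
    }
    const target = u.searchParams.get("q") || u.searchParams.get("url");
    if (target && target.startsWith("http")) {
      link.href = target;    // point the link straight at the original URL
    }
  }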

There are a couple of things about your addon manifest.json that the MDN site won't tell you directly. The first is that if you want to make your addon into an unsigned XPI and load it permanently into your developer or nightly Firefox, it must have an id attribute (see the example here and the discussion here). The second is that the matches globs for what websites your content scripts are loaded into cannot be used to match something like 'any website with .google. in it'; they're very limited. I assume that this restriction is there because matches feeds into the permissions dialog for your addon.

(It's possible to have Firefox further filter what sites your content scripts will load into, see here, but the design of the whole system insures that your content scripts can only be loaded into fewer websites than the user approved permissions for, not more. If you need to do fancy matching, or even just *.google.*, you'll probably have to ask for permission for all websites.)
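Put together, the relevant pieces of the manifest come out roughly like this sketch (the id and the script file name are placeholders, not the real addon's values):

  {
    "manifest_version": 2,
    "name": "gsearch-urlfix sketch",
    "version": "0.1",
    "applications": {
      "gecko": { "id": "gsearch-urlfix@example.org" }
    },
    "content_scripts": [
      {
        "matches": ["*://www.google.com/*", "*://www.google.ca/*"],
        "js": ["fixup.js"]
      }
    ]
  }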

This limitation is part of the reason why gsearch-urlfix currently only acts on www.google.com and www.google.ca; those are the two that I need and going further is just annoying enough that I haven't bothered (partly because I want to actually limit it to Google's sites, not have it trigger on anyone who happens to have 'google' as part of their website name). Pull requests are welcome to improve this.

I initially wasn't planning on submitting this to AMO to be officially signed so it can be installed in normal Firefox versions; among other things, doing so feels scary and probably calls for a bunch of cleanup work and polish. I may change my mind about that, if only so I can load it into standard OS-supplied versions of Firefox that I wind up using. Also, I confess that it would be nice to not have my own Firefox nag at me about the addon being unsigned, and the documentation makes the process sound not too annoying.

(This is not an addon that I imagine there's much of an audience for, but perhaps I'm wrong.)

FirefoxMyFirstAddon written at 23:59:21

2018-02-04

More notes on using uMatrix in Firefox 56 (in place of NoScript)

I wrote my first set of notes very early on in my usage of uMatrix, before things had really settled down and I mostly knew what I was doing. Since then I've been refining my configuration and learning more about what works and how, and I've accumulated more stuff I want to record.

The first thing, the big thing, is that changing from NoScript to uMatrix definitely seems to have mostly solved my Firefox memory issues. My Firefox still slowly grows its memory usage over time, even with a stable set of windows, but it's doing so far less than it used to and as a result it's now basically what I consider stable. I certainly no longer have to restart it once every day or two. By itself this is a huge practical win and I'm far less low-key irritated with my Firefox setup.

(I'm not going to say that this memory growth was NoScript's fault, because it may well have been caused by some interaction between NS and my other extensions. It's also possible that my cookie blocker had something to do with it, since uMatrix also replaced it.)

It turns out that one hazard of using a browser for a long time is that you can actually forget how you have it configured. I had initial problems getting my uMatrix setup to accept cookies from some new sites I wanted to do this for (such as bugzilla.kernel.org). It turned out that I used to have Firefox's privacy settings set to refuse all cookies except ones from sites I'd specifically allowed. Naturally uMatrix itself letting cookies through wasn't doing anything when I'd told Firefox to refuse them in the first place. In the uMatrix world, I want to accept cookies in general and then let it manage them.

Well, more or less. uMatrix's approach is to accept all cookies but only let them be sent when you allow it. I decided I didn't entirely like having cookies hang around, so I've also added Self-Destructing Cookies to clean those cookies up later. SDC will also remove LocalStorage data, which I consider a positive since I definitely don't want random websites storing random amounts of things there.

(I initially felt grumpy about uMatrix's approach but have since come around to feeling that it's probably right for uMatrix, partly because of site-scoped rules. You may well have a situation where the same cookies are 'accepted' and sent out on some sites but blocked on others. uMatrix's approach isn't perfect here but it more or less allows this to happen.)

Another obvious in retrospect thing was YouTube videos embedded in other sites. Although you wouldn't know it without digging under the surface, these are in embedded iframes, so it's not enough to just allow YT's JavaScript on a site where you want them; you also need to give YT 'frame' permissions. I've chosen not to do this globally, because I kind of like just opening YT videos in another window using the link that uMatrix gives me.
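In rule terms, letting one particular site embed YouTube works out to something like the following sketch, where example.org stands in for whatever site I actually want the embedded player on:

  example.org youtube.com frame allow
  example.org youtube.com script allow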

I have had one annoying glitch in my home Firefox with uMatrix, but once I dug deep enough it appears that there's something unusual going on in my home Firefox 56. At first I thought it was weird network issues with Google (which I've seen before in this situation), but now I'm not sure; in any case I get a consistent NS_ERROR_FAILURE JavaScript failure deep in Google Groups' 'loaded on the fly' JS code. This is un-debuggable and un-fixable by me, but at least I have my usual option to fall back on.

('Things break mysteriously if you have an unusual configuration and even sometimes if you don't' is basically the modern web experience anyway.)

PS: A subtle benefit of using uMatrix is that it also exists for Chrome, so I can have the same interface and even use almost the same ruleset in my regular mode Chrome.

PPS: I'll have to replace Self-Destructing Cookies with something else when I someday move to Firefox Quantum, but as covered, I already have a candidate.

FirefoxUMatrixNotesII written at 01:59:09
