In praise of uBlock Origin's new 'element zapper' feature
The purpose of the element zapper is to quickly deal with the removal of nuisance elements on a page without having to create one or more filters.
uBlock Origin has always allowed you to permanently block page elements, and a while back I started using it aggressively to deal with the annoyances of modern websites. This is fine and works nicely, but it takes work. I have to carefully pick out what I want to target, maybe edit the CSS selector uBlock Origin has found, preview what I'm actually going to be blocking, and then I have a new permanent rule cluttering up my filters (and probably slightly growing Firefox's memory usage). This work is worth it for things that I'm going to visit regularly, but some combination of the amount of work required and the fact that I'd be picking up a new permanent rule made me not do it for pages I was basically just visiting once. And usually things weren't all that annoying.
Enter Medium and their obnoxious floating sharing bar at the
bottom of pages.
These things can be blocked on Medium's website itself with a
straightforward rule, but the problem is that tons of people use
Medium with custom domains. For example, this article
that I linked to in a recent entry. These days it seems like
every fourth article I read is on some Medium-based site (I exaggerate,
but), and each of them has the Medium sharing bar, and each of
them needs a new site-specific blocking rule unless I want to
globally block all <div>s with the class js-stickyFooter (until
Medium changes the name).
(Globally blocking such a <div> is getting really tempting, though. Medium feels like a plague at this point.)
The element zapper feature deals with this with no fuss or muss. If I wind up reading something on yet another site that's using Medium and has their floating bar, I can zap it away in seconds. The same is true of any number of floating annoyances. And if I made a mistake and my zapping isn't doing what I want, it's easy to fix; since these are one-shot rules, I can just reload the page to start over from scratch. This has already started encouraging me to do away with even more things than before, and just like when I started blocking elements, I feel much happier when I'm reading the resulting pages.
(Going all the way to using Firefox's Reader mode is usually too much of a blunt hammer for most sites, and often I don't care quite that much.)
PS: Now that I think about it, I probably should switch all of my
per-site blocks for Medium's floating bar over to a single
##div.js-stickyFooter block. It's unlikely to cause any collateral
damage and I suspect it would actually be more memory and CPU
efficient than a growing pile of per-site rules.
(And I should probably check over my personal block rules in general, although I don't have too many of them.)
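For concreteness, here is what the two approaches look like in uBlock Origin's "My filters" cosmetic filter syntax (the custom-domain names are made-up examples; only the js-stickyFooter class is real):

```
! Per-site rules, one for each Medium-hosted custom domain I run into:
some-medium-blog.example.com##div.js-stickyFooter
another-blog.example.org##div.js-stickyFooter

! A single global rule that hides the element everywhere
! (works until Medium renames the class):
##div.js-stickyFooter
```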
My situation with Twitter and my Firefox setup (in which I blame pseudo-XHTML)
Although it is now a little bit awkward to do this, let's start with my tweet:
When you visit a tweet's URL with JavaScript off, Twitter sends you to the mobile version of the page with a <noscript> meta-refresh, for example:
<noscript><meta http-equiv="refresh" content="0; URL=https://mobile.twitter.com/i/nojs_router?path=%2Fthatcks%2Fstatus%2F877738130656313344"></noscript>
Since I normally browse with JavaScript disabled in my
Firefox (via NoScript), Twitter
included, my Firefox acts on this
<noscript> block. What is
supposed to happen here is that you wind up on the mobile version
of the tweet, eg, and
then just sit there with things behaving normally. In my development
tree Firefox, the version of
this page that I get also contains another <noscript> meta-refresh:
<noscript><meta content="0; URL=https://mobile.twitter.com/i/nojs_router?path=%2Fthatcks%2Fstatus%2F877738130656313344" http-equiv="refresh" /></noscript>
This is the same URL as the initial meta-refresh, and so Firefox sits there going through this cycle over and over and over again, and in the mean time I see no content at all, not even the mobile version of the tweet.
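The loop is mechanical: the desktop page's <noscript> meta-refresh sends the browser to the mobile router URL, and the broken mobile page carries a meta-refresh pointing at that very same URL, so a browser that honours both never settles. A rough sketch of why this cycles (the regex and function names are my own illustration, not anything Twitter or Firefox actually uses):

```python
import re

# Crude extraction of a meta-refresh target URL, in the spirit of what
# a browser honouring <noscript> meta-refreshes has to do.  Real HTML
# parsing is far more involved; this only handles the pattern above.
REFRESH_RE = re.compile(r'<meta[^>]*content="\d+;\s*URL=([^"]+)"',
                        re.IGNORECASE)

def refresh_target(html):
    """Return the meta-refresh URL found in html, or None."""
    m = REFRESH_RE.search(html)
    return m.group(1) if m else None

# The desktop tweet page redirects to the mobile nojs router...
desktop = ('<noscript><meta http-equiv="refresh" content="0; '
           'URL=https://mobile.twitter.com/i/nojs_router?path='
           '%2Fthatcks%2Fstatus%2F877738130656313344"></noscript>')

# ...and, in the broken case, the mobile page redirects right back
# to the exact same router URL, so the cycle never terminates.
mobile = ('<noscript><meta content="0; '
          'URL=https://mobile.twitter.com/i/nojs_router?path='
          '%2Fthatcks%2Fstatus%2F877738130656313344" '
          'http-equiv="refresh" /></noscript>')

first = refresh_target(desktop)
second = refresh_target(mobile)
print(first == second)  # -> True: both refreshes point at the same URL
```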
In other environments, such as Fedora 25's system version of Firefox 54, Lynx, and wget, the mobile version of the tweet is a page without the circular meta-refresh. At first this difference mystified me, but then I paid close attention to the initial HTML I was seeing in the page source. Here is the start of the broken version:
<!DOCTYPE html> <html dir="ltr" lang="en"> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1,user-scalable=0" /> <noscript>[...]
(I suspect that this is HTML5.)
And here is the start of the working version:
<?xml version="1.0" encoding="utf-8"?> <!DOCTYPE html PUBLIC "-//WAPFORUM//DTD XHTML Mobile 1.1//EN" "http://www.openmobilealliance.org/tech/DTD/xhtml-mobile11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> [... much more verbiage ...]
Although this claims to be some form of XHTML in its declarations,
Twitter is serving this with a Content-Type of text/html, which
makes it plain old HTML soup as far as Firefox is concerned (which
is a famous XHTML issue).
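Schematically, the parsing mode comes down to the response header, not the document's own declarations (the exact charset parameter here is my assumption):

```
Content-Type: text/html; charset=utf-8    -> parsed as ordinary HTML soup,
                                             the XHTML declarations ignored
Content-Type: application/xhtml+xml       -> parsed as genuine XHTML
```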
What I don't understand is why Twitter serves HTML5 to me in one
browser and pseudo-XHTML to me in another. As far as I can tell,
the only significant thing that differs here between the system
version of Firefox and my custom-compiled one is the User-Agent
(and in particular both are willing to accept XHTML). I can get
Twitter to serve me HTML5 using
wget, but it happens using
either User-Agent string:
wget -q -O - --user-agent 'Mozilla/5.0 (X11; Linux x86_64; rv:56.0) Gecko/20100101 Firefox/56.0' https://mobile.twitter.com/thatcks/status/877738130656313344 | less
Sidebar: How I worked around this
Initially I went on a long quest to try to find an extension that would turn this off or some magic trick that would make Firefox ignore it (and I failed). It turns out that what I need is already built into NoScript; the Advanced settings have an option for 'Forbid META redirections inside <NOSCRIPT> elements', which turns off exactly the source of my problems. This applies to all websites, which is a bit broader of a brush than would be ideal, but I'll live with it for now.
(I may find out that this setting breaks other websites that I use, although I hope not.)