Wandering Thoughts archives

2015-06-11

HTTP should be dropped even as a pure Internet transport mechanism

In a comment on my pragmatic view on switching to HTTPS, Aristotle Pagaltzis wrote in part (in two bits I'm replying to separately):

HTTPS basically disables caching. And caching obscures traffic flow by terminating it locally, dispersing and diffusing it.

Plain HTTP caching today has two essentially fatal problems for most people: your 'last mile' ISP can snoop your traffic, and it can alter that traffic to do things like insert malicious content. Yes, I consider 'ride along' JavaScript ads to be malicious content. The reality is that on today's Internet, your ISP is a threat.

(Your last mile ISP may be doing this on its own behalf, it may be doing it under orders from someone, or it may have had its network quietly altered to do this.)

Since all of these are happening today, it is my view that plain HTTP caching is not worth keeping. It is possible to use it for good, but in practice it and the closely related issue of forced HTTP proxying are too often either used for evil or exploited for evil.

Then:

Now you must reveal to someone that you are interested in certain public content. But if you can verify the integrity of that content you could have a choice of intermediaries to fetch it from, depending on whom you want to reveal your interest to, and whom you want to conceal it from. And no one stops you from using TLS to them in order to shut out intrepid eavesdroppers, of course.

This is clearly a new protocol that carries integrity information along with it and simply uses HTTP as a transport. It's my view that you must use TLS even with this, for what are ultimately pragmatic reasons. If the transport protocol uses unencrypted links such as HTTP, there are two issues.

First, the transport protocol leaks information about what you're reading to your 'last mile' ISP (and possibly others). We already know that ISPs will monitor and exploit this information if it is available. It doesn't matter if you obscure information about where you fetch the resource from; the mere fact that you are receiving web pages about specific subject matter is a deadly giveaway. Expect this information about your browsing habits and your interests to be sold to the usual suspects.

Second, let's consider the user experience of what happens if the ISP takes advantage of this plain text transfer to actually inject its own content. Of course the resource your web browser has fetched fails integrity checks, so your browser should refuse to show it to you, right? Well, this is the XHTML problem, or perhaps the HTTPS certificate alert problem. The content is there (as the user sees it) except that the browser is not showing it to the user for essentially arbitrary reasons. In my jaundiced view this is not a stable situation and not one that users are going to enjoy. All it needs is one browser to defect to allowing 'show me the content anyway' and then all ISPs are helpfully telling their users to use that option, honest.
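(As a purely hypothetical sketch of the mechanics involved, here is roughly what an integrity-verified fetch over plain HTTP might look like in Python, assuming the expected digest arrives out of band through whatever integrity-carrying protocol is wrapping HTTP; the URL and digest are made up for illustration. Note that when the check fails, all the code can do is refuse to hand over the content, which is exactly the awkward user experience I'm describing.)

```python
import hashlib
import urllib.request

def fetch_verified(url, expected_sha256):
    """Fetch url over plain HTTP and check it against an out-of-band digest.

    The body is returned only if the SHA-256 of what actually arrived
    matches; otherwise we refuse, since someone on the path may have
    altered the content.
    """
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    actual = hashlib.sha256(body).hexdigest()
    if actual != expected_sha256:
        raise ValueError("integrity check failed: got %s" % actual)
    return body

# Hypothetical usage; both the URL and the digest are invented here.
# page = fetch_verified("http://example.org/public/page.html",
#                       digest_supplied_by_the_integrity_protocol)
```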

Of course an ISP could try to MITM your HTTPS traffic. But we've seen how that show plays out and it doesn't go well for the people doing the interception, primarily but not entirely for social reasons. Without the ability to see and alter cleartext, your ISP is essentially helpless to make alterations; blindly altering the encrypted stream generally creates totally garbled and corrupt results (even ignoring the integrity checks, which will fail).

So my view is that HTTP cannot be used as a transport across the open Internet for anything. At a minimum it can't be used for anything that will wind up in a user browser, and I feel that even with strong integrity checks it leaks too much information to your ISP. Put a fork in it, it's done.

(There are plenty of legacy transport uses of HTTP across the open Internet, of course, so in practice it's not going away any time soon as far as tools are concerned. Browsers are another matter.)

HTTPNotEvenTransport written at 00:15:40; Add Comment

2015-06-10

My pragmatic view on switching to HTTPS and TLS everywhere

Roy Fielding is not a fan of using TLS everywhere (via Aristotle Pagaltzis, among others). He argues, as far as I can understand, that TLS mostly provides confidentiality (and a certain amount of integrity), not privacy; if I have it right, his view of 'privacy' seems to be 'privacy even from the site operator'.

My view on all of this is that I'm a pragmatist. Right now there are real, non-theoretical intermediaries between me and a lot of the people who are reading this who are intercepting and logging HTTP traffic. Some of them are adding individually identifiable tracking information that I could take note of if I wanted to, and some are altering the content for various reasons (including the alteration of 'we'll block access to this now that we see what it is'). None of this is theoretical or obscure and increasingly it's not even uncommon. And it's all going to get worse if we let it because all of the intermediaries involved gain value from doing this kind of stuff; they have been restrained so far only by some combination of lingering legal concerns and technical (or budget) limits.

So I strongly disagree with Fielding when he says:

TLS is NOT desirable for access to public information, except in that it provides an ephemeral form of message integrity that is a weak replacement for content integrity.

First off, that 'except' is a really important thing, as we've seen. By itself I feel that preventing third parties from tampering with web-fetched resources in flight is now a vital concern, since third parties are actually doing it now. But I also disagree about the general issue of access to public information.

Libraries are full of public information, pretty much by definition. Yet librarians zealously guard (and block) access to information about who has checked out what, because they understand that revealing that information can be damaging. What public information you access says a lot about you and your concerns. To stretch the analogy even further, it's useful for librarians to protect your borrowing records even though a sufficiently dedicated third party could deduce much of the information given enough work.

Are HTTPS and TLS perfect? Of course not. Do they still betray some information about your requests? Of course. But they are still the best tool we have at hand to deal with the serious problems that we are having right now. HTTPS everywhere will unquestionably cramp the style of a bunch of people who are up to no good, which beats letting them continue on undisturbed.

(In security, as in much else, the perfect is the enemy of the good.)

(It's also my outsider's opinion that the IETF is probably the wrong place to come up with new cryptography and privacy standards. I suspect that in practice the IETF is better served by recommending and using existing practices such as TLS. This is partly because TLS already exists in widely available form, making it easy to adopt and use.)

HTTPSEverywherePragmatics written at 02:13:49; Add Comment

2015-06-03

What makes for a simple web application environment

Suppose that you want to create a simple web application environment, something that can compete with CGI programs and PHP to attract people who have modest needs and just want to throw something programmable up on their website. What does it need to have? As it happens I have some views on this, formed from my long run of using and abusing CGI programs (although I haven't done PHP).

In my opinion, anything that wants to displace CGI or PHP in people's affections needs to be as simple to get going as they are (see eg the attractions of CGI scripts). This means:

  • A simple deployment scheme that is as close to 'copy a single thing and go' as possible. If one step of deploying a new web app in your system is 'edit the web server configuration file', you've already lost. I don't think this requires an in-server environment, the way PHP is in Apache, but I think it does require that any service daemons get auto-started and auto-reloaded, and that you have some simple way of hooking them up to the main server.

    These days I would hold my nose and say that the right answer for simple multi-file deployments is to let people push .zip files to the server and have the server run things from them in place (and fetch things from them and so on). Python already does this to some degree, so there's an existence proof.

  • No persistent state between requests, at least by default. Not having persistent state means that code can still work fine even though it's sloppy, and people are going to write sloppy code in a simple, 'get things done quick' web app environment.

    (I feel somewhat divided about this because it immediately hammers a language I rather like. Maybe you can set up your programming environment to strongly discourage global state and have that be good enough, but I'm kind of dubious. Or maybe Go will turn out to be a special case where goroutines are good enough isolation.)

  • A simple and easy programming model for handling web requests in general. My view is that inspecting an environment and printing things out is quite simple and easy to deal with, and thus the more you depart from this, the more difficult your environment is. A 'hello world' web app equivalent should not be very many more lines of code than a plain command line one (see the sketch after this list).

    (Similarly, getting the POST parameters should be either dirt simple or essentially automatic.)

    One important aspect of this is that the programming model should look exactly as if you're handling the request in the main web server. If the main web server forwards requests to the simple web app environment with HTTP, you must hide the seams involved here (cf). This basically means running your own custom protocol on top of HTTP to forward all of the information that HTTP will overwrite, then restoring it before people's code sees it.
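To illustrate the 'hello world' bar I have in mind, here is roughly what the classic CGI version looks like in Python. This is a sketch of the environment-and-stdout model in general, not the API of any particular new environment.

```python
#!/usr/bin/env python3
# A minimal sketch of the 'inspect the environment, print things out'
# CGI model; the environment variables used are standard CGI ones.
import os

def main():
    # Headers, a blank line, then the body.
    print("Content-Type: text/plain")
    print("")
    print("hello world")
    # Request information arrives as plain environment variables.
    print("Method:", os.environ.get("REQUEST_METHOD", "GET"))
    print("Query string:", os.environ.get("QUERY_STRING", ""))
    # (POST parameters are nearly as simple; in Python the standard
    # cgi.FieldStorage class pulls them out of stdin for you.)

if __name__ == "__main__":
    main()
```

The whole thing is only a few lines longer than a command line 'hello world', which is the sort of bar a new simple environment has to clear.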

The last point leads me to the view that URL routing should not be part of the basic layer of your simple web app environment to any particularly visible extent, since URL routing can get quite complex very fast. Ideally URL routing is part of the deployment process, not the coding process, in the same way that copying a CGI script or a PHP page to a particular place in the directory hierarchy does the 'URL routing' for them.

A simple web app environment can optionally provide more sophisticated features if they don't get in the way, and there's certainly a lot of long term benefit to doing so. But if you're aiming for a simple environment, start with the 'hello world' case and make sure it stays simple.

(Whether creating such a new simple web app environment is possible any more is an open question. It clearly requires significant integration with the main web server, and whether you can get that any more is uncertain. CGI and PHP are both kind of relatively unique historical artifacts in Apache, after all. Maybe we're just stuck with no good general simple web app environments to compete with them.)

SimpleAppEnvironmentMakeup written at 01:41:20; Add Comment

2015-05-31

My view of setting up sane web server application delegations

One of the things that drives the appeal of CGI scripts, compared to, say, a bunch of standalone programs that implement web applications, is the easy deployment story; the standalone programs run into the deployment configuration problem. When you have a bunch of applications on the same machine (and using the same IP), you need a central web server to take incoming requests and dispatch them out to the appropriate individual web app (based on incoming host and URL), and this central web server needs to be configured somehow. So how do you do this in a way that leads to easy, sane deployment, especially in a multiple user environment where not everyone can sit there editing central configuration files as root?

My views on this have come around to the idea that you want some equivalent of Apache per-directory .htaccess files in a directory structure that directly reflects the hosts and URLs being delegated. There are a couple of reasons for this.

First, a directory based structure creates natural visibility and enforces single ownership of URL and host delegations. If you own www.fred.com/some/url, then you are in charge of that URL and everything underneath it. No one can screw you up by editing a big configuration file (or a set of them) and missing that /some/url has already been delegated to you somewhere in hundreds of lines of configuration setup; your ownership is sitting there visible in the filesystem, and taking it over means taking over your directory, which Unix permissions will forbid without root's involvement.

Second, using some equivalent of .htaccess files creates delegation of configuration and control. Within the scope of the configuration allowed in the .htaccess equivalent, I don't need to involve a sysadmin in what I do to hook up my application, control access to it, have the native master web server handle some file serving for me, or whatever. Of course the minimal approach is to support none of this in the master server (with the only thing the .htaccess equivalent can do being to tell the master server how to talk to my web app process), but I think it's useful to do more than that. If nothing else, directly serving static files is a commonly desired feature.

(Apache .htaccess is massively powerful here, which makes it quite useful and basically the gold standard of this. Many master web servers will probably be more minimal.)

To the extent that I can get away with it, I will probably configure all of my future Apache setups this way (at least for personal sites). Unfortunately there are some things you can't configure this way in Apache, often for good reason (for example, mod_wsgi).

(This entry is inspired by a Twitter conversation with @eevee.)

Sidebar: doing this efficiently

Some people will quail at the idea of the master web server doing a whole series of directory and file lookups in the process of handling each request. I have two reactions to this. First, this whole idea is probably not appropriate for high load web servers because on high load web servers you really want and need more central control over the whole process. If your web server machine is already heavily loaded, the last thing you want to do is enable someone to automatically set up a new high-load service on it without involving the (Dev)Ops team.

Second, it's possible to optimize the whole thing via a process of registering and (re)loading configuration setups into the running web server. This creates the possibility of the on-disk configuration not reflecting the running configuration, but that's a tradeoff you pretty much need to make unless you're going to be very restrictive. In this approach you edit your directory structure and then poke the web server with some magic command so that it takes note of your change and redoes its internal routing tables.
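As a sketch of what that registration and (re)loading pass might look like, here is some illustrative Python. Everything here is hypothetical: the /srv/webdelegate root, the app.conf file name, and its one-line format are all made up to show the shape of the idea, which is that the master server walks the delegation tree only when poked and builds an in-memory routing table instead of doing directory lookups on every request.

```python
import os

# Hypothetical layout: /srv/webdelegate/<host>/<url path>/app.conf, where
# each app.conf holds a line like 'backend = http://localhost:8001/'.
DELEGATION_ROOT = "/srv/webdelegate"

def load_routing_table(root=DELEGATION_ROOT):
    """Walk the delegation tree and build a (host, url prefix) -> backend map."""
    table = {}
    for dirpath, dirnames, filenames in os.walk(root):
        if "app.conf" not in filenames:
            continue
        rel = os.path.relpath(dirpath, root)
        if rel == ".":
            continue  # nothing should be delegated at the very top level
        host, _, urlpath = rel.partition(os.sep)
        prefix = "/" + urlpath.replace(os.sep, "/")
        with open(os.path.join(dirpath, "app.conf")) as f:
            for line in f:
                key, _, value = line.partition("=")
                if key.strip() == "backend":
                    table[(host, prefix)] = value.strip()
    return table

# The running server keeps the result and routes each request by the longest
# matching (host, prefix); it re-runs this only when poked to reload.
```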

IdealServerDelegationSetup written at 01:11:07; Add Comment

2015-05-21

It's time for me to stop using lighttpd

There's another SSL configuration vulnerability going around; this one is called Logjam (also). One of the suggested fixes for it is to generate your own strong Diffie-Hellman group instead of using one of the default groups, and of course another fix is yet more SSL parameter fiddling. There have been quite a lot of SSL/TLS related issues lately, and many of them have required SSL parameter fiddling at least in the short term.

I've had a long-standing flirtation with lighttpd and my personal site has used it since the start. But this latest SSL issue has crystallized something I've been feeling for a while, which is that lighttpd has not really been keeping up with the SSL times. Lighttpd cannot configure or de-configure a number of things that people want it to; for example, it has no option to disable TLS v1.0 or SSL compression (although the latter is probably off in OpenSSL by now). OCSP stapling? You can forget it (from all appearances). On top of that, the last release of lighttpd 1.4.x was a year ago, which is an eternity in SSL best practices land.

For a while now I've been telling people, when they asked me, that I couldn't recommend lighttpd for new deployments if they cared about SSL security at all. Since I increasingly care about SSL myself, it's really time for me to follow my own advice and move away from lighttpd to something else (Apache is the most likely candidate, despite practical annoyances in my environment). It'll be annoying, but in the long run it will be good for me. I'll have an SSL configuration that I have much more trust in and that is much better supported by common resources like Mozilla's SSL configuration generator and configuration guidelines.

There's certainly a part of me that regrets this, since lighttpd is a neat little idea and Apache is kind of a hulking monstrosity. But in practice, what matters on the Internet is that unmaintained software decays. Lighttpd is in practice more or less unmaintained, while Apache is very well maintained (partly because so many people use it).

(Initially I was going to write that dealing with Logjam would push me over the edge right away, but it turns out that the Logjam resources page actually has settings for lighttpd for once.)

AbandoningLighttpd written at 01:08:44; Add Comment

2015-05-20

On the modern web, ISPs are one of your threats

Once upon a time, it was possible to view the Internet as a generally benevolent place as far as your traffic was concerned. Both passive eavesdropping and man in the middle attacks were uncommon and generally took aggressive attackers to achieve (although it could be done). Eavesdropping attacks were things you mostly worried about on (public) wifi or in unusual environments like conference networks.

I am afraid that those days are long over now. On the modern Internet, ISPs themselves are one of your threats (both your ISP and other people's ISPs). ISPs routinely monitor traffic, intercept traffic, modify traffic on the fly both for outgoing requests (eg) and for incoming replies from web servers ('helpfully' injecting hostile JavaScript and HTML into pages is now commonplace), and do other malfeasance. To a certain extent this is more common on mobile Internet than on good old fashioned fixed Internet, but this is not particularly reassuring; an increasing amount of traffic is from mobile devices, and ISPs are or will be adding this sort of stuff to fixed Internet as well because it makes them more money and they like cash.

(See for example the catalog of evil things various ISPs are doing laid out in We're Deprecating HTTP And It's Going To Be Okay (via). Your ISP is no longer your friend.)

The only remedy that the Internet has for this today is strong encryption, with enough source authentication that ISPs cannot shove themselves in the middle without drastic actions. This is fundamentally why it's time for HTTP-only software to die; the modern Internet strongly calls for HTTPS.

This is a fundamental change in the Internet and not a welcome one. But reality is what it is and we get to deal with the Internet we have, not the Internet we used to have and we'd like to still have. And when we're building things that will be used on today's Internet it behooves us to understand what sort of a place we're really dealing with and work accordingly, not cling to a romantic image from the past of a friendlier place.

(If we do nothing and keep naively building for a nicer Internet that no longer exists, it's only going to get worse.)

ISPsAreThreats written at 02:07:09; Add Comment

2015-05-12

It's time to stop coddling software that can't handle HTTPS URLs

A couple of years ago I moved my personal website from plain HTTP to using HTTPS. When I did that, one of the lessons I learned was that there were a certain number of syndication feed fetchers that didn't support HTTPS requests at all. My solution at the time was to sigh and add some bits to my lighttpd configuration so they'd be allowed to still fetch the HTTP version of my syndication feeds. Now I'm in the process of moving this blog from HTTP to HTTPS, and so I've been considering what I'll do about issues like this for here. This time around my decision is that I'm not going to create any special rules; anything fetching syndication feeds or web pages from here that can't do HTTPS (or follow redirections) is flat out of luck.

There are some pragmatic reasons for this, but ultimately it comes down to that I think it's now clearly time that we stopped accepting and coddling software that can only deal with HTTP URLs. The inevitable changes of the Internet have rendered such software broken. It's clear that HTTPS is increasingly the future of web activity and also clear that a decent number of sites will be moving to it via HTTP to HTTPS redirection. Software that cannot cope with both of these is decaying; the more sites that do this, the more pragmatically broken the software is.
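(For what it's worth, coping with an HTTP to HTTPS redirect takes essentially no work on the client side in any modern language. Here's a minimal Python sketch, with a hypothetical feed URL, of the behaviour I'm expecting from feed fetchers: ask for the plain HTTP URL, get silently redirected, and read the feed from the HTTPS version.)

```python
import urllib.request

# Hypothetical feed URL; the point is that urllib's default handlers follow
# the 301/302 redirect to the https:// version without any special code.
FEED_URL = "http://example.org/blog/atom.xml"

def fetch_feed(url=FEED_URL):
    with urllib.request.urlopen(url) as resp:
        # resp.geturl() is the final URL after redirects, i.e. the
        # https:// address if the site redirects plain HTTP.
        return resp.geturl(), resp.read()

if __name__ == "__main__":
    final_url, body = fetch_feed()
    print("fetched %d bytes from %s" % (len(body), final_url))
```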

I'm not going to say that you should never give in and accommodate decaying, broken software; if nothing else, I certainly have made some accommodations myself. But when I do that, I do it on a case by case basis and only when I've decided that it's sufficiently important; I don't do it generally. Coddling broken software in general only prolongs the pain, not just for you but for everyone. In this case, the more we accommodate HTTP only software the more traffic remains HTTP (and subject to snooping and alteration) instead of moving to HTTPS. HTTPS is not ideal, but it's clear that an HTTPS network is an improvement over the HTTP one we have today in practice.

This is likely going to hurt me somewhat (and already has, as some Planets (also) that carry Wandering Thoughts apparently haven't coped with this). But even apart from the pragmatic impossibility of trying to pick through all of the requests to here to see which ones aren't successfully transitioning to HTTPS, I'm currently just not willing to coddle such bad software any more. It's 2015. You'd better be ready for the HTTPS transition because it's coming whether you like it or not.

The reason I feel like this now when I didn't originally is pretty simple: more time has passed. The whole situation with HTTP and HTTPS on the Internet has evolved significantly since 2013, and there is now real and steadily increasing momentum behind the HTTPS shift. What was kind of wild eyed and unreasonable in 2013 is increasingly mainstream.

NoMoreHTTPOnlySoftware written at 00:01:55; Add Comment

2015-04-30

I'm considering ways to mass-add URLs to Firefox's history database

I wrote yesterday about how I keep my browser history forever, because it represents the memory of what I've read. A corollary of this is that it bugs me if things I've read don't show up as visited URLs. For example, if all of the blog entries and so on here at Wandering Thoughts were to turn unvisited tomorrow, that'd make me twitch every time I read something here and saw a blue link that should instead be visited purple.

(One of the reasons for this is that links showing visited purple is a sign that they point to the right place. Under normal circumstances, if links on Wandering Thoughts suddenly go blue, something has probably broken. And when I'm drafting entries, a nominal link to an older entry that shows blue is a sign that I got the link wrong.)

Which winds up with the problem: Wandering Thoughts and indeed this entire site is in the process of moving from HTTP to HTTPS. The HTTP versions of all of the entries and so on are in my Firefox history database, but Firefox properly considers the HTTPS version to be a completely different URL and so not in the history. So, all of a sudden, all of my entries and links and so on are unvisited blue. At one level this is not a problem. After all, I know that I've read them all (I wrote them). In theory, I could leave everything here alone, then maybe re-visit links one by one as I use them in new entries or otherwise run across them. But the whole situation bugs me; by now, seeing all the links be purple is reassuring and the way things should be, while blue links here make me twitch.

Conceptually the fix is simple. All I have to do is get every HTTP URL for here out of my existing history database, mechanically turn the 'http:' into 'https:', and then add all of the new URLs to Firefox's history database. All of the last visited and so on values can be exactly copied from the HTTP version of the URL. The only problem is that as far as I know there is no tool or extension for doing this.

(There are plenty of addons for removing history entries, which is of course exactly the opposite of what I want.)

These days, Firefox's history is in a SQLite database (places.sqlite in your profile directory). There are plenty of tools and packages to manipulate SQLite databases, which leaves me with merely the problem of figuring out what actually goes into a history entry in concrete detail (and then calculating everything that isn't obvious). So all of this is achievable, but on the other hand it's clearly going to be a bunch of work.

(While the Places database is documented, parts of this documentation are out of date. In particular, current Firefox places.sqlite has a unique guid field in the moz_places table.)

PS: The other obvious nefarious hack is to literally rewrite the URLs in all current history entries to be 'https:' instead of 'http:', possibly by dumping and then reloading the moz_places table. Assuming that you can change the URL schema without invalidating any linkages in the database, this is simple. Unfortunately it has a brute force inelegance that makes me grumpy; it's clearly the expedient fix instead of the right one.
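(For the record, here's roughly what that expedient in-place rewrite looks like using Python's sqlite3 module. The paths and site prefix are hypothetical stand-ins, the moz_places schema is as documented and so may differ between Firefox versions, and you'd want to do this only on a copy of places.sqlite with Firefox shut down.)

```python
import sqlite3

# Hypothetical values: point these at a copy of your profile's database
# and at the site prefix you're migrating.
DB_PATH = "places-copy.sqlite"
OLD_PREFIX = "http://example.org/blog/"
NEW_PREFIX = "https://example.org/blog/"

conn = sqlite3.connect(DB_PATH)
with conn:
    # Rewrite the URL scheme in place, skipping any entry whose https:
    # version is already in the history so we don't create duplicate URLs.
    conn.execute(
        """UPDATE moz_places
              SET url = replace(url, ?, ?)
            WHERE url LIKE ? || '%'
              AND replace(url, ?, ?) NOT IN (SELECT url FROM moz_places)""",
        (OLD_PREFIX, NEW_PREFIX, OLD_PREFIX, OLD_PREFIX, NEW_PREFIX),
    )
conn.close()
```

Since only the scheme changes, fields like rev_host and the visit records in moz_historyvisits presumably stay valid, which is most of why the expedient version is so much less work than properly cloning history entries.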

FirefoxAddHistoryDesire written at 23:36:51; Add Comment

Why I have a perpetual browser history

I've mentioned in passing that I keep my browser's history database basically forever, and I've also kind of mentioned that it drives me up the wall when web sites make visited links and unvisited links look the same. These two things are closely related.

Put simply, the visited versus unvisited distinction between links is a visible, visual representation of your current state of dealing with a (good) site. A visited link tells you 'yep, I've been there, no need to visit again'; an unvisited link tells you that you might want to go follow it. This representation of state is very important because otherwise we must fall back on our fallible, limited, and easily fooled human memories to try to keep track of what we've read and haven't read. This fallback is both error-prone and a cognitive load; mental effort you're spending to keep track of what you've read is mental effort you can't use on reading.

Of course this doesn't work on all sites (and doesn't work all the time even on 'good' sites). I'm sure you can come up with any number of sites and any number of ways that this breaks down, where the visited versus unvisited state of a page isn't important or useful information. But it works well enough on enough sites to be extremely useful in practice, at least for me.

And this is why I want my browser history to last forever. My browser history is the collected state representation of what I have and haven't read. It tracks things not just now, in my currently active browsing session as I work through something, but also back through time, because I don't necessarily forget things I've read long ago (but at the same time I don't necessarily remember them well enough to be absolutely confident that I've already read them). For that matter, I don't always get through big or deep sites in one go, so again the visited link history is a history of how far I've gotten in archives or reference articles or the like.

There is nothing else on the web that can give me this state recall, nothing else that serves to keep track of 'how far have I gotten' and 'have I already seen this'. The web without it is a much more spastic and hyperactive place. It's a relatively more hyperactive place if I only have a short-term state recall; I really do want mine to last basically forever.

(In fact for me anything without a read versus unread state indicator is an irritatingly spastic and hyperactive place. All sorts of things are vastly improved by having it, and lack of it causes me annoyance (and that example is on the web).)

BrowserHistoryForever written at 00:14:42; Add Comment

2015-04-10

My Firefox 37 extensions and addons (sort of)

A lot has changed in less than a year since I last tried to do a comprehensive inventory of my extensions, so I've decided it's time for an update since things seem to have stabilized for the moment. I'm labeling this as for Firefox 37 since that's the latest version, just out, but I'm actually running Firefox Nightly (although for me it's more like 'Firefox Weekly', since I only bother quitting Firefox to switch to the very latest build once in a while). I don't think any of these extensions work better in Nightly than in Firefox 37 (if anything, some of them may work better in F37).

Personally I hope I'm still using this set of extensions a year from now, but with Firefox (and its addons) you never know.

Safe browsing:

  • NoScript to disable JavaScript for almost everything. In a lot of cases I don't even bother with temporary whitelisting; if a site looks like it's going to want lots of JavaScript, I just fire it up in my Chrome Incognito environment.

    NoScript is about half of my Flash blocking, but is not the only thing I have to rely on these days.

  • FlashStopper is the other half of my Flash blocking and my current solution to my Flash video hassles on YouTube, after FlashBlock ended up falling over. Note that contrary to what its name might lead you to expect, FlashStopper blocks HTML5 video too, with no additional extension needed.

    (In theory I should be able to deal with YouTube with NoScript alone, and this even works in my testing Firefox. Just not in my main one for some reason. FlashStopper is in some ways nicer than using NoScript for this; for instance, you see preview pictures for YouTube videos instead of a big 'this is blocked' marker.)

  • µBlock has replaced the AdBlock family as my ad blocker. As mentioned I mostly have this because throwing out YouTube ads makes YouTube massively nicer to use. Just as other people have found, µBlock clearly takes up the least memory out of all of the options I've tried.

    (While I'm probably not all that vulnerable to ad security issues, it doesn't hurt my mood that µBlock deals with these too.)

  • CS Lite Mod is my current 'works on modern Firefox versions' replacement for CookieSafe after CookieSafe's UI broke for me recently (I needed to whitelist a domain and discovered I couldn't any more). It appears to basically work just like CookieSafe did, so I'm happy.

I've considered switching to Self-Destructing Cookies, but how SDC mostly works is not how I want to deal with cookies. It would be a good option if I had to use a lot of cookie-requiring sites that I didn't trust for long, but I don't; instead I either trust sites completely or don't want to accept cookies from them at all. Maybe I'm missing out on some conveniences that SDC would give me by (temporarily) accepting more cookies, but so far I'm not seeing it.

My views on Ghostery haven't changed since last time. It seems especially pointless now that I'm using µBlock, although I may be jumping to assumptions here.

User interface (in a broad sense):

  • FireGestures. I remain absolutely addicted to controlling my browser with gestures and this works great.

    (Lack of good gestures support is the single largest reason I won't be using Chrome regularly any time soon (cf).)

  • It's All Text! handily deals with how browsers make bad editors. I use it a bunch these days, and in particular almost all of my comments here on Wandering Thoughts are now written with it, even relatively short ones.

  • Open in Browser because most of the time I do not want to download a PDF or a text file or a whatever, I want to view it right then and there in the browser and then close the window to go on with something else. Downloading things is a pain in the rear, at least on Linux.

(I wrote more extensive commentary on these addons last time. I don't feel like copying it all from there and I have nothing much new to say.)

Miscellaneous:

  • HTTPS Everywhere basically because I feel like using HTTPS more. This sometimes degrades or breaks sites that I try to browse, but most of my browsing is not particularly important so I just close the window and go do something else (often something more productive).

  • CipherFox gives me access to some more information about TLS connections, although I'd like a little bit more (like whether or not a connection has perfect forward secrecy). Chrome gets this right even in the base browser, so I wish Firefox could copy them and basically be done.

Many of these addons like to plant buttons somewhere in your browser window. The only one of these that I tolerate is NoScript's, because I use that one reasonably often. Everyone else's button gets exiled to the additional dropdown menu, where they work just fine on the rare occasions when I need them.

(I would put more addon buttons in the tab bar area if they weren't colourful. As it is, I find the bright buttons too distracting next to the native Firefox menu icons I put there.)

I've been running this combination of addons in Firefox Nightly sessions that are now old enough that I feel pretty confident that they don't leak memory. This is unlike any number of other addons and combinations that I've tried; something in my usage patterns seems to be really good at making Firefox extensions leak memory. This is one reason I'm so stuck on many of my choices and so reluctant to experiment with new addons.

(I would like to be able to use Greasemonkey and Stylish but both of them leak memory for me, or at least did the last time I bothered to test them.)

PS: Firefox Nightly has for some time been trying to get people to try out Electrolysis, their multi-process architecture. I don't use it, partly because any number of these extensions don't work with it and probably never will. You can apparently check the 'e10s' status of addons here; I see that NoScript is not e10s ready, for example, which completely rules out e10s for me. Hopefully Mozilla won't be stupid enough to eventually force e10s and thus break a bunch of these addons.

Firefox37Extensions written at 02:14:56; Add Comment

