Private browsing mode versus a browser set to keep nothing on exit
These days, apparently a steadily increasing variety of websites are refusing to let you visit their site if you're in private browsing or incognito mode. These websites are advertising that their business model is invading your privacy (not that that's news), but what I find interesting is that these sites don't react when I visit them in a Firefox that has a custom history setting of 'clear history when Firefox closes'. As far as I can tell this still purges cookies and other website traces as effectively as private browsing mode does, and it has the side benefit for me that Firefox is willing to remember website logins.
(I discovered this difference between the two modes in the aftermath of moving away from Chrome.)
So, this is where I say that everyone should do this instead of using private browsing mode? No, not at all. To be bluntly honest, my solution is barely usable for me, never mind someone who isn't completely familiar with Firefox profiles and capable of wiring up a complex environment that makes it relatively easy to open a URL in a particular profile. Unfortunately Firefox profiles are not particularly usable, so much so that Firefox had to invent an entire additional concept (container tabs) in order to get a reasonably approachable version.
(Plus, of course, Private Browsing/Incognito is effectively a special purpose profile. It's so successful in large part because browsers have worked hard to make it extremely accessible.)
Firefox stores and tracks cookies (and presumably local storage) on a per-container basis, for obvious reasons, but apparently doesn't have per-container settings for how long they last or when they get purged. Your browsing history is global; history entries are not tagged with what container they're from. Mozilla's Firefox Multi-Account Containers addon looks like it makes containers more flexible and usable, but I don't think it changes how cookies work here, unfortunately; if you keep cookies in general, you keep them for all containers.
I don't think you can see what container a given cookie comes from through Firefox's normal Preferences stuff, but you can with addons like Cookie Quick Manager. Interestingly, it turns out that Cookie AutoDelete can be set to be container aware, with different rules for different containers. Although I haven't tried to do this, I suspect that you could set CAD so that your 'default' container (ie your normal Firefox session) kept cookies but you had another container that always threw them away, and then set Multi-Account Containers so that selected annoying websites always opened in that special 'CAD throws away all cookies' container.
(As covered in the Cookie AutoDelete wiki, CAD can't selectively remove Firefox localstorage for a site in only some containers; it's all or nothing. If you've set up a pseudo-private mode container for some websites, you probably don't care about this. It may even be a feature that any localstorage they snuck onto you in another container gets thrown away.)
A sign of people's fading belief in RSS syndication
Every so often these days, someone asks me if my blog supports RSS (or if I can add RSS support to it). These perfectly well meaning and innocent requests tell me two things, one of them obvious and one of them somewhat less so.
(To be completely clear about this: these people are pointing out a shortfall of my site design and are not to blame in any way. It is my fault that although Wandering Thoughts has a syndication feed, they can't spot it.)
The obvious thing is that Wandering Thoughts' current tiny little label and link at the bottom of some pages, the one that says 'Atom Syndication: Recent Pages', is no longer anywhere near enough to tell people that there is RSS here (much less draw their clear attention to it). Not only is it in a quite small font but it has all sorts of wording problems. Today, probably not very many people know that Atom is a syndication feed format, and even if they do, labelling it 'recent pages' is not very meaningful to someone who is looking for my blog's syndication feed.
(The 'recent pages' label is due to DWiki's existence as a general wiki engine that can layer a blog style chronological view on top of a portion of the URL hierarchy. From DWiki's perspective, all of my entries are wiki pages; they just get presented with some trimmings. I'm going to have to think about how best to fix this, which means that changes may take a while.)
The less obvious thing is that people often no longer believe that even obvious places have RSS feeds, especially well set up ones. You see, DWiki has syndication feed autodiscovery, where if you tell your feed reader the URL of Wandering Thoughts, it will automatically find the actual feed from there. In the days when RSS was pervasive and routine, you didn't look around for an RSS feed link or ask people; you just threw the place's main URL into your feed reader and it all worked, because of course everyone had an RSS feed and feed autodiscovery. One way or another, people evidently don't believe that any more, and I can't blame them; even among places with syndication feeds, an increasing number of them don't have working feed autodiscovery (cf, for one example I recently encountered).
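Mechanically, feed autodiscovery is nothing more than a <link> element in the page's HTML <head>; a sketch of what one looks like (the href and title here are illustrative, not Wandering Thoughts' actual feed URL):

```html
<head>
  <!-- Feed readers look for rel="alternate" links with a feed MIME type
       (application/atom+xml or application/rss+xml). -->
  <link rel="alternate" type="application/atom+xml"
        title="Atom feed" href="/blog/?atom">
</head>
```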
(People could also just not know about feed autodiscovery, but if feed autodiscovery worked reliably, I'm pretty sure that people would know about it as 'that's just how you add a place to your feed reader'.)
In other words, we've reached a point where people's belief in RSS has faded sufficiently that it makes perfect sense to them that a technical blog might not even have an RSS feed. They know what RSS is and they want it, but they don't believe it's automatically going to be there and they sort of assume it's not going to be. Syndication feeds have changed from a routine thing everyone had to a special flavour that you hope for but aren't too surprised when it's not present.
(The existence of syndication feed discovery in general is part of why the in-page labels for DWiki's syndication feeds are so subdued. When I put them together many years ago, I'm pretty sure that I expected feed autodiscovery would be the primary means of using DWiki's feeds and the in-page labels would only be a fallback.)
Staying away from Google Chrome after six months or so
Just short of six months ago, I wrote Walking away from Google Chrome, about how I had decided to stop using Chrome and only use Firefox. Although I didn't mention it in the entry, I implicitly included Chromium in this, which was really easy because I don't even have it installed on my Linux machines.
(A version of Chromium is available in Fedora, but it seems to be slightly outdated and I was always using Chrome in large part because of Google's bundled Flash, which is not in the open source Chromium build.)
Overall, I remain convinced that this is something that's worth doing, however small the impact of it may be. Subsequent developments in the Chrome world have reinforced both the alarming nature of Chrome's dominance and that Chrome's developers are either shockingly naive or deliberately working to cripple popular adblocking and content filtering extensions (see here, here, and here). Using Firefox is a little gesture against the former, however tiny, and provides me with some insulation from the latter, which it seems rather likely that Google will ram through sooner or later.
(It is not complete insulation, since many of the crucial extensions I use are developed for both Firefox and Chrome. One way or another, their development and use on Firefox would probably be affected by any Chrome changes here, if only because their authors might wind up with fewer users and less motivation to work on their addons.)
On a practical level I've mostly not had any problems sticking to this. My habits and reflexes proved more amenable to change than I was afraid of, and I haven't really had any problems with websites that made me want to just hit them with my incognito Chrome hammer. I've deliberately run Chrome a few times to test how some things behaved in it as compared to Firefox, but that's about it for my Chrome usage over the past six months (although I did have to do some initial work to hunt down various scripts that were using Chrome as their browser for various reasons).
My only significant use of Chrome was as my 'accept everything, make things work' browser. As I mentioned in my initial entry, in several ways Firefox works clearly better for this, and I've come to be more and more appreciative of them over the past six months. Cut and paste just works, Firefox requires no song and dance to remember my passwords, and so on. At this point I would find it reasonably annoying to switch much of my use back to Chrome.
The plague of 'you've logged in to our site again' notification emails
Several years ago, Twitter picked up a pretty annoying habit; it sent you email every time you logged in in a clean browser session. Twitter is not the only site to do this, and my perception is that this behavior is growing steadily (this may or may not be the reality; it may be that I've just started to need to log in to more sites that behave like this).
As far as I've ever seen in limited experimentation, the sites doing this are not applying any sort of intelligence or significant rate-limiting to the process. It doesn't matter how many times you've already logged in from the same IP with the same user-agent, and it doesn't really seem to matter how many times they've already sent you email that week; log in again and you'll get a new, nominally helpful email. And of course there's usually no way to tell the site to turn this off.
Perhaps there are some people in these companies that sincerely think that this is helping account security. If there are, I'm confident that they're completely wrong, simply because of the problem of false positives, a problem that is magnified due to how dominant email systems like GMail deal with email that users find of low value.
As a cynical person, I've always assumed that part of the reason for these reminders is not for security but to attempt to persuade people to stay logged in to the site. The kindest view of this is that the site is trying to increase engagement by getting you to reduce the friction of using it. The less kind view is that the site really wants to track you in detail, either just your actions on the site itself or as you move around the web (using various mechanisms).
(I'm willing to believe that on some sites, constant reminders are partly a 'well, we did something' means of providing people with excuses.)
PS: The more websites do this, the more I wish for a 'copy profile' option in Firefox. Perhaps I should look into container tabs to see if I can arrange something, likely using Multi-Account Containers.
An unpleasant surprise with part of Apache's AllowOverride directive
Suppose, not entirely hypothetically, that you have a general directory hierarchy for your web server's document root, and you allow users to own and maintain subdirectories in it. In order to be friendly to users, you configure this hierarchy like the following:
Options SymLinksIfOwnerMatch
AllowOverride FileInfo AuthConfig Limit Options Indexes
This allows people to use .htaccess files in their subdirectories to do things like disable symlinks or enable automatic directory indexes (which you have turned off here by default in order to avoid unpleasant accidents, but which is inconvenient if people actually have a directory of stuff that they just want to expose).
Congratulations, you have just armed a gun pointed at your foot.
Someday you may look at a random person's .htaccess in their subdirectory and discover:
Options +ExecCGI
AddHandler cgi-script .cgi
You see, as the fine documentation will explicitly tell you, the innocent looking 'Options' does exactly what it says on the can; it allows .htaccess files to turn on any Options directive. Some of these options are harmless, such as 'Options Indexes', while others of them are probably things that you don't want people turning on on their own without talking to you first.
(People can also turn on the full 'Options +Includes', which also allows them to run programs through the '#exec' element, as covered in mod_include's documentation. For that matter, you may not want to allow them to turn on even the more modest 'Options +IncludesNOEXEC'.)
To deal with this, you need to restrict what Options people can control, something like:
AllowOverride [...] Options=Indexes,[...] [...]
The Options= list is not just the options that people can turn on, it is also the options that you let them turn off, for example if they don't want symlinks to work at all in their subdirectory.
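Putting the pieces together, a restricted version of the original configuration might look something like the following sketch (the directory path and the exact list of allowed options are illustrative, not a recommendation):

```apache
<Directory "/var/www/homepages">
    Options SymLinksIfOwnerMatch
    # .htaccess files may now toggle only Indexes and SymLinksIfOwnerMatch,
    # not ExecCGI or Includes.
    AllowOverride FileInfo AuthConfig Limit Options=Indexes,SymLinksIfOwnerMatch
</Directory>
```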
(It's kind of a pity that Options is such a grab-bag assortment of things, but that's history for you.)
As an additional note, changing your 'Options=' settings after the fact may be awkward, because any .htaccess file with a now-disallowed Options setting will cause the entire subdirectory hierarchy to become inaccessible. This may bias you toward very conservative initial settings until people appeal, and then perhaps narrow exemptions afterward.
(Our web server is generously configured for historical reasons; it has been there for a long time and defaults were much looser in the past, so people made use of them. We would likely have a rather different setup if we were recreating the content and configuration today from scratch.)
Thinking about the merits of 'universal' URL structures
I am reasonably fond of my URLs here on Wandering Thoughts (although I've made a mistake or two in their design), but I have potentially made life more difficult for a future me in how I've designed them. The two difficulties I've given to a future self are that my URLs are bare pages, without any extension on the end of their name, and that displaying some important pages requires a query parameter.
The former is actually quite common out there on the Internet, as many people consider the .html extension (or .htm) to be ugly and unaesthetic. You can find lots and lots of things that leave off the .html, at this point perhaps more than leave it on. But it does have one drawback, which is that it makes it potentially harder to move your content around. If you use URLs that look like '/a/b/page', you need a web server environment that can serve them as text/html, either by running a server-side app (as I do with DWiki) or by suitable server configuration so that such extension-less files are served as text/html. Meanwhile, pretty much anything is going to serve a hierarchy of .html files correctly. In that sense, .html on the end is what I'll call a universal URL structure.
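As an illustration of the 'suitable server configuration' case, here is one way this might be done in Apache (a sketch only; the directory path is made up and there are other approaches):

```apache
<Directory "/srv/www/static">
    # Treat files whose names contain no dot (ie, no extension) as HTML.
    <FilesMatch "^[^.]+$">
        ForceType text/html
    </FilesMatch>
</Directory>
```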
What makes a URL structure universal is that in a pinch, pretty much any web server will do to serve a static version of your files. You don't need the ability to run things on the server and you don't need any power over the server configuration (and thus even if you have the power, you don't have to use it). Did your main web server explode? Well, you can quickly dump a static version of important pages on a secondary server somewhere, bring it up with minimal configuration work, and serve the same URLs. Whatever happens, the odds are good that you can find somewhere to host your content with the same URLs.
I think that right now there are only two such universal URL structures; plain pages with .html on the end, and directories (ie, structuring everything as '/a/b/page/'). The specific mechanisms of giving a directory an index page of some kind will vary, but probably most everything can actually do it.
On the other hand, at this point in the evolution of the web and the Internet in general it doesn't make sense to worry about this. Clever URLs without .html and so on are extremely common, so it seems very likely that you'll always be able to do this without too much work. Maybe one convenient source of publishing your pages won't support it, but you'll be able to find another, or easily search for configuration recipes on the web server of your choice for how to do it.
(For example, in doing some casual research for this entry I discovered that Github Pages lets you omit the .html on URLs for things that actually have them in the underlying repository. Github's server side handling of this automatically makes it all work. See this stackoverflow Q&A, and you can test it for yourself on your favorite Github Pages site, eg. I looked at Github Pages because I was thinking of it as an example of almost no effort hosting one might reach for in a pinch, and here it is already supporting what you'd need.)
PS: Having query parameters on your URLs will make your life harder here; you probably need either server side access to something on the order of Apache's mod_rewrite, or JavaScript in the relevant pages that will look for any query parameters and do magic things with them that will either provide the right page content or at least redirect to a better URL.
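For the server side case, a sketch of what the mod_rewrite approach might look like (an illustration of the idea, not a tested recipe; the QSD flag requires Apache 2.4):

```apache
RewriteEngine On
# If the request arrived with any query string at all...
RewriteCond %{QUERY_STRING} .
# ...redirect to the same path with the query string discarded (QSD).
RewriteRule .* %{REQUEST_URI} [R=302,QSD,L]
```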
A new drawback of using my custom-compiled Firefox
For years I've used a custom-compiled Firefox, with various personal modifications. Usually this works okay and I basically don't notice any difference between my version and the official version except that the branding is a bit different (and since I build from the development tree, I'm usually effectively a Firefox version or two ahead). However, I've now run into a new drawback, one that hadn't even crossed my radar until recently.
The short version is that I read a spate of news coverage of what compiler Firefox was using, starting in September with the news that Firefox was switching to clang with LTO but really picking up steam in December with some comparisons of how Firefox builds with GCC and clang compared (part 1, part 2), and then Fedora people first considered using clang (with LTO) themselves and then improved GCC so they could stick with it while still getting LTO and PGO (via Fedora Planet/People). All of this got me to try building my own Firefox with LTO (using clang), because once I paid attention the performance improvement of LTO looked kind of attractive.
I failed. I don't know if it's my set of packages, how my Fedora machines are set up, or that I don't actually know what I'm doing about configuring Firefox to build with LTO (Link-Time Optimization), but the short version is that all of my build attempts errored out and I ran out of energy to try to get it going; my personal Firefox builds are still plain non-LTO ones, which means that I'm missing out on some performance. I'm also missing out on additional performance since I would probably never try to get the PGO (Profile-Guided Optimization) bits working, as that seems even more complicated than LTO.
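For what it's worth, my understanding is that the LTO configuration goes in your mozconfig, along these lines (a sketch of the sort of thing I was attempting, not something I can vouch for, given that my builds failed):

```sh
# mozconfig fragment for a clang LTO build (illustrative)
export CC=clang
export CXX=clang++
ac_add_options --enable-lto
```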
In the long run hopefully I'll be able to build my own version of Firefox with LTO and most of this will be irrelevant (because I'll have most of the performance of official Fedora and Mozilla builds). I'm happy to do it with either GCC or clang, whichever is easier to get going (I'd say 'works better', but I'm honest; I'll pick whichever is less hassle for me). Even if I can't get LTO going, I'm not likely to give up on my custom-compiled Firefox because my patches are fairly important to me. But the whole LTO experience has certainly given me something to think about.
(Chrome is a much more extreme case for differences between official builds and your own work or even Chromium, because only the official Google Chrome versions come with Flash magically built in. There are things that still might need Flash today, although fewer than there used to be. Your Linux distribution's Chromium builds probably come with much less Google surveillance, though.)
Why I still have a custom-compiled Firefox (early 2019 edition)
For years, I've had a custom-compiled version of Firefox with various personal modifications, generally built from the current development tree. The number of modifications has fluctuated significantly over time; when I first wrote about my history of custom-compiling Firefox in this 2012 entry, it was probably my minimal point for modifications. These days my version has added significantly more changes from the stock version, in larger part due to Firefox's switch to WebExtensions. The somewhat unfortunate thing about this increase in changes is that having this custom Firefox is now more than a little bit important to get the Firefox user interface I really want. Abandoning my custom-compiled Firefox would be something that I'd definitely notice.
The largest set of changes are to deal with Firefox irritations and limitations. In the irritations department, I modify Firefox's current media autoplay code to turn off autoplay for a couple of things that Firefox doesn't otherwise allow you to stop (bare videos and videos with no audio track). In the limitations department, I add a couple of new WebExtensions APIs, which turns out to be surprisingly easy; one API provides 'view page in no style', and the other provides an API to open your genuine home page (as if you did Control-N), which is not otherwise possible in standard Firefox.
(A WebExt can open about:home, but that is actually not your genuine home page. My actual home page is a file: URL, which can't be opened by WebExt addons.)
My longest standing change is customizing how Firefox's remote access works, which these days also has me customizing the DBus remote control. The current development tree for Firefox seems to go back and forth about whether DBus should be used under X, but I cover my bases to be sure.
For extremely historical reasons I change the Delete key to act like the Backspace key in HTML context. This is probably surplus now, because several years ago I stopped swapping Backspace and Delete so now the key I reflexively hit to scroll the page up generates a Backspace, not Delete. Anyway, these days I often use Control-Space instead, because that works even in stock Firefox setups.
(Firefox controls this behavior through the browser.backspace_action setting, and I don't think it's exposed in the Preferences UI any more. I don't think I'm quite up to abandoning Backspace entirely just yet.)
I modify Firefox's standard branding because on the one hand, I don't want my builds to be called 'Nightly' in window titles and so on, and on the other hand I don't want them to use the official icons or otherwise actually be official builds. I also turn out to have some small changes to the default preferences, in the all.js file. I could probably do most or all of these in my own prefs.js; they linger in all.js due to historical inertia.
Finally, a few years ago I did a little about the mess that is Firefox's certificate manager UI by changing Firefox's name for 'private tokens' from 'Software Security Device' to the generally more accurate 'Locally Stored Token'. I'm not sure this genuinely improves things and perhaps I should drop this change just to be more standard.
(I used to manually modify my certdata.txt to remove various CAs that I didn't like, but these days I've concluded it's too much work and I use the stock one.)
Building Firefox from source, even from the development tree, does have some potentially useful side effects. For a start, custom built versions appear not to report telemetry to Mozilla, which I consider useful given Mozilla's ongoing issues. However it can also have some drawbacks (apart from those inherent in using the latest development tree), which is a matter for another entry.
As a side note, it's interesting to see that back in my 2012 entry, I'd switched from building from the development tree to building from the released source tree. I changed back to building from the development tree at some point, but I'm not sure exactly when I did that or why. Here in the Firefox Quantum era, my feeling is that using the development tree will be useful for a few years to come until the WebExts APIs get fully developed and stabilized (maybe we'll even get improvements to some irritating limitations).
(It's possible that I shifted to modifying and regularly updating the development tree because it made it easier to maintain my local changes. The drawback of modifying a release tree is that it only updates occasionally and the updates are large.)
You shouldn't allow Firefox to recommend things to you any more
The sad Firefox news of the time interval is Mozilla: Ad on Firefox’s new tab page was just another experiment, and also on Reddit. The important quote from the article is:
Some Firefox users yesterday started seeing an ad in the desktop version of the browser. It offers users a $20 Amazon gift card in return for booking your next hotel stay via Booking.com. We reached out to Mozilla, which confirmed the ad was a Firefox experiment and that no user data was being shared with its partners.
Mozilla of course claims that this was not an "ad"; to quote their spokesperson from the article:
“This snippet was an experiment to provide more value to Firefox users through offers provided by a partner,” a Mozilla spokesperson told VentureBeat. “It was not a paid placement or advertisement. [...]
This is horseshit, as the article notes. Regardless of whether Mozilla was getting paid for it, it was totally an ad, and that means that it is on the slippery slope towards all of the things that come with ads in general, including and especially ad-driven surveillance and data gathering. Mozilla even admitted that there was some degree of data gathering involved:
“About 25 percent of the U.S. audience who were using the latest edition of Firefox within the past five days were eligible to see it.”
In order to know who is in 'the US audience', Mozilla is collecting data on you and using it for ad targeting.
So, sadly, we've reached the point where you should go into your Firefox Preferences and disable every single thing that Mozilla would like to 'recommend' to you on your home page (or elsewhere). At the moment that is in the Home tab of Preferences, and is only 'Recommended by Pocket' and 'Snippets'; however, you should probably check back in every new version of Firefox to see if Mozilla has added anything new. This goes along with turning off Mozilla's ability to run Firefox studies and collect data from you and probably not running Firefox Nightly.
This may or may not prevent Mozilla from gathering data on you, but at least you've made your views clear to Mozilla and they can't honestly claim that they're acting innocently (as with SHIELD studies). They'll do so anyway, because that's how Mozilla is now, but we do what we can do. In fact, this specific issue is a manifestation of what I wrote in the aftermath of last year's explosion, where Mozilla promised to stop abusing the SHIELD system but that was mostly empty because they had other mechanisms available that would abuse people's trust in them. They have now demonstrated this by their use of the 'Snippets' system to push ads on people, and they're probably going to use every other technical mechanism that they have sooner or later.
The obvious end point is that Mozilla will resort to pushing this sort of thing as part of Firefox version updates, which means that you will have to inspect every new version carefully (at least all of the preferences) and perhaps stop upgrading or switch to custom builds of Firefox that have things stripped out, perhaps GNU IceCat.
(Possibly Debian will strip these things out of their version of Firefox should this come to pass. I wouldn't count on Ubuntu to do so. People on Windows or OS X are unfortunately on their own.)
PS: Chrome and Chromium are still probably worse from a privacy perspective, and they are certainly worse for addons safety, which you should definitely be worried about if you use addons at all.
Why our Grafana URLs always require HTTP Basic Authentication
As part of our new metrics and monitoring setup, we have a Grafana server for our dashboards that sits behind an Apache reverse proxy. The Apache server also acts as a reverse proxy for several other things, all of which live behind the same website under different URLs.
People here would like to be able to directly access our Grafana dashboards from the outside world without having to bring up a VPN or the like. We're not comfortable with exposing Grafana or our dashboards to the unrestricted Internet, so that external access needs to be limited and authenticated. As usual, we've used our standard approach of Apache HTTP Basic Authentication, restricting the list of users to system staff.
Now, having to authenticate all of the time to see dashboards is annoying, so it would be nice to offer basic anonymous access to Grafana for people who are on our inside networks (and Grafana itself supports anonymous access). Apache can support this in combination with HTTP Basic Authentication; you just use a RequireAny block. Here's an example:
<Location ...>
    AuthType Basic
    [...]
    <RequireAny>
        Require ip 127.0.0.0/8
        Require ip 188.8.131.52/24
        [...]
        Require valid-user
    </RequireAny>
</Location>
People outside the listed networks will be forced to use Basic Auth; people on them get anonymous access.
It's also useful for system staff to have accounts in Grafana itself, because having a Grafana account means you can build your own dashboards and maybe even edit our existing ones (or share your dashboards with other staff members and edit them and so on). Grafana supports a number of ways of doing this, including local in-Grafana accounts with separate passwords, LDAP authentication, and HTTP Basic Authentication. For obvious reasons, we don't want people to have to obtain and manage separate Grafana accounts (it would be a pain in the rear for everyone). Since we're already using HTTP Basic Authentication to control some access to Grafana, reusing that for Grafana accounts makes a lot of sense; for instance, if you're accessing the server from the outside, it means that you don't have to first authenticate to Apache and then log in to Grafana if you want non-anonymous access.
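On the Grafana side, this combination maps to settings along the following lines in grafana.ini (a sketch of the relevant sections; the exact values are illustrative):

```ini
[auth.anonymous]
# Anonymous viewing for people Apache lets through without Basic Auth.
enabled = true
org_role = Viewer

[auth.basic]
# Accept the HTTP Basic Authentication credentials Apache passes through.
enabled = true
```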
But this hypothetical setup leaves us with a problem: how do you log in to Grafana when you're on our inside networks, where you won't be required to use HTTP Basic Authentication? It would be a terrible experience if you could only use your Grafana account if you weren't at work.
Before I set the server up and started experimenting, what I was hoping was that HTTP Basic Authentication was treated somewhat like cookies, in that once a browser was challenged to authenticate, it would then send the relevant Authorization header on all further accesses to the entire website. There are other areas of our web server that always require HTTP Basic Authentication, even from our internal networks, so if Basic Auth worked like cookies, you could go to one of them to force Basic Auth on, then go to a Grafana URL, and the browser would automatically send an Authorization header, and Apache would pass it to Grafana and Grafana would have you logged in to your account.
Unfortunately browsers do not treat HTTP Basic Authentication this way, which is not really surprising since RFC 7617 recommends a different approach in section 2.2. What RFC 7617 recommends and what I believe browsers do is that HTTP Basic Authentication is scoped to a URL path on the server. Browsers will only preemptively send the Authorization header to things in the same directory or under it; they won't send it to other, unrelated areas of the site.
(If a browser gets a '401 Unauthorized' reply that asks for a realm that the browser knows the authorization for, it will automatically retry with that authorization. But then you're requiring HTTP Basic Authentication in general.)
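As a side note, the Authorization header itself is nothing magic; per RFC 7617 it is just 'user:password' base64-encoded. A quick Python sketch:

```python
import base64

def basic_auth_header(user, password):
    """Build the value of an HTTP Basic Authentication Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return "Basic " + token

# The classic example credentials from the RFC:
print(basic_auth_header("Aladdin", "open sesame"))
# → Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
```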
The simplest, least hacky way out of this for us is to give up on the idea of anonymous access to Grafana, so that's what we've done. And that is why access to our Grafana URLs always requires HTTP Basic Authentication, however inconvenient and annoying that may be. We have to always require HTTP Basic Authentication so that people can have and readily use frictionless Grafana accounts.
(As I mentioned in my entry on why we like Apache HTTP Basic Authentication, we're not willing to trust the authentication of requests from the Internet to Grafana itself. There are too many things that could go wrong even if Grafana was using, say, a LDAP backend. Fundamentally Grafana is not security software; it's written by people for whom security and authentication is secondary to dashboards and graphs.)
Sidebar: The theoretical hack around this
In theory, if browsers behave as RFC 7617 suggests, we can get around this with a hack. The most straightforward way is to have a web page at the root of the web server that we've specifically configured to require HTTP Basic Authentication; call this page /login.html. When you visit this page and get challenged, in theory your browser will decide that the scope of the authentication is the entire web server and thus send the Authorization header on all further requests to the server, including to Grafana URLs.
However I'm not sure this actually works in all common browsers (I haven't tested it) and it feels like a fragile and hard to explain thing. 'Go to this unrelated URL to log in to your Grafana account' just sounds wrong. 'You always have to use HTTP Basic Authentication' is at least a lot simpler.