Wandering Thoughts

2017-10-07

I'm trying out smooth scrolling in my browsers

When I started out using web browsers, there was no such thing as browser smooth scrolling; graphics performance was sufficiently poor that the idea would have been absurd. When you tapped the spacebar, the browser page jumped, and that was that. When graphics sped up and browsers started taking advantage of it, not only was I very used to my jump scrolling but I was running on Linux (and on old hardware), so the smooth scrolling I got did not exactly feel very nice to me. The upshot was that I immediately turned it off in Firefox, and ever since then I've carried that forward (and explicitly turned it off in Chrome as well, and so on).
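
(For what it's worth, the actual switches involved are just preferences. As far as I know, in Firefox this is the general.smoothScroll setting in about:config, and Chrome has a 'Smooth Scrolling' entry in chrome://flags on at least some platforms; these names are from memory, so treat them as an assumption rather than gospel. A user.js sketch of my old 'turn it off' setup would be just:

    // Disable Firefox smooth scrolling; flip to true to turn it back on.
    user_pref("general.smoothScroll", false);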

I've recently reversed that, switching over to letting my browsers use smooth scrolling. Although my reversal here was sudden, it's the culmination of a number of things. The first is that I made some font setup changes that produced a cascade of appearance changes in my browsers, so I was fiddling with my browser setup anyway. The font change itself was part of reconsidering long term habits that maybe weren't actually the right choice, and on top of that I read Pavel Fatin's Scrolling with pleasure, especially this bit:

What is smooth scrolling good for — isn’t it just “bells and whistles”? Nothing of the kind! Human visual system is adapted to deal with real-world objects, which are moving continuously. Abrupt image transitions place a burden on the visual system, and increase user’s cognitive load (because it takes additional time and effort to consciously match the before and after pictures).

A number of studies have demonstrated measurable benefits from smooth scrolling, [...]

On the one hand, I didn't feel like I had additional cognitive load because of my jump scrolling; if anything, it felt like jump scroll was easier than smooth scroll. On the other hand, people (myself included) are just terrible at introspection; my feelings could be completely wrong (and probably were), especially since I was so acclimatized to jump scrolling and smooth scrolling was new and strange.

Finally, in practice I've been doing 'tap the spacebar' whole page scrolling less and less for some time. Increasingly I scroll only a little bit at a time anyway, using a scroll wheel or a gesture. That made the change to smooth scrolling less important and also suggested to me that maybe there was something to the idea of a more continuous, less jumpy scroll, since I seemed to prefer something like it.

At this point I've been using browser smooth scrolling for more than a month. I'm not sure if it's a huge change, and it certainly doesn't feel as big a change as my new fonts. In some quick experiments, it's clear that web pages scroll more slowly with smooth scrolling turned on, but at the same time that's also clearly deliberate; jump scroll is basically instant, while smooth scrolling has to use some time to actually be smooth. Switching to jump scrolling for the test felt disorienting and made it hard to keep track of where things were on the page, so at the least I've become used to working with smooth scrolling and I've fallen out of practice with jump scrolling on web pages.

On the whole I don't regret the change so far and I can even believe that it's quietly good for me. I expect that I'll stick with it.

(I admit that one reason I was willing to make the switch was my expectation that sooner or later both Firefox and Chrome were just going to take away the option of jump scrolling. Even if I wind up in the same place in the end, I'd rather jump early than be pushed late.)

TryingSmoothScrolling written at 01:05:44

2017-10-04

My new worry about Firefox 56 and the addons that I care about

In light of Firefox 57 and the state of old addons, where my old addons don't work in Firefox Nightly (or didn't half a month ago), my plan is to use Firefox 56 for as long as possible. By 'as long as possible', I mean well beyond when Firefox 56 is officially supported; I hope to keep using it until it actively breaks or is too alarmingly insecure for me. Using an actual released version of Firefox instead of a development version is an odd feeling, but now that it's out, I'm seeing a trend in my addons that is giving me a new thing to worry about. Namely, a number of them are switching over to being WebExtensions addons as preparation for Firefox 57.

This switch would be nothing to worry about if Firefox's WebExtensions API were complete (and Firefox 56 had the complete API), but one of the problems with this whole switch is exactly that Firefox's WE API is far from complete. There are already any number of 'we need feature X' bug reports from various extensions, and I'm sure that more are coming as people attempt to migrate more things to WebExtensions so that they'll survive the transition to Firefox 57. Unless things go terribly wrong, this means that future versions of Firefox are going to pick up more and more WebExtensions APIs that aren't in Firefox 56, and addon authors are going to start using those APIs.

In short, it seems quite likely that sticking with Firefox 56 is also going to mean sticking with older versions of addons. Possibly I'll have to manually curate and freeze my addons to pre-WebExtensions versions in order to get fully working addons (especially ones that don't leak memory, which is once again a problem with my Firefox setup, probably because recent updates to various addons have problems).

(Fortunately Mozilla generally or always makes older versions of addons installable on addons.mozilla.org if you poke the right things. But I'm not looking forward to bisecting addon versions to find ones that work right and don't leak memory too fast.)

The optimistic view of the current situation with Firefox addons is that the WebExtensions versions of popular addons are basically beta at this point because of relatively low use. With Firefox 56 released, people moving more aggressively to be ready for Firefox 57, and the (much) greater use of WebExtensions addons, the quality of WE addons will go up fairly rapidly even with the current Firefox WebExtensions APIs. This could give me stable and hopefully non-leaking addons before people move on and addons become incompatible with Firefox 56.

Firefox56AddonWorry written at 02:03:06

2017-09-15

Firefox 57 and the state of old pre-WebExtensions addons

Back in June I wrote an entry about the current state of pre-WebExtensions addons and what I thought was the likely future of them. Based on Mozilla's wiki page on this, which said that legacy extensions could still be loaded on Nightly builds with a preference, I said I expected that they would keep working well into the future. As far as I can currently tell, I was wrong about that and my optimistic belief was completely off.

As of right now, given the ongoing state of Firefox Nightly, it appears that Mozilla has abandoned practical compatibility with many old addons. With the preference set, Mozilla may allow Nightly to load them, but Mozilla won't necessarily keep them working; in fact, available evidence suggests that Mozilla has been cheerfully breaking old extension APIs left and right. The only old APIs that we can probably count on continuing to work are the APIs that Mozilla needs for their own use (the 'signed by Mozilla internally' status of legacy extensions in the wiki page), and it's clear that these APIs are not sufficient for many addons.

In particular, all of my addons remain in various states between broken and completely non-functional on Nightly. This situation has not been improving since the preference change landed; if anything, things have been getting worse in more recent Nightly versions. Since this has persisted for weeks and weeks, I have to assume that Mozilla no longer cares about general legacy extensions on Nightly and they're either landing code improvements that have no support for them or are actively removing the code that supports legacy extension APIs. Or both. I can't blame Mozilla for this, since they've been saying for years now that they wanted to get rid of the old APIs and that the old APIs were holding back Firefox development.

One implication of this is that Mozilla is now fairly strongly committed to their Firefox 57 extension plans, come what may. With legacy extensions broken in practice, Mozilla cannot simply re-flip the preference the other way to back out of the transition and return legacy extensions to operation. Nor do I think they have time to fix the code should they decide they want to. If I'm reading the Firefox release calendar correctly, we're about one or two weeks from Firefox Nightly transmuting into the Firefox 57 beta version, and then six weeks more until Firefox 57 is actually released.

The personal implication for me is that I've now reached the last Nightly version I can use, although it happened perhaps a month later than I thought it would way back when. Now that I look at the dates of things, I think my current Firefox 'Nightly' versions are actually before Firefox 56 branched off, so I should probably switch over to Firefox 56 and freeze on it. That way I'll at least get security fixes until November or so.

Firefox57OldAddonsState written at 01:54:58

2017-09-08

My view of the problem with Extended Validation TLS certificates

In a conversation on Twitter, I said:

EV isn't exactly a scam, but it is trying to (profitably) solve a problem that we don't actually know how to solve (and have failed at).

The main problem that plain old TLS certificates solve is making sure that you're talking to the real facebook.com instead of an imposter or a man in the middle. This is why they've been rebranded as 'Domain Validation (DV)' certificates; they validate the domain. DV certificates do this fairly well and fairly successfully; while there are ways to attack them, it's increasingly expensive and risky, and for various reasons the number of people hitting warnings and overriding them is probably going down.

The problem that Extended Validation TLS certificates are attempting to solve is that domain validation is not really sufficient by itself. You usually don't really care that you're talking to google.ca or amazon.com, you care that you're talking to Google or Amazon. In general people care about who (or what) they're connecting to, not what domain name it uses today for some reason.

(Mere domain validation also has issues like IDN homographs and domains called yourfacebooklogin.com.)

Unfortunately for EV certificates, this is a hard problem with multiple issues and we don't know how to solve it. In fact our entire history of trying to inform or teach people about web site security has been an abject failure. To the extent that we've had any meaningful success at all, it's primarily come about not through presenting information to people but by having the browser take away foot-guns and be more militant about not letting you do things.

There is no evidence that EV certificates as currently implemented in browsers do anything effective to solve this problem, and as Troy Hunt has written up there's significant anecdotal evidence that they do nothing at all. Nor are there any good ideas or proposals on the horizon to improve the situation so that EV certificates even come close to tackling the problem in the context where it matters.

Right now and for the foreseeable future, what EV certificates deliver is math, not security. As math they provide you with what they claim to provide you, which makes them not exactly a scam but also not exactly useful. I'm sure the CAs would like for EV certificates to solve the problem they're nominally aimed at, but in the meantime the CAs are happy to take your money in exchange for some hand-curated bits.
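
(To make 'hand-curated bits' a little more concrete: most of what an EV certificate adds is extra subject fields, such as the organization's legal name and jurisdiction of incorporation. A quick Python sketch along the following lines will print whatever subject fields a site's certificate carries; the hostname is only a placeholder and the code is just my illustration, not anything browsers or CAs actually do with EV information.)

    import socket
    import ssl

    def print_cert_subject(host, port=443):
        # Do a normal verified TLS handshake and fetch the peer certificate
        # as a dict. Its 'subject' entry is a tuple of RDNs, each of which
        # is a tuple of (field name, value) pairs.
        context = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        for rdn in cert["subject"]:
            for name, value in rdn:
                print(name, "=", value)

    # A DV certificate usually shows little beyond commonName; an EV one
    # also lists organizationName, jurisdiction fields, and so on.
    print_cert_subject("www.example.com")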

Sidebar: Some general issues with what EV certificates are trying to do

First, as far as we know people don't think of who they're talking to in any conveniently legible form, like corporate ownership. We know what we mean by 'Facebook', 'Google', 'Amazon', and so on, but it can be very hard to map this to specific concrete things in the world. See Troy Hunt's saga for one example of translating a theoretically simple 'who this is' concept into something that was more or less legible to a certificate authority and came out more or less right and more or less understandable.

Second, we don't know how to present our theoretical 'who this site is' information to people in a way that they will actually notice, understand, and be able to use. Previous and current attempts to present this information in the browser in a form that people even notice, much less understand, have been abject failures.

Finally, we don't even know how to get people to consider this issue in the first place. You see, I cheated in my description of the problem, because in reality people don't think about who they're connecting to most of the time. If it looks like Facebook and your browser isn't complaining, well, it probably is Facebook and you'll proceed. This is how people enter their Facebook credentials into 'yourfacebooklogin.com' (and we can't blame them for doing so, for any number of reasons).

(The final issue is a variant of the fundamental email phish problem.)

EVCertificateProblem written at 02:49:38

2017-08-25

The probable coming explosion of Firefox 57

Let's start with what I tweeted:

I don't think Mozilla understands how angry plenty of people are going to be when Firefox 57 is released and breaks many of their addons.

If you follow Firefox news, you already know what this is about; Firefox 57 is when Mozilla is going to turn off all old extensions, ones that are not WebExtensions (this is apparently different from being ready for Firefox Electrolysis, although all WebExtensions are okay with Electrolysis, or maybe there is something confusing going on).

Firefox Nightly builds recently switched over to disabling old extensions. Nominally this is just flipping the extensions.legacy.enabled preference from true to false and so I could flip it back, as I expected earlier this year. In practice all of my extensions immediately fell over, along with Nightly itself (it now hangs on startup if old extensions are enabled). Even the extensions that I thought were ready don't work now. Nightly is as Nightly does, and it was kind of broken for me in various ways even before this, but I consider this a pretty bad omen in general for the state of WebExtensions. According to Mozilla's release calendar we are less than three months away from the release of Firefox 57, and nothing works. Nor was there any sort of attempt at a graceful transition when I fired up the first version of Nightly with legacy addons turned off. I don't think I even got a warning.

(I don't know if this is because Firefox has no support for a graceful transition or because there are no WebExtensions versions of any of my addons, not even, say, NoScript, but neither option is a good sign.)

I'm pretty sure that Firefox has a great many current users who pay absolutely no attention to Firefox stuff due to complete lack of interest. These people are not up on news about WebExtensions and other such things, so they don't know about this change coming in Firefox 57. A certain number of these people use extensions extensively enough that their customized Firefox is effectively a different browser. As far as I can tell, these people are going to fire up their auto-updating Firefox install some time after November 13th and experience what is to them a completely different and much more stripped-down browser, with what seems to be very little warning.

(Fedora is on Firefox 55 right now, and several of the extensions I have enabled in my testing Firefox aren't even Electrolysis compatible, much less are WebExtensions. I'm getting no warning nags about this on browser startup or any other time. Perhaps this is different on other platforms, but I suspect not.)

When these users get their browser yanked out from underneath them, they are going to be angry. And rightfully so; yanking people's browser out from underneath them is an extremely rude thing to do. Based on their behavior so far, I don't think Mozilla really gets this. The other interpretation is that Mozilla doesn't care if some number of people wind up angry, which I think is also a bad mistake. Mozilla seems to be so intent on going full speed ahead to WebExtensions and damn the torpedoes that they're ignoring a violent derailment coming up ahead of them.

By the way, one of the problems here is that the Firefox version of the WebExtensions API is incomplete in that any number of current extensions can't be (fully) implemented in it. You can track some of the depressing state of affairs by looking at arewewebextensionsyet. Remember, we're theoretically less than three months away from permanently turning off many of these extensions in their current state.

Firefox57ComingExplosion written at 21:59:11

2017-08-14

Chrome extensions are becoming a reason not to use Chrome

A couple of weeks ago, a reasonably popular Chrome extension was stolen and infested with adware. If you're familiar with Google, you know what happened next: nothing. As people sent up frantic smoke signals and attempted to recover or at least de-adware a popular extension, Google was its usual black hole self. Eventually, sufficient publicity appears to have gotten Google to do something, and they even did the right thing.

In the process of reading about this, I discovered a couple of things. First, this is apparently a reasonably common happening, either through attacks or just through buying a sufficiently popular extension and then quietly loading it down with adware and counting on Google to be Google. Second and more alarming, this has happened to an extension that I actually had installed, although I didn't have it enabled any more. Long ago, I installed 'User-Agent Switcher for Google Chrome' because it seemed vaguely like something I'd want to have around. Now, well, it's apparently a compromised extension. One that works quite hard to hide its actions, no less. I've said bad things about how Chrome extensions mutate themselves to add adware before, but at least back then this was being done by the extension authors themselves and they seemed to have relatively modest commercial goals. The extension compromises that happen now are active malware, and according to the news about the User-Agent switcher extension, you can't even file any sort of report to get Google's attention.

I'm not going to blame Google too much for making Chrome so popular that its extensions have become an attractive target for malware attackers. I am going to blame Google for everything else they do and don't do to contribute to the problem; the silent, forced extension auto-updates, the cultural view that a certain amount of malware is okay, the clearly ineffective review process for extensions (if there is any at all), and being a black hole when really bad stuff starts to happen. Google runs Chrome extensions with the same care and indifference that they handle abuse on everything else they do.

These days I only use Chrome to run Javascript, because it does that better than Firefox on Linux. But I do use some extensions there, and they're apparently all potential time bombs. I'm sure the author of uBlock Origin is taking precautions, but are they taking enough of them? There are likely plenty of attackers that would love to gain control over such a popular and powerful extension.

(The smarter attackers will target less visible extensions that still have a decent installed base. A uBlock Origin compromise would be so big a thing that it probably would get Google to act reasonably promptly. As the example of User-Agent Switcher shows, if you compromise a less-popular thing you can apparently stay active for quite some time.)

ChromeExtensionsDanger written at 00:35:57

2017-08-09

How encryption without authentication would still be useful on the web

In HTTPS is a legacy protocol, I talked about how we are stuck with encryption being tightly coupled to web site authentication then mentioned in an aside that they could be split apart. In a comment, Alexy asked a good question:

How could encryption be useful at all without authentication? Without authentication, any MITM (i.e. ISP) could easily pretend to be other side and happily encrypt the connection. And we would still get our ISP-induced ads and tracking.

The limitation that Alexy mentions is absolutely true; an encryption-only connection can still be MITMd, at least some of the time. Having encryption without authentication for web traffic is not about absolute security; instead it's about making things harder for the attacker and thus reducing how many attackers there are. Encrypting web traffic would do this in at least three ways.

First, it takes passive eavesdropping completely off the table; it just doesn't work any more. This matters a fair bit because passive eavesdropping is easy to deploy and hard to detect. If you force attackers (including ISPs) to become active attackers instead of passive listeners, you make their work much harder and more chancy in various ways. All by itself I think this makes unauthenticated encryption very useful, since passive eavesdropping is becoming increasingly pervasive (partly as it becomes less expensive).

(One of the less obvious advantages of passive eavesdropping is that you don't have to handle the full traffic volume that's going past. Sure, it would be nice to have a complete picture, but generally if you drop some amount of the traffic because your eavesdropping machine is too overloaded it's not a catastrophe. With active interception, at least some part of your system must be able to handle the full traffic volume or increasingly bad things start to happen. If you drop some traffic, that's generally traffic that's not getting through, and people notice that and get upset.)

Second, using encryption significantly raises the monetary costs of active MITM interception, especially large-scale interception. Terminating and initiating encrypted sessions takes a lot more resources (CPU, memory, etc) than does fiddling some bits and bytes in a cleartext stream as it flows through you. Anyone who wants to do this at an ISP's network speed and scale is going to need much beefier and more expensive hardware than their current HTTP interception boxes, which changes the cost to benefit calculations. It's also probably going to make latency worse and thus to slow down page loads and so on, which people care about.

Finally, in many situations it's probably going to increase the attacker's risks from active MITM interception and reduce how much they get from it. As an example, consider the case of Ted Unangst's site and me. I haven't accepted his new root CA, so in theory my HTTPS connection to his site is completely open to a MITM attack. In practice my browser has a memorized exception for his current certificate and if it saw a new certificate, it would warn me and I'd have to approve the new one. In a hypothetical world with encryption-only HTTP, there are any number of things that browsers, web sites, and the protocol could do to make MITM interception far more obvious and annoying to users (especially if browsers are willing to stick to their current hardline attitudes). This doesn't necessarily stop ISPs, but it does draw attention and creates irritated customers (and raises the ISP's support costs). And of course it's very inconvenient to attackers that want to be covert; as with HTTPS interception today, it would be fairly likely to expose you and to burn whatever means you used to mount the attack.

None of this stops narrowly targeted MITM interception, whether by your ISP or a sufficiently determined and well funded attacker. Instead, unauthenticated encryption's benefit is that it goes a long way towards crippling broad snooping on web traffic (and broad alterations to it), whether by your ISP or by anyone else. Such broad snooping would still be technically possible, but encryption would raise the costs in money, irritated customers, and bad press to a sufficient degree that it would cut a great deal of this activity off in practice.
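
(For concreteness, here is a rough Python sketch of what encryption without authentication looks like at the connection level: it negotiates TLS but deliberately skips certificate verification, so a passive eavesdropper sees only ciphertext while an active interceptor could still substitute its own key, which is exactly the limitation discussed above. The hostname is only a placeholder and this is just my sketch of the idea, not a real protocol proposal.)

    import socket
    import ssl

    context = ssl.create_default_context()
    context.check_hostname = False        # don't insist the name matches
    context.verify_mode = ssl.CERT_NONE   # don't verify the certificate at all

    # The traffic is encrypted, but we have no proof of who is on the other end.
    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print("negotiated", tls.version(), "using", tls.cipher()[0])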

EncryptionWithHTTPBenefit written at 01:36:21

2017-08-04

The problem with distributed authentication systems for big sites

In the comments on my entry on 'sign in with your Google/Facebook' authentication, people wished for distributed cross-site web authentication systems (of which there have been some number) but then lamented that nobody significant adopted any of them (as Anton Eliasson put it). As it happens, I think that there are good reasons for this disinterest by big sites beyond the obvious business ones.

The simple version is that when you allow your users to authenticate themselves using another site, you put part of their experience with you in the hands of someone else, along with part of their account security. If you are a big site, some noticeable fraction of your users will choose badly; they will use sites that are not reliable, that do not stay online forever, or that are not secure. When bad things happen, those users will be unable to use your site (or at least have a much harder time of it), or their accounts will get hacked and exploited. You will inevitably have to spend some amount of engineering resources building systems to deal with this, and then some amount of user support resources on dealing with people who run into these problems and can't fix them themselves. On top of that, a fair number of these users will probably blame you for their problems, even though they are not your fault in one sense.

(In another sense the problems are your fault. It is up to you to design your website so that people can use it, and if people find themselves locked out, that is your fault for allowing it to happen.)

When you run your own authentication system and require your users to use it, you own the full experience. You're entirely responsible for making authentication work well, but in exchange you're self-contained; you don't have to hope that anyone else is up and working right. When you are big and heavily engineered and almost everyone else is smaller and less well engineered, you may rationally feel that you're going to do a much better job on average than they are, with less downtime and problems. And if there are problems, you can troubleshoot the entire system yourself; you have end to end ownership of all components.

(As a corollary, if you're in this situation the very last thing you want to see is a bunch of your users all relying on the same outside authentication provider. The more of your users rely on a single outside provider, the more of an impact there will be if that provider stops working right. If a quarter of your users decide to authenticate with Yahoo and one day Yahoo is sold and turns off their authentication, you have a big problem. Unfortunately it's pretty likely that users will concentrate this way.)

Small sites that rely on big sites for user authentication face many of the same issues, but both the odds and the tradeoffs are different. It's pretty unlikely that Google's or Facebook's authentication systems will be broadly unavailable or malfunctioning, for instance. You can also cover most of your potential users by supporting only a few carefully chosen outside authentication sources, instead of potentially having a huge number of small ones (with various degrees of bugs or differences in how they view, eg, the OAuth spec).

(I'm talking here strictly about authentication, not about accounts and identity. Any serious site must own its own account system, because you cannot count on other people to keep spammers and abusers out of your system. To put it one way, Facebook is not going to close someone's account because they were nasty on your site.)

Sidebar: The business reasons matter too

Let's not pretend that business considerations don't matter, because they do for any commercial organization. To put it one way, when you allow authentication to be outsourced, you don't entirely 'own' all of your customers or users. Instead some of them are basically loaned to you from their authentication sources, and those sources might some day decide to end that loan. In an extreme case, all you have for those users is 'user X on service Y', and once service Y cuts you off you have no way of re-establishing a connection with those people so they can come back to your service.

DistributedWebAuthProblem written at 02:04:08

2017-08-02

Why I'll never pick the 'sign in with a Facebook or Google account' option

Recently I read Mike Hearn's Building account systems (via), where he strongly recommends that people not build an account system themselves but instead outsource it to Facebook and Google via OAuth. When I read that, I winced; not just at the idea of having 'sign in with ...' as my only option, but also because Mike Hearn's arguments here are actually solid ones. As he covers, it is a lot of hard work to build a good web account system and you will probably never be able to do it as well as Google and Facebook can.

I have any number of moderate problems with big-site OAuth, like how it gives Google and Facebook more information on my activities than I like (information they don't normally get). But this is not the core reason why I assiduously avoid 'sign in with ...' options. The core reason is that when I sign in with OAuth, my Facebook or Google account becomes a central point of losing access to many things. If Google or Facebook decide that they don't like me any more and suspend my account (or lock me out of it), I've probably lost access to everything authenticated through OAuth using that account. If I had to use 'sign in with ...', that could be any number of things that I care very much about (for example), far more than I care about my Google or Facebook account.

Facebook is far more dangerous here. Google generally doesn't seem to care if you have multiple accounts, while Facebook wants you to have only one and may suspend it if they decide that you're using a fake name. It's nominally possible to make a separate Google account for each site that demands you sign in with Google; it's not with Facebook as far as I know, at least within their Terms of Service.

(The other issue, as seen in an interaction with LinkedIn, is that using these sites as OAuth sources requires agreeing to their TOS as well as the TOS for the site you really care about. But then, everyone ignores TOSes anyway because if we didn't we'd all go mad.)

I have never personally been locked out of my Google or Facebook account (although I did worry about G+ stuff before the Google Reader shutdown). However, on a global scale it happens to plenty of people (anguished stories about it show up periodically in the usual circles), and I actually know someone who is currently locked out of their GMail account and is rather unhappy about it. As a result, I very much want to separate out all of my online accounts and I basically insist on it. So for entirely selfish reasons I certainly hope that web sites don't listen to Mike Hearn here.

NoOAuthLoginsForMe written at 01:34:07

2017-07-31

Modern web page design and superstition

In yesterday's entry I said some deeply cynical things about people who design web pages with permanently present brand headers and sharing-links footers (or just permanent brand-related footers in general). I will condense these cynical things to the following statement:

Your page design, complete with its intrusive elements and all, shows what you really care about.

As the logic goes, if you actually cared about the people reading your content, you wouldn't have constantly present, distracting impediments to their reading. You wouldn't have things that got in the way or obscured parts of the text. If you do have articles that are actually overrun with branding and sharing links and so on, the conclusion to draw is the same as when a page of writing on a 'news' site is overrun by a clutter of ads. In both cases, the content is simply bait and the real reason the page exists is the ads or the branding.

Although it might be hard to believe, I'm actually kind of an optimist. So my optimist side says that while this cynical view of modern page design is plausible, I don't think it's universally true. Instead I think that what is going on some of the time is a combination of blindness and superstition. Or to put it concretely, I believe that most people putting together page design don't do it from first principles; instead, much as with programming, most people copy significant design elements from whatever web page design trend is currently the big, common thing.

(This includes both actual web designers and people who are just putting together some web pages. The latter are much more likely to just copy common design elements for obvious reasons.)

Obviously you don't copy design elements that you have no use for, but most people do have an interest in social media sharing and have some sort of organization or web site identity even if it's not a 'brand' as such (just 'this is the website of <X>' is enough, really). Then we have the massive design push in this direction from big, popular content farm sites that are doing this for entirely cynical reasons, like Medium. You see a lot of big web sites doing this, it's at least more or less applicable to you (and may help boost your writing and site, and who doesn't want that), so you replicate these permanent headers and footers in your site and your designs because it's become just how sites are done. In some cases, it may be made easier due to things like canned design templates that either let you easily turn these on or simply come with them already built in (no doubt partly because that's what a lot of people ask for). Neither you nor other people involved in this ever sit down to think about whether it's a good idea; it's enough that it's a popular design trend that has become pretty much 'how pages should look on modern sites'.

(I'm sure there's a spectrum running between the two extremes. I do drop by some websites where I suspect that social media shares are part of what keeps the site going but I also believe that the person running the site is genuinely well-intentioned.)

I consider this the optimistic take because it means I don't have to believe a fairly large number of people are deeply cynical and are primarily writing interesting articles and operating websites in order to drive branding. Instead they do care about what they appear to care about, and are just more or less reflexively copying from similar sites, perhaps encouraged by positive results for things like social media sharing.

PageDesignAndSuperstition written at 01:33:54
