Wandering Thoughts archives

2016-04-30

A story of the gradual evolution of network speeds without me noticing

A long time ago I had a 28.8Kbps dialup connection running PPP (it lasted a surprisingly long time). A couple of times I really needed to run a graphical X program from work while I was at home, so I did 'ssh -X work' and then started whatever program it was. And waited. And waited. Starting and using an X program that is moving X protocol traffic over a 28.8K link gives you a lot of time to watch the details of how X applications paint their windows, and it teaches you patience. It's possible, but it's something you only really do in desperation.

(I believe one of the times I did this was when I really needed to dig some detail out of SGI's graphical bug reporting and support tool while I was at home. This was back in the days before all of this was done through the web.)

Eventually I stepped up to DSL (around this time), although not particularly fast DSL; I generally got 5000 Kbps down and 800 Kbps up. I experimented with doing X over my DSL link a few times and it certainly worked, but it still wasn't really great. Simple text stuff like xterm (with old-school server-side XLFD fonts) did okay, but trying to run something graphical like Firefox was still painful and basically pointless. At the time I first got my DSL service I think that 5/.8 rate was pretty close to the best you could get around here, but of course that changed and better and better speeds became possible. Much as I had stuck with my dialup, I didn't bother looking into upgrading for a very long time. More speed never felt like it would make much of a difference to my Internet experience, so I took the lazy approach.

Recently various things pushed me over the edge and I upgraded my DSL service to what is about 15/7.5 Mbps. I certainly noticed that this made a difference for things like pushing pictures up to my Flickr, but sure, that was kind of expected with about ten times as much upstream bandwidth. Otherwise I didn't feel like it was any particular sea change in my home Internet experience.

Today I updated my VMWare Workstation install and things went rather badly. Since I'd cleverly started doing all of this relatively late in the day, I wound up going home before VMWare had a chance to reply to the bug report I filed about this. When I got home, I found a reply from VMWare support that, among other things, pointed me to this workaround. I applied the workaround, but how to test it? Well, the obvious answer was to try firing up VMWare Workstation over my DSL link. I didn't expect this to go very well for the obvious reasons; VMWare Workstation is definitely a fairly graphical program, not something simple (in X terms) like xterm.

Much to my surprise, VMWare Workstation started quite snappily. In fact, it started so fast and seemed so responsive that I decided to try a crazy experiment: I actually booted up one of the virtual machines. Since this requires rendering the machine's console (more or less embedded video) I expected it to be really slow, but even this went pretty well.

Bit by bit and without me noticing, my home Internet connection had become capable enough to run even reasonably graphically demanding X programs. The possibility of this had never even crossed my mind when I considered a speed upgrade or got my 15/7.5 DSL speed upgrade; I just 'knew' that my DSL link would be too slow to be really viable for X applications. I didn't retest my assumptions when my line speed went up, and if it hadn't been for this incident going exactly like it did I might not have discovered this sea change for years (if ever, since when you know things are slow you generally don't even bother trying them).

There's an obvious general moral here, of course. There are probably other things I'm just assuming are too slow or too infeasible or whatever that are no longer this way. Assumptions may deserve to be questioned and re-tested periodically, especially if they're assumptions that are blocking you from nice things. But I'm not going to be hard on myself here, because assumptions are hard to see. When you just know something, you are naturally a fish in water. And if you question too many assumptions, you can spend all of your time verifying that various sorts of water are still various sorts of wet and never get anything useful done.

(You'll also be frustrating yourself. Spending more than a small bit of your time verifying that water is still wet is not usually all that fun.)

HomeInternetSpeedChanges written at 02:18:23; Add Comment

2016-04-26

How 'there are no technical solutions to social problems' is wrong

One of the things that you will hear echoing around the Internet is the saying that there are no technical solutions to social problems. This is sometimes called 'Ranum's Law', where it's generally phrased as 'you can't fix people problems with software' (cf). Years ago you probably could have found me nodding along sagely to this and wholeheartedly agreeing with it. However, I've changed; these days, I disagree with the spirit of the saying.

It is certainly true that you cannot outright solve social problems with technology (well, almost all of the time). Technology is not that magical, and the social is more powerful than the technical barring very unusual situations. And social problems are generally wicked problems, which are extremely difficult to tackle. This is an important thing to realize, because social problems matter and computing has a great tendency to either ignore them outright or assume that our technology will magically solve them for us.

However, the way that this saying is often used is for technologists to wash their hands of the social problems entirely, and this is a complete and utter mistake. It is not true that technical measures are either useless or socially neutral, because the technical is part of the world and so it basically always affects the social. In practice, in reality, technical features often strongly influence social outcomes, and it follows that they can make social problems more or less likely. That social problems matter means that we need to explicitly consider them when building technical things.

(The glaring example of this is all the various forms of spam. Spam is a social problem, but it can be drastically enabled or drastically hindered by all sorts of technical measures and so sensible modern designers aggressively try to design spam out of their technical systems.)

If we ignore the social effects of our technical decisions, we are doing it wrong (and bad things usually ensue). If we try to pretend that our technical decisions do not have social ramifications, we are either in denial or fools. It doesn't matter whether we intended the social ramifications or didn't think about them; in either case, we may rightfully be at least partially blamed for the consequences of our decisions. The world does not care why we did something, all it cares about is what consequences our decisions have. And our decisions very definitely have (social) consequences, even for small and simple decisions like refusing to let people change their login names.

Ranum's Law is not an excuse to live in a rarefied world where all is technical and only technical, because such a rarefied world does not exist. To the extent that we pretend it exists, it is a carefully cultivated illusion. We are certainly not fooling other people with the illusion; we may or may not be fooling ourselves.

(I feel I have some claim to know what the original spirit of the saying was because I happened to be around in the right places at the right time to hear early versions of it. At the time it was fairly strongly a 'there is no point in even trying' remark.)

SocialProblemsAndTechnicalDecisions written at 23:50:13; Add Comment

2016-04-20

A brief review of the HP three button USB optical mouse

The short background is that I'm strongly attached to real three button mice (mice where the middle mouse button is not just a scroll wheel), for good reason. This is a slowly increasing problem primarily because my current three button mice are all PS/2 mice and PS/2 ports are probably going to be somewhat hard to find on future motherboards (and PS/2 to USB converters are finicky beasts).

One of the very few three button USB mice you can find is an HP mouse (model DY651A); it's come up in helpful comments here several times (and see also Peter da Silva). Online commentary on it has been mixed, with some people not very happy with it. Last November I noticed that we could get one for under $20 (Canadian, delivery included), so I had work buy me one; I figured that even if it didn't work for me, having another mouse around for test machines wouldn't be a bad thing. At this point I've used it at work for a few months and I've formed some opinions.

The mouse's good side is straightforward. It's a real three button USB optical mouse, it works, and it costs under $20 on Amazon. It's not actually made by HP, of course; it turns out to be a lightly rebranded Logitech (xinput reports it as 'Logitech USB Optical Mouse'), which is good because Logitech made a lot of good three button mice back in the day. There are reports that it's not durable over the long term, but at under $20 a pop, I suggest not caring if it only lasts a few years. Buy spares in advance if you want to, just in case it goes out of production on you.

(And if you're coming from a PS/2 ball mouse, modern optical mouse tracking is plain nicer and smoother.)

On the bad side there are two issues. The minor one is that my copy seems to have become a little bit hair-trigger on the middle mouse button already, in that every so often I'll click once (eg to do a single paste in xterm) and X registers two clicks (so I get things pasted twice in xterm). It's possible that this mouse just needs a lighter touch in general than I'm used to. The larger issue for me is that the shape of the mouse is just not as nice as Logitech's old three button PS/2 mice. It's still a perfectly usable and reasonably pleasant mouse; it just doesn't feel as nice as my old PS/2 mouse (to the extent that I can put my finger on anything specific, I think that the front feels a bit too steep and maybe too short). My overall feeling after using the HP mouse for several months is that it's just okay, instead of rather nice the way I'm used to my PS/2 mouse feeling. I could certainly use the HP mouse; it's just that I'd rather use my PS/2 mouse.

(For reasons beyond the scope of this entry I think it's specifically the shape of the HP mouse, not just that it's different from my PS/2 mouse and I haven't acclimatized to the difference.)

The end result is that I've switched back to my PS/2 mouse at work. Reverting from optical tracking to a mouse ball is a bit of a step backwards but having a mouse that feels fully comfortable under my hand is more than worth it. I currently plan to keep on using my PS/2 mouse for as long as I can still connect it to my machine (and since my work machine is unlikely to be upgraded any time soon, that's probably a good long time).

Overall, if you need a three button USB mouse the HP is cheap and perfectly usable, and you may like its feel more than I do. At $20, I think it's worth a try even if it doesn't work out; if nothing else, you'll wind up with an emergency spare three button mouse (or a mouse for secondary machines).

(And unfortunately it's not like we have a lot of choice here. At least the HP gives us three button people an option.)

HP3ButtonUSBMouseReview written at 23:43:54; Add Comment

2016-03-21

Current PC technology churn that makes me reluctant to think about a new PC

My current home computer will be five years old this fall. Five years is a long enough time in the PC world that any number of people would be looking to replace something that old. I'm not sure I am, though, and one of the reasons for that is that I at least perceive the PC world as being in the middle of a number of technology shifts, ones where I'd rather wait and buy a new machine after all the dust has settled. However, I may be wrong about this; I haven't exactly been paying close attention to the world of PC technology (partly because of my self-reinforcing impression that it's in churn now).

The first point of churn is in SSDs. It seems clear that SSDs are the future of a lot of storage, and it also seems clear that they're busy evolving and shaking out at present. We have ever larger SSDs becoming ever more affordable, and on top of that there are changes coming in how you want to connect SSDs to your system. It seems quite likely that things will look rather different in the SSD world in a few years. I expect growing SSD popularity to affect both motherboards and cases, although that may be well under way by now.

The next point of churn, for me, is high DPI displays, or more exactly what sort of graphics hardware I'm going to need to drive one, what sort of connectors it will need, and so on. I think the 'what connector' answer is some version of DisplayPort and the 'what resolution' answer is probably 3840 by 2160 (aka 4K UHD); I'd like something taller, but everyone seems to have converged on 16:9. On the other hand, this may be last year's answer and next year will bring higher resolutions at affordable prices. Certainly the hardware vendors like improvements, because improvements sell you things. In addition, the longer I wait the more likely it is that open source graphics drivers will support cards that can drive these displays (cf my long standing worry here).

Finally, there's the issue of RAM and ECC. One part of this is that I have a certain amount of hope that ECC will become more widely available in Intel chipsets and CPUs. Another part of it is that Rowhammer may cause changes in the memory landscape over the next few years. There are claims that the latest generation DDR4 RAM mitigates Rowhammer, but then there are also things like this Third I/O paper [pdf] that have reproduced Rowhammer with some DDR4 modules. Worrying very much about Rowhammer may be overthinking things, but there I go.

(I'd certainly like that any new machine wouldn't be susceptible to a known issue, but that may be asking too much.)

(There are other factors behind my somewhat irrational desire to not put together a new PC, but that's for another entry. Especially since I'm probably wrong about at least one of them.)

PCTechnologyChurn2016 written at 23:53:11; Add Comment

2016-03-09

Some thoughts on ways of choosing what TLS ciphers to support

As you might expect, the TLS DROWN attack has me looking at our TLS configurations with an eye towards making them more secure. In light of DROWN I'm especially looking at our non-HTTPS services, the most prominent of these being IMAP. As part of this I've been thinking about various approaches to deciding what TLS ciphers to support or disallow.

(I'm going to use 'cipher' somewhat broadly here; I really mean what often gets called 'cipher suites'.)

There are a number of different principles one could follow here for picking what cipher suites to support:

  • Don't change anything and leave it to the software defaults. This seems like a bad option in today's Internet world unless you make sure to run the latest TLS software written by people who aggressively disable ciphers (at least by default). TLS risks and attacks just move too fast and TLS library defaults are often set conservatively (or simply not changed from historical practice, even when the threats change and some cipher suites become terrible ideas).

    The good news is that modern TLS libraries generally have disabled at least some terrible ideas, like SSLv2 and export grade ciphers. So some progress is being made here.

  • Disable only known to be broken cipher suite options. Today I believe that this is purely RC4 or MD5 based cipher suites (plus terrible things like all export grade ciphers, SSLv2, etc). These are sufficiently dangerous that any client with nothing better is probably better off failing entirely.

    (Certainly we don't want user passwords traveling to us over TLS encryption that's this weak, never mind any other protocol vulnerabilities that they may expose.)

  • Disable obscure, rarely used cipher suites as well as known broken ones. Today I believe that would add at least SEED and CAMELLIA based cipher suites. I've certainly seen various sites that advise doing this.

    (In some environments I believe that DES/3DES cipher suites would also be disabled here.)

  • Set your cipher suites to the recommended best settings from eg Mozilla or other well regarded sources. These people have hopefully already worked out both what the good state of the art is as well as what tradeoffs you need to be broadly compatible with existing clients. You may need to supplement this with additional cipher suites if you have unusual clients.

    (Mozilla is great for HTTPS services but I'm not sure they've looked at, say, the TLS support levels in various IMAP clients. And current versions of web browsers are probably more up to date than clients for other TLS services.)

  • Track your actual client TLS usage and then set your cipher suites to what your users actually need, plus the latest high security ciphers so that new clients or upgraded ones can get the best security possible. The drawback here is making sure you really know all of the cipher suites that your users are using; you may find that people keep turning up with uncommon clients that need new cipher suites added.

Any time you explicitly set the list of TLS cipher suites to use (as opposed to disabling some from the default list), you become responsible for updating this list as new versions of TLS libraries add support for new and hopefully better cipher suites. Sometimes you can be a bit generic in your explicit settings; sometimes sites have you set an explicit list of specific cipher suites. Just disabling things is probably more future proof, although the ice may be thin here.
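As a concrete illustration of the 'disable only the known-broken stuff' approach, here is a minimal sketch using Python's ssl module. The particular exclusions are an example of the pattern, not a vetted recommendation; check a current source such as Mozilla's guidelines before copying anything like this for real.

    import ssl

    # Start from the library's default cipher list and subtract the
    # known-broken bits. The exclusions here (RC4, MD5, export grade and
    # null ciphers) are illustrative, not an authoritative list.
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
    ctx.set_ciphers("DEFAULT:!RC4:!MD5:!EXPORT:!aNULL:!eNULL")
    # You would then wrap your listening socket with ctx.wrap_socket(...)
    # as usual.

The same general pattern, the default list minus explicit exclusions, is what 'just disabling things' looks like in most TLS libraries, which is part of why it tends to age better than a hand-picked explicit list.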

I don't currently have any opinions on what approach is the best one. Really, I don't think there is a single 'best one'; it depends on your circumstances.

TLSChoosingCipherSets written at 01:03:26; Add Comment

2016-02-22

The university's coordination problem

In response to my entry on how we can't use Let's Encrypt in production, Jack Dozier left a comment asking if we'd looked into InCommon's Certificate Service. InCommon is basically a consortium of US educational institutions that have gathered together to, among other things, create a flat cost CA service; apparently, for $15k US or so a year, your university can get all the certificates you want (including for affiliated organizations). This sounds great, but at least here it exposes what I'm going to call the university coordination problem.

Put simply, suppose that the university spent $15k a year to get 'all you want' certificates. More specifically, this would be the central IT services group. Now, how does the central IT group get the news out to everyone here that you can get free certificates through this program?

The University of Toronto is a big place, which means that there are a dizzying number of departments, research groups, professors, and various other people who could possibly be buying TLS certificates for something they're doing. Many of these people do not deal with IT issues like TLS certificates on an ongoing basis, so they're extremely unlikely to remember the existence of a service they might have gotten an email blast about half a year ago.

(And I guarantee that if you sent that email blast to professors, most of them deleted it unread.)

Nor is there a central place where money gets spent that you can set up as a chokepoint. I mean, yes, there is a complicated university-wide purchasing department, but no one sane is going to make people get pre-approval from purchasing for, say, twenty-dollar expenses. The entire university would grind to a halt if you tried that (followed immediately by a massive revolt by basically everyone). TLS certificates are well under the pre-approval cost threshold, so in practice people purchase most of them through university credit cards.

In theory CAs themselves might serve as a roadblock, by requiring approval from the owner of the overall university domain. In practice I believe that many CAs will issue TLS certificates if you can simply prove ownership of the subdomain you want the certificate for. CAs have an obvious motivation to do this if they can get away with it, since it means that more people are likely to buy certificates from them.

(In general, vendors of things are highly motivated to let little departments and groups buy things without the involvement of any central body, because involving central things in a big company invariably slows down and complicates the process. You really want some person in some group to just be able to put your product or service on their corporate credit card, at least initially.)

This is not an issue that's unique to TLS certificates. It's a general issue that applies to basically anything relatively inexpensive that the university might arrange some sort of a site license for. The real challenge is often not buying the site license, it's ensuring that it will get widely used, and the issue there is simply getting the news out and coordinating with all of the potential users. Some products are pervasive enough or expensive enough that people will naturally ask 'do we have some sort of central licensing for this', but a lot of them are not that way. And you can be surprised about even relatively expensive products.

(For that matter, I suspect that this issue comes up for things that are expensive but uncommon. For instance, we have a site license for a relatively expensive commercial anti-spam system, but I suspect that many people running mail systems here don't know about it, even if it would be useful to them.)

PS: This problem is probably not unique to universities but is shared at least in part by any sufficiently large organization. However, I do think that universities have some features that make it worse, like less central control over money.

UniversityCoordinationProblem written at 02:16:03; Add Comment

2016-02-10

The fundamental practical problem with the Certificate Authority model

Let's start with my tweet:

This is my sad face when people sing the praises of SSH certificates and a SSH CA as a replacement for personal SSH keypairs.

There is nothing specifically wrong with the OpenSSH CA model. Instead it simply has the fundamental problem of the basic CA model.

The basic Certificate Authority model is straightforward: you have a CA, it signs things, and you accept that the CA's signature on those things is by itself an authorization. TLS is the most widely known protocol with CAs, but as we see here the CA model is used elsewhere as well. This is because it's an attractive model, since it means you can distribute a single trusted object instead of many of them (such as TLS certificates or SSH personal public keys).

The fundamental weakness of the CA model in practice is that keeping the basic CA model secure requires that you have perfect knowledge of all certificates issued. This is provably false in the case of breaches; in the case of TLS CAs, we have repeatedly seen CAs that do not know all the certificates they mis-issued. Let me repeat that louder:

The fundamental security requirement of the basic CA model is false in practice.

In general, at the limits, you don't know all of the certificates that your CA system has signed nor do you know whether any unauthorized certificates exist. Any belief otherwise is merely mostly or usually true.

Making a secure system that uses the CA model means dealing with this. Since TLS is the best developed and most attacked CA-based protocol, it's no surprise that it has confronted this problem straight on in the form of OCSP. Simplified, OCSP creates an additional affirmative check that the CA actually knows about a particular certificate being used. You can argue about whether or not it's a good idea for the web and it does have some issues, but it undeniably deals with the fundamental problem; a certificate that's unknown to the CA can be made to fail.

Any serious CA based system needs to either deal with this fundamental practical problem or be able to explain why it is not a significant security exposure in the system's particular environment. Far too many of them ignore it instead and opt to just handwave the issue and assume that you have perfect knowledge of all of the certificates your CA system has signed.

(Some people say 'we will keep our CA safe'. No you won't. TLS CAs have at least ten times your budget for this and know that failure is an organization-ending risk, and they still fail.)

(I last wrote about this broad issue back in 2011, but I feel the need to bang the drum some more and spell things out more strongly this time around. And this time around SSL/TLS CAs actually have a relatively real fix in OCSP.)

Sidebar: Why after the fact revocation is no fix

One not uncommon answer is 'we'll capture the identifiers of all certificates that get used and when we detect a bad one, we'll revoke it'. The problem with this is that it is fundamentally reactive; by the time you see the identifier of a new bad certificate, the attacker has already been able to use it at least once. After all, until you see the certificate, identify it as bad, and revoke it, the system trusts it.

CAFundamentalProblem written at 02:12:58; Add Comment

2016-02-07

Your SSH keys are a (potential) information leak

One of the things I've decided I want to do to improve my SSH security is to stop offering my keys to basically everything. Right now, I have a general keypair that I use on most machines; as a result of using it so generally, I have it set up as my default identity and I offer it to everything I connect to. There's no particular reason for this; it's just the most convenient way to configure OpenSSH.
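For what it's worth, here is a rough sketch of the sort of ~/.ssh/config change I have in mind (the host name and key file name are made up for illustration). The idea is to stop offering keys by default and only offer a specific key to the specific host it's meant for:

    # Hypothetical host: offer only this one key here.
    Host work.example.org
        PubkeyAuthentication yes
        IdentityFile ~/.ssh/id_work
        IdentitiesOnly yes

    # Everything else: don't do public key authentication at all, so no
    # keys get offered to random (or mistyped) hosts. ssh_config uses the
    # first value obtained for each option, so the specific Host block
    # above wins for that host.
    Host *
        PubkeyAuthentication no

Whether you want to be this strict (turning off public key authentication entirely for unknown hosts) or just stop offering your main keypair everywhere is a matter of taste and of how many hosts you deal with.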

Some people will ask what the harm is in offering my public key to everything; after all, it is a public key. Some services even publish the public key you've registered with them (Github is one example). You can certainly cite CVE-2016-0777 here, but there's a broader issue. Because of how the SSH protocol works, giving your SSH public key to someone is a potential information leak that they can use to conduct reconnaissance against your hosts.

As we've seen, when an SSH client connects to a server it sends the target username and then offers a series of public keys. If the current public key can be used to authenticate the username, the server will send back a challenge (to prove that you control the key); otherwise, it will send back a 'try the next one' message. So once you have some candidate usernames and some harvested public keys, you can probe other servers to see if the username and public key are valid. If they are valid, the server will send you a challenge (which you will have to fail, since you don't have the private key); if they are not, you will get a 'try the next one' message. When you get a challenge back from the server, you've learned both a valid username on the server and a potential key to target. In some situations, both of these are useful information.

(If the server rejects all your keys, it could be either that none of them are authorized keys for the account (at least from your IP) or that the username doesn't even exist.)

How do people get your SSH public keys if you offer them widely? Well, by getting you to connect to an SSH server that has been altered to collect and log all of them. This server could be set up in the hopes that you'll accidentally connect to it through a name typo, or it could simply be set up to do something attractive ('ssh to my demo server to see ...') and then advertised.

(People have even set up demonstration servers specifically to show that keys leak. I believe this is usually done by looking up your Github username based on your public key.)

(Is this a big risk to me? No, not particularly. But I like to make little security improvements every so often, partly just to gain experience with them. And I won't deny that CVE-2016-0777 has made me jumpy about this area.)

SSHKeysAreInfoLeak written at 03:25:21; Add Comment

2016-01-10

Updating software to IPv6 is often harder than you might think

A while back D.J. Bernstein wrote what is now a famous rant about IPv6. Due to various things, this DJB article is on my mind and today I want to talk about one part of it that DJB casually handwaves, which is updating all software to support IPv6.

The obvious problem with software is that most of the traditional system APIs have specified IP addresses in fixed-size objects and with explicit, fixed types. Very little software has been written using generic APIs and variable-sized addresses, where you could just drop the bigger IPv6 addresses in without trouble; instead a lot of software knows that it is talking IPv4 with addresses that take up 4 bytes. Such software cannot just be handed IPv6 addresses, because they overflow the space and various things would malfunction. Instead systems have been required to define an entirely new and larger 'address family' for IPv6, and then software has had to be updated to support it alongside IPv4.

The first complication emerges here: not only do you need a new address family, you need new APIs that can accept and return the new address family. Sometimes you need the new APIs because old APIs were defined only as returning 4-byte IPv4 addresses; sometimes you need new APIs because tons of people wrote tons of code that just assumed old APIs only ever return 4-byte IPv4 addresses.

(You could break all that code, but that would be a recipe for a ton of bugs for years. Let's not go there.)
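Python's standard library makes a handy stand-in for this old-API/new-API split, even though much of the real pain is at the C level underneath. A quick sketch (the hostname is just a placeholder and the code needs working DNS to run):

    import socket

    # The old, IPv4-only style: gethostbyname() can only ever return an
    # IPv4 address, and a lot of code was written assuming exactly that.
    print(socket.gethostbyname("www.example.com"))

    # The newer, family-agnostic style: getaddrinfo() returns a list of
    # (family, type, proto, canonname, sockaddr) tuples and can hand back
    # AF_INET and AF_INET6 results side by side.
    for family, stype, proto, cname, sockaddr in socket.getaddrinfo(
            "www.example.com", 443, proto=socket.IPPROTO_TCP):
        print(family, sockaddr)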

But the larger problem is that IP addresses don't confine themselves just to the networking layer of programs where they get handled by generic system APIs that you can make cope with the new IPv6 addresses. Instead, in many important programs IP(v4) addresses ripple through all sorts of other code. This code may represent them in all sorts of ways and it may do all sorts of things to manipulate them, things that 'know' various facts that are only true for IPv4 addresses. For instance, I have several sets of code in several different languages that know how to make DNS blocklist queries given an 'IP address'. Depending on the language, it may split a string at every '.' or it may take four bytes in a specific order from a 32-bit integer, or even a byte addr[4] array.

(Some of this code may be at some distance from actual network programs. Consider code that attempts to process web server log files and do things like aggregate traffic by network regions, or even just tell when the logs have IP addresses instead of hostnames.)
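To make the DNS blocklist example concrete, here is a sketch in Python of how different the two cases are (the zone names are placeholders, not real blocklists). The IPv4 version just reverses the four octets; the IPv6 version has to expand the address to all 32 hex nibbles and reverse those, which is a different enough shape that IPv4-only code can't simply be patched in place:

    import ipaddress

    def dnsbl_name(addr, v4zone="dnsbl.example.com", v6zone="dnsbl6.example.com"):
        # Build the DNS name to look up for a blocklist query.
        ip = ipaddress.ip_address(addr)
        if ip.version == 4:
            # 192.0.2.1 -> 1.2.0.192.dnsbl.example.com
            octets = str(ip).split(".")
            return ".".join(reversed(octets)) + "." + v4zone
        else:
            # 2001:db8::1 -> 32 reversed hex nibbles plus the IPv6 zone
            nibbles = ip.exploded.replace(":", "")
            return ".".join(reversed(nibbles)) + "." + v6zone

Even this little function quietly assumes that the blocklist you care about has an IPv6 zone at all, which is its own deployment issue.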

All of this code needs to be revised for IPv6. Some of the revisions are simple. Others take more work and need to know things about, say, the typical and canonical ways of representing IPv6 addresses. Other code may need to be completely rethought from the ground up; for example, I have code that represents IP address ranges as pairs of '(start, end)' integers and supports various operations on them, including 'give me all of the IP addresses in this range set'. This works fine for IPv4 addresses, but the entire data structure may need to be totally redone for IPv6 and certain parts of the API might not make sense any more.
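As an illustration of why that particular API stops making sense, consider the raw sizes involved (using the documentation prefixes purely as examples):

    import ipaddress

    # An IPv4 /24 is small enough that 'give me every address in the
    # range' is a perfectly reasonable operation...
    v4 = ipaddress.ip_network("192.0.2.0/24")
    print(v4.num_addresses)        # 256

    # ...but a single standard IPv6 /64 has 2**64 addresses, so any API
    # that enumerates a range has to be rethought, not just widened.
    v6 = ipaddress.ip_network("2001:db8::/64")
    print(v6.num_addresses)        # 18446744073709551616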

(And then there are the cases where IP addresses are stored in files and retrieved later. They are probably not being stored in large or arbitrarily sized fields, so the existing storage format may not be able to hold IPv6 addresses. So we're looking at database restructuring here, and also restructuring of field validators and so on.)

Then you have all of the stuff that knows how to talk about IP addresses, for example in configuration files for programs. Much of this is likely specific to IPv4 addresses, so both code and specifications will need to be revised for IPv6 addresses. In turn this may ripple through to cause difficulties or require changes to the configuration file language; you may need to make IPv6 addresses accepted with some sort of quoting if your language treats ':' in words specially, for example. All of this involves far more than mechanical code changes and code updates; we're well at the level of system architecture and design, with messy tradeoffs between backwards compatibility and well supported IPv6 addresses.

(Exim famously has a certain amount of heartburn with lists of IPv6 addresses in its configuration files because long ago ':' was chosen as the default list separator character.)
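As a small, hypothetical illustration of the kind of parsing problem involved: the obvious IPv4-era way of splitting a 'host:port' setting falls apart once ':' can also appear inside the address, and you wind up needing a new convention (such as the bracketed '[addr]:port' form) plus a parser that knows about it:

    def split_hostport(value):
        # Naive IPv4-era parsing would be value.split(":"), which breaks
        # on anything like "2001:db8::1". Accept the bracketed form
        # "[2001:db8::1]:25" and otherwise split on the last ':'.
        if value.startswith("["):
            host, _, port = value[1:].partition("]:")
        else:
            host, _, port = value.rpartition(":")
        return host, int(port)

    # split_hostport("192.0.2.1:25")     -> ("192.0.2.1", 25)
    # split_hostport("[2001:db8::1]:25") -> ("2001:db8::1", 25)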

Of course, IP addresses are just the start of the problem; it spirals off in several directions from them. One direction is IPv6 netblocks and address ranges; there's kind of a new syntax there, and people have to rethink configuration files that currently designate ranges via syntax like '127.100.0.'. Another one is that there are various special sorts of IPv6 addresses that your systems may need to be aware of, like link-local addresses. A third is the broad issue of per-source ratelimits; a simple limit that's per IPv6 address may not work very well in an IPv6 environment where people have relatively large subnets pushed down to their home connections or whatever.
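One hypothetical way of dealing with the ratelimit issue is to key per-source limits on the covering /64 for IPv6 instead of on the individual address; whether that's the right granularity is exactly the sort of design decision I mean:

    import ipaddress

    def ratelimit_key(addr):
        # Hypothetical: limit IPv4 by individual address but IPv6 by the
        # covering /64, since one user may have an entire /64 (or more).
        ip = ipaddress.ip_address(addr)
        if ip.version == 4:
            return str(ip)
        return str(ipaddress.ip_network(str(ip) + "/64", strict=False))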

All of this can be done, but it all adds up to a significant amount of work, both in raw programming and in design and architecture to make the right decisions about how systems should look and work in an IPv6 enabled environment. It should be no surprise that progress has been slow overall (and occasionally buggy) and people continue to design, build, and hack together systems that are implicitly or explicitly IPv4 only.

(If you only have to deal with IPv4 today, some of the high level issues may be effectively invisible to you.)

(I've written other stuff about the problems I see with DJB's IPv6 migration ideas in earlier entries.)

IPv6SoftwareUpdatePain written at 03:30:04; Add Comment

2016-01-02

I've realized I need to change how I read Twitter

People talk a lot about modern social web sites being designed to be sticky, to encourage you to keep them open and to interact with them all of the time. For a long time, this was not my experience with them; I felt no reason to stick around Facebook, for example. Then I got on Twitter with a fortuitous choice of client.

I am the kind of person who has historically had a 'gotta read them all' mindset about, well, basically everything on the Internet that I've wound up following (and Usenet, too). If you are this kind of person, Twitter is a terrible trap (especially with a client that lets you see read versus unread tweets). Once you follow enough people, there will be new tweets to read more or less every refresh interval. I may not read them right away, but there they are, tugging at my attention, and they will take time to read eventually. And of course there's the ever present temptation to take a (nominally short) break by reading some of the pending tweets.

For me, the inevitable result of following Twitter in my current way is a fractured attention and a slow but constant drain of time away from other things. It's especially pernicious because it doesn't feel like much time, since individual bursts of reading may be short. But the cumulative effect adds up and adds up.

(This should not be surprising, and really it isn't. We've long known the effects of breaking concentration and how little interruptions can have outsized effects; people write about various aspects of this all the time, and I've read them and nodded along with it. Yet here I was, quietly walking into doing exactly this to myself. One can draw various lessons here.)

At one level, how I need to treat Twitter is straightforward. Rather than seeing it as something that I read all of, I need to treat it as a stream that I dip my toe into every so often (and only every so often). At another level, there's a vast difference between knowing a theoretical answer and being able to change my habits to carry it out in practice. It's going to take me time to work out how to do this in a way that works for me, and willpower to not keep backsliding into old 'read it all' habits.

(And I'll miss reading all of my Twitter feed; there's really nice stuff there that I enjoyed following. That's what makes it hard, that I know I'm going to be missing things that I want to read.)

It's been quite interesting to be sucked into Twitter this way, bit by bit, and then realize that I was being pulled in and working out what the effects on me were. I have some views on why Twitter worked on me where other 'social web' sites haven't, but that's going to be another entry.

TwitterBreakingAddiction written at 01:07:47; Add Comment


