Wandering Thoughts

2020-10-04

Solid state disks in mirrors and other RAID setups, and wear lifetimes

Writing down my plans to move to all solid state disks on my home machine, where I don't have great backups, has made me start thinking about various potential issues that this shift might create. One of them is specific to how I'm going to be using my drives (and how I'm already using SSDs), which is in mirrored pairs and more generally in a RAID environment.

The theory of using mirrored drives is that it creates redundancy and gives you insurance against single disk drive failures. When you mirror hard drives, one of the things you are tacitly counting on is that most hard drive failures seem to be random mechanical or physical media failures (ie, the drive suffers a motor failure or too many bad spots start cropping up on the platters). Because these are random failures, the odds are very good that they won't happen on both drives at the same time.

Solid state drives are definitely subject to random failures from things like (probable) manufacturing defects. We've had some SSDs die very early in their lifetimes, and there are a reasonable number of reports that SSDs are subject to infant mortality (people might find A Study of SSD Reliability in Large Scale Enterprise Storage Deployments [PDF] to be interesting on this topic, among others). However, solid state drives also have a definite maximum lifetime based on total writes. Drives in a mirrored setup (or more generally in any RAID configuration) are likely to see almost exactly the same amount of writes over time, which means that they will reach their wear lifetimes at almost the same time.

If your solid state drives reach their wear lifetimes at all in your RAID array (and you put them into the array at the same time, which is quite common), it seems very likely that they will reach that lifetime at about the same time. If you have good monitoring and reporting on wear (and if the drives report wear honestly), this means you'll start wanting to replace them at about the same time. If they don't report wear honestly and just die someday, the odds of nearly simultaneous failures are perhaps uncomfortably high.
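As an illustration of the sort of wear monitoring I mean, here is a minimal sketch that asks smartctl about each drive using its JSON output (which needs smartmontools 7.0 or later, and usually root). The wear attribute names are vendor-specific examples rather than a complete list, and the device names are assumptions:

    #!/usr/bin/env python3
    # A sketch of checking per-drive wear via smartctl's JSON output
    # (smartmontools 7.0+, typically run as root). The SMART attribute
    # names below are vendor-specific examples, not a complete list.
    import json, subprocess

    WEAR_ATTRS = ('Media_Wearout_Indicator', 'Wear_Leveling_Count',
                  'Percent_Lifetime_Remain')

    def wear_report(dev):
        out = subprocess.run(['smartctl', '-j', '-A', dev],
                             capture_output=True, text=True).stdout
        data = json.loads(out)
        # NVMe drives report a 'percentage used' estimate directly.
        nvme = data.get('nvme_smart_health_information_log')
        if nvme is not None:
            return "%s: %d%% of rated wear used" % (dev, nvme['percentage_used'])
        # SATA SSDs report wear through vendor-specific SMART attributes.
        for attr in data.get('ata_smart_attributes', {}).get('table', []):
            if attr['name'] in WEAR_ATTRS:
                return "%s: %s normalized value %d" % (dev, attr['name'], attr['value'])
        return "%s: no known wear attribute found" % dev

    for dev in ('/dev/sda', '/dev/sdb'):   # your mirrored pair
        print(wear_report(dev))

(This is a sketch, not a monitoring system; in practice you'd run something like it periodically and only report when the numbers cross a threshold.)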

There are two reasons this may not be a real worry in practice. The first is that it seems unusual (and hard) in practice to reach even the official nominal wear lifetimes of SSDs, much less the real ones (which historically seem to have been much higher than the datasheet numbers when people have tested drives to destruction). To put rough numbers on it, a 1 TB drive rated for 600 TB of total writes would take over sixteen years to wear out at a steady 100 GB of writes a day. The second is that A Study of SSD Reliability in Large Scale Enterprise Storage Deployments specifically says that you should worry more about infant mortality claiming multiple drives at once, since its data says that (enterprise) solid state storage has a significantly extended infant mortality period.

(You can also deal with wear concerns by throwing one or some of your RAID drives into a test setup to get written to a lot before you spin up the real RAID array, so that they should reach any wear lifetime a TB or three ahead of your other drives. This might or might not affect infant mortality in any useful way.)
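If you wanted to actually do this deliberate pre-aging, a crude sketch might look like the following; it writes incompressible data straight to the raw device, so it destroys anything on the drive, and the device name and write volume here are purely hypothetical.

    #!/usr/bin/env python3
    # A crude sketch of pre-aging a drive by writing incompressible
    # data directly to it. THIS DESTROYS ALL DATA ON THE DEVICE; the
    # device name and the write volume are hypothetical.
    import os

    DEV = '/dev/sdX'                 # the sacrificial drive
    TARGET = 3 * 1024**4             # roughly 3 TiB of writes
    CHUNK = os.urandom(1024 * 1024)  # 1 MiB; random so it can't be compressed

    written = 0
    with open(DEV, 'wb', buffering=0) as dev:
        while written < TARGET:
            try:
                written += dev.write(CHUNK)
            except OSError:          # ran off the end of the device
                dev.seek(0)          # wrap around and keep writing
        print("wrote %.2f TiB to %s" % (written / 1024**4, DEV))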

SSDsRAIDWearWorry written at 01:12:28

2020-10-03

A thought about the lifetimes of hard disks and solid state disks

At work, a group related to ours just had an SSD start reaching its official total write lifetime, and this sparked a thought (especially when combined with my goal of moving away from hard drives on my own machine).

On the one hand, it's increasingly conventional wisdom that modern solid state drives (both SATA SSDs and NVMe drives) have a longer expected lifetime on average than hard drives. On the other hand, you can get lucky and have individual hard drives that just keep on going and going and going, or at least this was historically the case. Since solid state drives have a pretty strong lifetime limit on the total amount of writes that you can do to them, you'll never get lucky this way with an actively used SSD in the way that you could with an HD. All of your SSDs will definitely die sooner or later; the only question is when. You will never have a 'lucky 20 year old well used but still running' SSD.

The spoiler in this thought is that modern hard drives may not be able to reach this sort of lucky long durability for various reasons. An obvious one would be if they contain their own flash memory that's written to often enough to wear out after a certain write volume. Another would be if they're now built with materials and technologies that have the same definite and irreversible decay over time that flash memory does (where you just can't get lucky).

Hardware with a definite lifetime has potential impacts on efforts to preserve systems and keep them operating for historical purposes over the long term. People have done impressive things to get very old systems from the 1960s, 1970s, and so on running so that we can actually see these influential and historical systems for ourselves. That probably won't be as possible in the future, even though most of the historical systems that people are bringing back haven't been running continuously since then; if the hardware decays with time regardless of use, sitting unpowered in storage won't save it.

Some grumpy sysadmins will also consider it a feature that if you put a system in a closet and leave it there for five or ten years, it will probably die instead of hanging around as an ancient zombie full of outdated things. The downside of this is for 'industrial' computers that are embedded into larger systems (including in things like hospital machinery, which are infamous for still running their embedded computers with long-obsolete operating systems). Perhaps the hardware vendors will just vastly over-provision the SSDs and then hope for the best.

HDAndSSDLifetimeThought written at 00:13:09

2020-09-28

My likely path away from spinning hard drives on my home desktop

One of my goals for my home desktop is to move entirely to solid state storage. Well, it's a goal for both my home and work machine, and I originally expected to get there first at home, but then work had spare money and suddenly my work machine has been all solid state for some time (which is great except for the bit where I'm not at work to enjoy it).

Moving to all solid state at work was relatively straightforward because all of my old storage on my work machine was relatively small; I had a mirrored pair of 250 GB SSDs, a mirrored pair of 1 TB HDs, and a third 500 GB HD for less important things, and none of them were all that full. This was easily all replaced with a pair of reasonably sized NVMe drives and a pair of 2 TB SSDs, which weren't that expensive even in late 2019. Unfortunately my home machine has more storage; I currently have a mirrored pair of 750 GB SSDs and a mirrored pair of '3 TB' HDs (one of them is actually a 4 TB HD, but since it's mirrored the extra TB is wasted). The HDs are used for an LVM volume that has only about 1.4 TiB allocated, so in theory I could get away with a pair of 2 TB SSDs as the replacement for these HDs. However, that would leave me relatively short of extra space for things like digital photography (those RAW files add up fast).

The obvious replacement and supplement for my current 750 GB SSDs is a pair of decent 1 TB NVMe drives, which seem to be not too expensive these days. Unfortunately there's no equally good replacement for my pair of 3 TB HDs. While 4 TB SSDs are available, they cost noticeably more per GB than 2 TB SSDs do (as I write this, one large Canadian online retailer lists WD Blue 2 TB SSDs for $304 and the 4 TB version for $709, which works out to roughly $0.15 per GB versus $0.18). One option would be to shrug and pay the premium to future proof things; another would be to buy a pair of 2 TB SSDs and rely on a combination of the extra space on the NVMe drives, reusing my current 750 GB SSDs, and rationalizing space usage when I migrate from my old LVM setup to ZFS on the new SSDs.

A complication is that now is not necessarily the right time to buy new NVMe drives, especially relatively expensive ones. The NVMe world is just starting to move from PCIe 3.0 to PCIe 4.0, which offers various improvements once everything is working. My current home motherboard has no PCIe 4.0 support, of course, but based on past experience I'll be keeping any NVMe drives that I buy now for at least half a decade, which means that they'll likely wind up in a PCIe 4.0 capable system within their lifetime.

(On the one hand, PCIe 4.0 will probably not make a particularly visible performance difference on my home machine on typical or even somewhat atypical tasks, like compiling Firefox from source. On the other hand, I don't like leaving potential performance on the table.)

So despite all of what I've written, I'm probably going to do my usual thing and sit on my hands for a while. Perhaps various end of the year sale prices will get me to finally move forward.

(This is one of the entries that I write partly to try to motivate myself.)

PS: I have a mixed pair of 3 TB and 4 TB HDs for the usual reason, which is that I used to have a pair of 3 TB HDs and then one of them died and I needed to replace it. My LVM array has migrated up from smaller sizes of HDs over time this way.

(Waiting for a warranty replacement is never an option, because I want my redundancy back much sooner than a replacement would get to me.)

HomePCAllSolidStatePath written at 20:46:56

2020-09-06

Daniel J. Bernstein's IM2000 email proposal is not a good idea

A long time ago, Daniel J. Bernstein wrote a proposal for a new generation of Internet email he called IM2000, although it never went anywhere. Ever since then, a significant number of people have idealized it as the great white 'if only' hope of email (especially as the solution to spam), in much the same way that people idealized Sun's NeWS as the great 'if only' alternative to X11. Unfortunately, IM2000 is not actually a good idea.

The core of IM2000 is summarized by Bernstein as follows:

IM2000 is a project to design a new Internet mail infrastructure around the following concept: Mail storage is the sender's responsibility.

The first problem with this is that it doesn't remove the fundamental problem of email, which is (depending on how you phrase it) that email is an anonymous push protocol or that it lacks revocable authorization to send you things. In IM2000, random strangers on the Internet are still allowed to push to you, they just push less data than they currently do with (E)SMTP mail.

The idea that IM2000 will deal with spam rests on the idea that forcing senders to store mail is difficult for spammers. Even a decade ago this was a questionable assumption, but today it is clearly false. A great deal of serving capacity is yours for the asking (and someone's credit card) in AWS, GCP, Azure, OVH, and any number of other VPS and serverless computing places. In addition many spammers will have a relatively easy time with 'storing' their email, because their spam is already generated from templates and so in IM2000 could be generated on the fly whenever you asked for it from them. We now have a great deal of experience with web servers that generate dynamic content on demand and it's clear that they can run very efficiently and scale very well, provided that they're designed competently.
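To illustrate how little actual 'storage' this takes, here is a minimal sketch of a sender-side message store that expands a template on demand. IM2000 never specified a retrieval protocol in this much detail, so the HTTP transport, the port, and all of the names here are made up for illustration:

    #!/usr/bin/env python3
    # A minimal sketch of a spammer's IM2000-style "mail store" that
    # generates messages on demand instead of storing them. The HTTP
    # scheme, port, and names here are all hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import datetime, timezone

    TEMPLATE = """From: "Totally Real Bank" <security@example.com>
    Subject: Urgent: verify your account

    Dear {recipient}, your account needs attention immediately. [...]
    """

    class MailStore(BaseHTTPRequestHandler):
        def do_GET(self):
            # The path names the recipient, e.g. /msg/fred@example.org.
            # "Storage" is just template expansion at fetch time.
            recipient = self.path.rpartition('/')[2]
            body = TEMPLATE.format(recipient=recipient).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'message/rfc822')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)
            # Side effect: the sender learns exactly when and from where
            # each recipient read the message.
            print("%s read at %s from %s" % (
                recipient, datetime.now(timezone.utc), self.client_address[0]))

    HTTPServer(('', 8025), MailStore).serve_forever()

Note the side effect in the last lines of do_GET: merely serving the message tells the sender exactly when, and from what IP address, it was read. That leads directly into the privacy problem.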

(I wrote about this a long time ago here, and things have gotten much easier for spammers since then.)

At the same time, IM2000 is catastrophic for your email privacy. People complain vociferously about 'tracking pixels' in HTML email that betray when you open and read the email from someone; well, IM2000 is one giant tracking pixel that reliably reports when and where you read that email message. IM2000 would also be a terrible email reading experience, because it's like a version of IMAP where message retrieval has random delays and sometimes fails entirely.

(As far as spam filtering your incoming IM2000 messages goes, IM2000 gives you far less up front information than you currently get with SMTP email. I wrote up this and other issues a long time ago in an entry about the technical problems of such schemes. Some of those problems are no longer really an issue more than a decade later, but some continue to be.)

At a broader 'technical choices have social impacts' level, IM2000 would create a very different experience than today's email systems if implemented faithfully, one where 'your' email was actually not yours but was mostly other people's because other people are storing it. Those other people can mostly retract individual messages by deleting them from their servers (you would still have the basic headers that are pushed to you), and they can wipe out large sections of your email by deleting entire accounts (and the sent messages associated with them), or even by going out of business or having a data loss incident. Imagine a world where an ISP getting out of the mail business means that all email that its customers have sent from their ISP email accounts over the years just goes away, from everyone's mailbox.

(If 'ISP' sounds abstract here, substitute 'Yahoo'. Or 'GMail'.)

In addition, in some potential realizations of IM2000, email would become mutable in practice (even if it wasn't supposed to be in theory), because once again the sender is storing the message and is in a position to alter that stored copy. Expect that capability to be used sooner or later, just as people silently revise things posted on the web (including official statements, perhaps especially including them).

Some of these social effects can be partially avoided by storing your own local copies of IM2000 messages when you read them, but there are two issues. The first is pragmatic; the more you store your own copies and the earlier you make them, the more IM2000 is SMTP in a bad disguise. The second is social; in the IM2000 world the server holds the authoritative copy of the message, not you, so if you say the message says one thing (based on your local copy) and the server operator says it says something else (or doesn't exist), the server operator likely wins unless you have very strong evidence.

In general, I think that IM2000 or anything like it would create an 'email' experience that was far more like the web, complete with the experience of link rot and cool messages changing, than today's email (where for better or worse you keep your own full record of what you received, read and reread it at your leisure, and know that it's as immutable as you want it to be). And it would still have the problem that people can push stuff in front of you, unlike the web where you usually at least have to go looking for things.

IM2000NotGoodIdea written at 00:39:01

2020-08-31

Why we won't like it if signing email is the solution to various email problems

Yesterday I wrote about my thesis that all forms of signing email are generally solving the wrong problem and said in passing that if signing email was actually a solution, we wouldn't like it in the long run. Today, let's talk about that.

As I sort of discussed yesterday, the issue with signing email as a solution is that on the Internet, identities normally can't be used to exclude people because people can always get a new one (eg, a new domain and new DKIM keys for it and so on). If signed email is going to solve problems, the requirement is that such new identities stop being useful. In other words, email providers would stop accepting email from new identities (or at least do something akin to that). If new identities don't get your email accepted, existing identities are suddenly important and can be used to revoke access.

(This revocation might be general or specific, where a user could say 'I don't want to see this place's email any more' and then the system uses the identity information to make that reliable.)

Let's be blunt: big email providers would love this. Google would be quite happy in a world where almost everyone used one of a few sources of email and Google could make deals or strongarm most or all of them. Such a world would significantly strengthen the current large incumbents and drive more business to their paid offerings. Even the current world where it's rather easier in practice to get your email delivered reliably if you're a Google Mail or Microsoft Office365 customer does that; a world where only a few identities had their email reliably accepted would make that far worse.

For the rest of us, that would be a pretty disastrous change. I won't say that the cure would be worse than the disease (people's opinions here vary), but it would likely create two relatively separate email worlds, with the remaining decentralized email network not really connected to the centralized one of 'only known identities accepted here' email. If running your own mail server infrastructure meant not talking to GMail, a lot of people and organizations would drop out of doing it and the remaining ones would likely have ideological reasons for continuing to do so.

(A far out version of this would be for it to lead to multiple federated email networks, as clusters of email systems that interact with each other but don't accept much email from the outside world effectively close their borders much as the big providers did. If this sounds strange, well, there are multiple IRC networks and even the Fediverse is splintering in practice as not everyone talks to everyone else. And there are plenty of messaging systems that don't interconnect with each other at all.)

PS: There are lesser versions of this, where large email providers don't outright stop showing 'outside' email to people but they do downgrade and segregate it. And of course that happens to some degree today through opaque anti-spam and anti-junk systems; if Hotmail dislikes your email but not enough to reject it outright, probably a lot of people there aren't going to see it.

SignedEmailSolutionImpact written at 22:18:53

2020-08-30

All forms of signing email are generally solving the wrong problem (a thesis)

Modern email is full of forms of signed email. Personally signed email is the old fashioned approach (and wrong), but modern email on the Internet is laced with things like DKIM, which have the sending system sign it to identify at least who sent it. Unfortunately, the more I think about it, the more I feel that signed email is generally solving the wrong problem (and if it's solving the right one, we won't like that solution in the long run).

A while ago I wrote about why email often isn't as good as modern protocols, which is because it's what I described as an anonymous push protocol. An anonymous push protocol necessarily enables spam since it allows anyone to send you things. Describing email as 'anonymous push' makes it sound like the anonymity is the problem, which would make various forms of signing the solution (including DKIM). But this isn't really what you care about with email and requiring email to carry some strong identification doesn't solve the problem, as we've found out with all of the spam email that has perfectly good DKIM signatures for some random new domain.

(This is a version of the two sides of identity. On the Internet people can trivially have multiple identities, so while an identity is useful to only let selected people in, it's not useful to keep someone out.)

I think that what you really care about with modern communication protocols is revocable authorization. With a pull protocol, you have this directly; you tacitly revoke authorization by stopping pulling from the place you no longer like. With a push protocol, you can still require authorization that you grant, which lets you revoke that granted authorization if you wish. The closest email comes to this is having lots of customized email addresses and carefully using a different one for each service (which Apple has recently automated for iOS people).
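As a concrete sketch of the mechanics (assuming a mail setup that delivers every 'user-*' address to you; the tagging scheme here is made up, not any standard), you could generate and verify such per-service addresses like this:

    #!/usr/bin/env python3
    # A sketch of revocable per-service addresses, assuming your mail
    # system delivers all 'user-*@example.org' addresses to you. The
    # tagging scheme and the revocation list are hypothetical.
    import hmac, hashlib

    SECRET = b'some long local secret'
    REVOKED = {'shop1'}      # services whose access you've revoked

    def tag_for(service):
        # A short HMAC stops people from inventing valid-looking tags.
        return hmac.new(SECRET, service.encode(), hashlib.sha256).hexdigest()[:8]

    def address_for(service):
        return 'user-%s-%s@example.org' % (service, tag_for(service))

    def accept(addr):
        # Accept mail only to addresses we handed out and haven't revoked.
        parts = addr.split('@')[0].split('-')
        if len(parts) != 3:
            return False
        _, service, tag = parts
        return hmac.compare_digest(tag, tag_for(service)) and service not in REVOKED

    print(address_for('shop2'))                       # give this to the service
    print(accept(address_for('shop2')))               # True
    print(accept(address_for('shop1')))               # False: revoked
    print(accept('user-shop3-deadbeef@example.org'))  # False: forged tag

Revoking a service is then just a matter of adding it to the revocation list, without touching the addresses you've handed out to everyone else.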

Obviously, requiring authorization to push things to you has a fundamental conflict with any system that's designed to let arbitrary strangers contact you without prearrangement (which is the fundamental problem of spam). Modern protocols seem to deal with this in two ways (even with revocable authorization); they have some form of gatekeeping (in the form of accounts or access), and then they evolve to provide settings that let you stop or minimize the ability of arbitrary strangers to contact you (for example, Twitter's settings around who can send you Direct Messages).

(The modern user experience of things like Twitter has also evolved to somewhat minimize the impact of strangers trying to contact you; for example, the Twitter website separates new DMs from strangers from DMs from people you've already interacted with. It's possible that email clients could learn some lessons from this, for example by splitting your inbox into 'people and places you've interacted with before' and 'new contacts from strange people'. This would make DKIM signatures and other email source identification useful, apart from the bit where senders today feel free to keep changing where they're sending from.)

PS: In this view, actions like blocking or muting people on Twitter (or the social network of your choice) are a form of revoking their tacit authorization to push things to you.

SignedEmailWrongProblem written at 22:55:33

2020-08-25

My home desktop is still locking up when it gets too cold (and what next)

In early 2019 I wrote about the mystery of my home desktop that was locking up when it got too cold. At that point the machine was about a year old (I built it in March or so of 2018) and the winter of 2018-2019 was its first winter and thus my first chance to see this. I regret to report that I haven't really done anything since then, and the machine will still lock up when it gets too cold. For last winter (the 2019-2020 winter), my workaround was to raise the heat here; in combination with a generally mild winter that was enough to have only a couple of lockups during especially cold overnight times.

Current world and local events suggest strongly that daytime interior temperatures will not be an issue this coming winter, because I will almost certainly be working from home almost all of the time and so will want it warm enough to be comfortable (which is well above the temperature the machine locks up at). However that still leaves me with the direct issue of overnight temperatures and the indirect issue that I have a machine with some sort of flaw that's now my primary machine for doing work.

The path of least resistance is to do nothing and assume that nothing really bad will happen. My machine will probably lock up a few times overnight when I'm not using it over the winter, but that's no big deal. The path of more effort and some risk is to reseat at least the memory, loosen and re-tighten the motherboard screws, and perhaps experiment with canned air to selectively cool spots on the motherboard to see if I can identify something that triggers the problem. This risks turning a slightly flaky component into a very flaky or even dead one, which would leave me with a dead machine, and it has no guarantee of fixing or even identifying the problem.

(But re-seating things should be very low risk so I should really try it, however much I don't like working with hardware.)

The sure but more expensive path would be to replace at least the motherboard and (probably) the power supply. Buying a new motherboard or PSU would be necessary in practice even if I identify a fault in my current one and get it replaced under warranty, because I'm not going to be without a home desktop for so much as a day if I can help it. This feels wasteful (the current hardware is only about two and a half years old) and expensive, but if I put a reasonable value on my time and annoyance it's probably the second cheapest option after doing nothing. It also means I would have to figure out at least a new motherboard, which is where I started thinking about how I want a type of PC and motherboard that's generally skipped. However, it would give me an emergency spare motherboard and PSU that would be comparable to my current machine, which is something I might decide I care about in the current conditions.

(Having a reliable motherboard with two M.2 slots and a backup emergency spare would also make it less scary to upgrade to M.2 NVMe drives. Right now I've been holding back on that partly because my emergency machine is my old home PC, which has no M.2 slots and which I've lost full trust in.)

(This entry is one of the ones that I write in part to convince myself to do something sensible. Whether I actually will is an open question; knowing myself, the most likely option is to do nothing until the weather starts getting cold enough that the issue's more imminent.)

ColdLockupMachineMysteryII written at 23:51:37

2020-08-24

I want a type of desktop PC (and motherboard) that's generally skipped

By now, the desktop x86 PC market has segmented itself into a number of categories. There are machines, CPUs, and motherboards that are basic machines with limited features and made to be quite inexpensive, 'business' machines that don't need very much but are more than the very basics, machines for gaming enthusiasts, and HEDT ('High End Desktop') workstations that are aimed at people building high powered machines. Typical examples of what are in these categories are in Anandtech's Best CPUs for Workstations and Best CPUs for Gaming; in Intel motherboard chipsets there are the H series, the B series, and the Z series (sometimes among others). Unfortunately for me, my interests in machines fall into an intermediate category that doesn't generally exist, which I will call a sysadmin workstation.

Every system administrator probably has a somewhat different view of what they want in their desktop. My image of a sysadmin workstation is exemplified by my current home machine; it has a fast (Intel) CPU, it takes a fair amount of RAM that can run at faster than completely stock speeds, it has at least two M.2 slots (both of which run at x4) and four SATA ports, and it can drive at least one 4K display through onboard graphics. Unlike a gaming machine, I want to use integrated graphics (they're quieter, less clutter, cheaper, and generally better supported on Linux) instead of a GPU. Unlike a HEDT workstation, I don't want a fire-breathing CPU with its increased cost and cooling requirements (and I also don't want one or more GPUs for GPU computation). And I want more storage (especially M.2) than basic home or business desktops usually provide.

(I might change my views on GPUs if Intel starts making discrete GPUs that are well supported under Linux, drive two 4K displays at 60 Hz or better, and don't require lots of cooling.)

It's possible to put together a sysadmin workstation, of course; I did it for my current home machine and my current (AMD based) work machine, although the latter has to use a GPU. But it generally involves buying more than you need and picking through specifications to narrow in on the bits you care about, and the motherboard support for integrated graphics is often somewhat limited. People who buy motherboards with lots of features and high specification generally use GPUs, so there are a fair number of otherwise suitable motherboards that just don't support onboard graphics. I'm also lucky in that Intel still provides versions of their higher end desktop CPUs with onboard graphics. AMD has historically restricted onboard graphics to lower end models; if you wanted a reasonably powerful Ryzen, you were stuck getting a GPU.

(As far as Intel versus Ryzen goes, I still don't trust AMD. My Intel home machine still has its problem of hanging when the temperature drops too low, but that's a narrow issue that's probably a motherboard fault. The coming of winter with another go-around of this issue is one reason I'm thinking about motherboards and desktops and so on again.)

MissingPCType written at 23:29:01

2020-08-21

When I stopped believing in Google's fundamental good nature

Once upon a time I might have believed in Google's fundamental goodness and well intentioned nature (probably with qualifications). Google themselves eventually taught me better, perhaps later than it took for other people to realize that they were an amoral corporation. For me, the moment of realization, the point where I knew for sure that Google's "don't be evil" slogan was inoperative, was the great Google+ 'nymwars', where Google (for Google+) declared that everyone on Google+ must use their real name and then attempted to enforce that (it went wrong pretty fast).

There were a large number of problems with Google+'s 'real name' policies. It didn't match how actual users referred to each other and were known online, including for people who actually worked at Google. Forcing people to reveal their real name does real harm and has real risks (something appreciated even back then in 2011, but which is more pointed today). And in practice, a 'real name' policy is actually a 'it looks like a real name to underpaid support people or some automated system' policy, where 'John Smith' is far more likely to be accepted than a non-Western name or an unusual one, even if one is not your real name and the other is.

Google knew all of this. People, including internal people, pointed this out to them at great length. A decent number of technical people who worked at Google protested. There were demonstrated problems with the actual enforcement and actions involved. And Google, in both their senior leadership and their ongoing policies, simply didn't care. All of the harms and the wrongs did not matter to them. They were going to do evil because they could, and because they thought it served their corporate goals for Google+.

(We all know how that one went; Google+ died, for all that it had some good ideas.)

Watching all of this happen, watching all of the protesting and good arguments and everything go exactly nowhere, is when I knew that my image of Google was wrong (and gone). Now I extend no more trust to Google than I think supported by their corporate and commercial interests. Google employees may care about "don't be evil" and doing the right thing and so on, but Google as a whole does not, and the employees do what Google tells them to.

(This elaborates something I said in an aside long ago, in an entry about why my smartphone is an iPhone.)

GoogleWhenEvilRealized written at 22:07:49

2020-08-12

People often have multiple social identities even in the physical realm

Somewhat recently, I read The Future of Online Identity is Decentralized (via), and it said one thing in passing that made me twitch. I'll quote rather than paraphrase:

Authenticity and anonymity aren't mutually exclusive and that is the beauty of the internet. In the physical realm, you are (mostly) limited to a single social identity. In the digital space, there are no such restrictions. While you can't embody multiple persons in the offline world, you can have several identities online. [...]

This is, in practice, not the case. Many people have what are in effect multiple social identities in the real world, and you can even argue that the lack of support for this in common platforms on the Internet has created some real problems (especially for how people interact with them).

The way you naturally create multiple social identities in the real world is simple; you don't tell everyone you interact with about everything you do, especially in detail. You are in practice one person at work, another person at home, a third person at your bike club, a fourth person on the photowalks you do (or did) with the group of regulars, and so on and so forth. These disjoint groups of people may have some idea that you have other identities (you may mention to your co-workers that you're a keen bicyclist and are in a bike club), but they probably don't know the details (and often they don't want to). In practice these are different social identities and you're a different person to all of these groups; each one may well know some things about you that would surprise others who know you.

(My impression is that this separation is especially strong between work and everything else. People like to draw a line here and not share back and forth.)

By now, we've all heard stories of these separate social identities breaking down (or being exposed) on the Internet in social media, in the familiar story of 'I had no idea they were <X>' (or 'believed in <X>'), where <X> is often something uncomfortable to you. Before Facebook, Twitter, and the like, this sort of thing required different groups of people to talk to each other or have an unexpected connection (say, one of your co-workers takes up bicycling and joins your bicycle club). Now, social media often slams all of that together; if you see anything of someone, you may see everything. Social media generally tacitly encourages this by making it easiest to share everything with everyone, instead of providing good support for multiple social identities on a single platform (leading to the perennial 'I followed you to read about <X>, not <Y>' complaints on Twitter and elsewhere).

(You can also argue that the Internet makes it easier for people who want to cross connect your (online) identities to do so, because it made broad searches much easier. On the Internet, you have to be deliberately anonymous or simple web searches may well turn up multiple social identities.)

Relatively strong Internet anonymity is probably easier than strong physical anonymity, at least today (where you can take someone's name you learned from one connection to them and start trying to find other signs of them on the Internet). Physical social identities necessarily leak what you look like and often your name, and you can readily skip both on most of the Internet.

(Some portions of the Internet are very intent on knowing your real name, but there's still a broad norm that people can be anonymous and pseudonymous. And if you have a relatively common name, even your name is relatively pseudonymous by itself, because there will be many people on the Internet with that name.)

ManyRealLifeIdentities written at 22:24:28
