Wandering Thoughts archives

2009-01-31

Why social MUDding works

Recently (for my version of 'recently'), R. Francis Smith wrote The Right Social Networking Model Is From 1989, in praise of social MUDs (and one in particular, DinoMUSH) as a model for social networking in general. As it happens, I have a certain amount of experience with hanging out on DinoMUSH myself, and so I think that there are some specific reasons that it works so well as a social network, reasons that may make its virtues and approaches harder to apply to social networking in general.

(By 'a certain amount of experience' I mean that I have been hanging out on DinoMUSH and its predecessors for, well, not as long as R. Francis Smith, but almost as long as that.)

Thus, some of the reasons why DinoMUSH works:

  • the DinoMUSH social group is in constant contact, as most of the remaining population logs in quite often. I don't think that it would work as well if most people logged in only infrequently; there would be too much catching up going on.

  • because hanging out on DinoMUSH is seen as a series of ongoing conversations, people are tolerant of the repetition that's necessary to (re-)establish the context for people as they pop in. Much as in a real conversation, it's acceptable to have to repeat to the latest arrival what's going on and the latest important news.

    (Russ mentioned the MRD, which helps to reduce the annoyance of this; to a reasonable extent you can bring yourself up to speed without having to bother anyone else.)

  • there are persistent out-of-conversation information sources, some of them even on DinoMUSH itself. Complex updates often get put there, and in the conversation people just point you at them in order to pass on the information.

    (Twitter's deliberate limitations encourage this sort of usage, but almost everything else seems to want to be all-inclusive.)

The final, slightly more cynical observation is that most of the users of any long-standing social environment are going to be exactly the users that the features of the environment click for. Thus, I am somewhat wary of concluding that DinoMUSH's features are generally desirable; while I certainly like them, there may be more than one reason that social MUDding is a very small subset of online social networking.

WhyDinoWorks written at 23:20:23

2009-01-26

How LiveJournal is sticky

Given the recent commotion over some of LiveJournal's business decisions and people's unhappiness with LiveJournal as a result, I've been thinking about the various ways that LiveJournal makes itself sticky (and thus hard to leave, and so on). I think that there are three general levels of LiveJournal stickiness, in ascending order:

  • the simplicity and ease of use for individual users.

  • the network effects: the more people that you want to read use LiveJournal, the more attractive it is to be there and read them all in one spot.

    (At least in the beginning this combined strongly with the ease of use issue, and I think that it continues to some degree even today, despite things like Google Reader.)

  • what I will call social stickiness, by which I mean all of the LiveJournal features that enable communities to spring up; these are things like comments with attached user identities and private (locked) journal entries.

    (Actual LiveJournal communities are useful, but I think not as important as the core social-enabling features that let you associate with identifiable people.)

I think that a lot of the discussion about how people can (or can't) just migrate away from LiveJournal misses the incredible stickiness of this last level. Given that people are social, enabling that sociability is a very serious attraction.

These days, blogs can be user friendly (and there have always been places that let you easily create a blog of your own) and syndication feed readers and aggregators can reduce the network effect, but I don't think that there's anything that can substitute for the third level. It's hard to see how there could be in the near future, because there is a horde of hard problems to be solved (starting with automatic cross-site identities that work through your syndication feed reader).

(Communities form even without these enabling features; I would be remiss if I didn't mention the anime blogging community as one example. But I think that the social features that LiveJournal has make it much easier to have communities form and stay, and to make it so that new people can more easily get pulled into them.)

LiveJournalStickyness written at 01:36:25

2009-01-10

You cannot ask users to manage their own security

I've been dancing around this issue recently, but it's time to come out and say it explicitly: if you want things to actually be secure, you cannot ask users to manage their own security.

In practice, users are not interested in security (well, not much) and are not going to manage it, and the rare ones that are interested and do care almost certainly don't know enough to make sensible choices. What you get if you make users manage their own security is more or less what you'd get if you made homeowners do their own electrical work: quite a few houses would burn down, and many more would have horrifying electrical wiring that would provide fodder for home renovators for years.

So, to continue the analogy, if you want the houses not to burn down, either the houses have to be pre-wired correctly or there has to be a skilled electrician around to handle the wiring work. Since most users are on their own, the systems we build for them shouldn't need any management to be secure; they need to start out secure and stay that way by default, without users having to make the right decisions.

(This doesn't mean that you shouldn't offer users options; down that road lies Firefox 3's approach to SSL, or worse. And hopefully it goes without saying that your systems need to work well to start with, since security that gets in the way is in practice no security at all.)

WhoManagesSecurity written at 03:36:00

2009-01-06

The problem of forcing users to make choices (in security)

Sometimes, faced with an apparently intractable problem in security, people say 'I know, we'll leave it up to the users to decide what to do'. What they really mean is 'we don't know what the right thing to do is, so we're going to make the users figure it out'. As the saying goes, now they have two problems.

Let's look at some big reasons that this is a terrible idea, both in general and specifically for security systems.

First, the users don't actually care about the issue, or indeed about anything besides getting done what they want to do, so they are just going to make whatever choice makes things work and gets the question out of their way. Since this choice is at best random (and at worst whatever is easiest), it is unlikely to be the right choice. Whatever that is.

(It doesn't help that security is a pain, and that people usually resent being forced to make choices and answer questions that they have no idea about. I suspect that this gets worse if you try to scare them about how these choices are important for security, because then you make them nervous when they don't know the right answer or even how to choose.)

Second, even if a user does care, they are unlikely to be able to make the right choice, because only experts are competent to make complex choices. Unless the user is already an expert, they don't have the knowledge to figure out the choice; even with the best intentions in the world, the odds of them picking the right answer are probably not much better than chance. And the less knowledge a user has, the worse this gets: even a relatively knowledgeable user can be stymied by a complex situation, while a user who almost entirely lacks the background can get lost even in what the designers think of as 'simple' situations.

(Designers, being very close to the system, often underestimate all aspects of this: how complex the question is, how much knowledge users will have, and how much they will care.)

This is bad enough for ordinary software. For security systems, it can be catastrophic; if you are asking questions where the answers affect the system's security, pushing those choices off onto the users basically guarantees that your security system is in fact completely pointless, since there's not likely to be much security left.

It is my personal belief that the questions critical for security are exactly the questions most likely to be pushed off to the users, because they are the ones for which picking a default answer (and letting users change it if they need to) does not work. For example, the choice between two equally strong cipher algorithms is mostly arbitrary, so one can pick a default, while the choice of who to trust is not arbitrary and often has no default (at least not in the world of mathematical security, which generally demands strong correctness if you are going to decide things for people).
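
To make that distinction concrete, here is a minimal Python sketch of a hypothetical secure-channel API; every name in it is invented for illustration. The arbitrary choice (the cipher) gets a default that the caller can override, while the trust decision gets no default at all and the code refuses to guess:

  # A sketch of a hypothetical secure-channel API; all names here are
  # invented. Arbitrary choices between equally safe options can be
  # defaulted; trust decisions deliberately cannot.

  SUPPORTED_CIPHERS = ("aes-256-gcm", "chacha20-poly1305")

  class SecureChannel:
      def __init__(self, peer, trust_anchor, cipher="aes-256-gcm"):
          # Either supported cipher is equally strong, so defaulting
          # this (while allowing an override) costs nothing.
          if cipher not in SUPPORTED_CIPHERS:
              raise ValueError("unsupported cipher: %s" % cipher)
          # Who to trust is not arbitrary and has no safe default, so
          # the calling program has to decide; the end user is never
          # asked.
          if trust_anchor is None:
              raise ValueError("no default trust anchor; supply one")
          self.peer = peer
          self.trust_anchor = trust_anchor
          self.cipher = cipher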

(Disclaimer: these are not novel ideas; I just want to write them down somewhere that I can find for later use. See eg Informed choices and real security, which I just found by Googling for likely discussion terms. Note that the first reason implies that a lot of people are simply not interested in becoming informed about the questions you're asking, even if you make it possible.)

SecurityChoiceProblem written at 01:09:18

2009-01-01

Flaws in the 'web of trust' approach to trust issues

One alternative to monolithic certificate authorities is a 'web of trust' approach to trust issues. Here's my view of the flaws of this model as an alternative to CAs, as it gets used in practice.

First, you haven't actually removed the need to pick trust roots; every user has to start their web somewhere, and usually they are going to start from some well-known root or roots. What you have really done is made trust roots less subject to detailed scrutiny and criticism, and probably made it less obvious to people who they should start out trusting.

Next, a web of trust is only as strong as its weakest link, and there are a lot of links, which means a lot of places for the overall web to be weak and thus to let attackers in. The usual answer is revocation, but threats of revocation are subject to being gamed by attackers, for example by the attacker doing their best to have a bunch of valid nodes dependent on their certification in addition to the harmful nodes. Revocation also assumes that you can reliably identify the true origin of the rogue nodes, which I think is optimistic; there is a lot that an attacker can do to cloak how far up the web of trust the rot truly goes.
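
As an illustration of the weakest-link problem, here is a toy Python sketch with entirely invented names and data; real schemes are more elaborate, but the shape of the problem is the same. If a key is accepted whenever some chain of endorsements reaches it, one careless endorser anywhere in the web admits an attacker's keys:

  # A toy model of weakest-link trust; all names and data are invented.
  # A key is accepted if any chain of endorsements reaches it, so a
  # single careless endorser lets an attacker in.

  ENDORSES = {
      "me": ["alice", "bob"],
      "alice": ["carol"],
      "bob": ["mallory"],  # bob carelessly signed mallory's key
      "mallory": ["attacker1", "attacker2"],
  }

  def accepted(start, key, seen=None):
      """Is there any endorsement chain from start to key?"""
      seen = seen if seen is not None else set()
      if start == key:
          return True
      seen.add(start)
      return any(accepted(nxt, key, seen)
                 for nxt in ENDORSES.get(start, []) if nxt not in seen)

  print(accepted("me", "attacker1"))  # True, via bob -> mallory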

There are more sophisticated schemes that try to work around the second issue (requiring more endorsements for more trust, see trust metrics), but I believe that it's been demonstrated that sufficiently determined attackers can eventually game them too.

Sidebar: I don't think trust is even transitive

I also think that there is a strong argument that trust is simply not transitive in the way that a 'web of trust' requires it to be. Concretely, there are at least three sorts of trust involved in a web of trust:

  • I trust that you are Joe.
  • I trust that you are making sure of the identities of people that you are trusting.
  • I trust you to make sure that other people are verifying the identities of people that they are trusting.

These trusts (and their further recursions) are entirely different things and cannot be bundled together. They are also increasingly hard to verify (to the point where I think that most schemes only really verify the first sort of trust and wave their hands about everything else).
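
As a sketch of why these can't be collapsed together, consider what you would have to model in code (the names here are invented). A single trust score or flag, which is effectively what simple web of trust schemes use, throws away exactly the distinctions that matter:

  # A sketch of the three separate sorts of trust; all names are
  # invented. The point is that one 'trust' number cannot represent
  # them, because each is a belief about a different question.

  from dataclasses import dataclass

  @dataclass
  class TrustIn:
      identity: float      # I trust that you are Joe.
      verification: float  # I trust you to check who you endorse.
      delegation: float    # I trust you to police other endorsers.

  # It's perfectly coherent to be certain of who someone is while
  # having no faith in them as an endorser:
  joe = TrustIn(identity=0.99, verification=0.3, delegation=0.0)

  # A transitive web of trust implicitly needs delegation trust at
  # every hop, and that is the trust that is hardest to verify.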

WebOfTrustFlaws written at 23:58:25

