2016-08-01
My new key binding hack for xcape and dmenu
When I started using dmenu, I set up my window manager to start it when I hit F5. F5 was a conveniently reachable key on the keyboard that I was using at the time and I was already binding F2 and F3 (for 'lower window' and 'raise window'). This worked pretty well, but there was a slight inconvenience in that various programs use F5 for their 'refresh this' operation and my window manager key binding thus stole F5 from them. Web browsers have standardized on this, but more relevant for me is that kdiff3 also uses it to refresh the diff it displays.
(For various reasons kdiff3 is my current visual diff program. There's also meld, but for some reason I never took to it the way I've wound up with kdiff3. There are other alternatives that I know even less about.)
Later on I got a new keyboard that made F5 less conveniently accessible. Luckily, shortly afterwards evaryont turned me on to xcape and I promptly put it to work to make tapping the CapsLock key into an alias for F5, so that I could call up dmenu using a really convenient key. Fast forward to today and the obvious has happened; I don't use the actual F5 key to call up dmenu any more, I do it entirely by tapping CapsLock, because that's much nicer.
Xcape doesn't directly invoke programs; instead it turns tapping a key (here, the CapsLock) into the press of some other key (here, F5). It's up to you to make the other key do something interesting. Recently, the obvious struck me. If all I'm using the F5 key binding for is as a hook for xcape, I don't actually have to bind F5 in fvwm; I can bind any arbitrary key and then have xcape generate that key. In particular, I can bind a key that I'm not using and thereby free up F5 so regular programs can use it.
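(For anyone who hasn't met xcape, the mechanics are pleasantly small. A minimal sketch of the basic setup, using the common recipe of first disabling CapsLock's own function and then matching on its keycode, which is normally 66 on PC keyboards:

setxkbmap -option caps:none
xcape -e '#66=F5'

After this, tapping CapsLock generates an F5 press and it's up to your window manager to make F5 do something interesting.)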
If I was clever and energetic, I would trawl through the X key database to find some key that X knows about but that doesn't actually exist on my keyboard. I'm not that energetic right now, so I went for a simpler hack; instead of binding plain F5, I've now bound Shift Control Alt F5. This theoretically could collide with what some program wants to do with F5, but it's extremely unlikely and it frees up plain F5.
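(Put together, the new arrangement is a sketch like the following, assuming fvwm's usual Key syntax and xcape's '|' syntax for generating a combination of keys, and with the stock dmenu_run standing in for however you actually invoke dmenu:

# fvwm: bind Shift+Control+Meta F5 ('SCM'), in any context, to dmenu
Key F5 A SCM Exec exec dmenu_run

# xcape: a tapped CapsLock (keycode 66) generates the whole combination
xcape -e '#66=Shift_L|Control_L|Alt_L|F5'

Nothing else has to change; the window manager neither knows nor cares that the key combination is arriving synthetically.)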
(It also has the potentially useful side effect that I can still call up
dmenu if the xcape process is dead. If I bound dmenu to a completely
inaccessible key, that wouldn't be possible. Of course the real answer
is that if xcape dies I'm just going to restart it, and if this isn't
possible for some reason I'm going to dynamically rebind dmenu to an
accessible key by using fvwm's console.)
2016-07-31
I've become mostly indifferent to what language something is written in
In a comment on this entry, Opk wrote (in part):
Will be interesting to hear what you make of git-series. I saw the announcement and somewhat lost interest when I saw that it needed rust to build.
I absolutely get this reaction to git-series being written in Rust, and to some extent I share it. I have my language biases and certainly a program being written in some languages will make me turn my nose up at it even if it sounds attractive, and in the past I was generally strongly dubious about things written in new or strange languages. However, these days I've mostly given up (and given in) on this, in large part because I've become lazy.
What I really care about these days is how much of a hassle a program is going to be to deal with. It's nice if a program is written in a language that I like, or at least one that I'm willing to look through to figure things out, but it's much more important that the program not be a pain in the rear to install and to operate. And these days, many language environments have become quite good at not being pains in the rear.
(The best case is when everything is already packaged for my OSes.
Next best is when at least the basic language stuff is packaged and
everything else has nice command line tools and can be used as a
non-privileged user, so 'go get ...' and the like just work and
don't demand to spray things all over system directories. The worst
case is manual installation of things and things that absolutely
demand to be installed in system directories; those get shown the
exit right away.)
In short, I will (in theory) accept programs written in quite a lot of languages if all I have to do to deal with them is the equivalent of '<cmd> install whatever', perhaps preceded by a simple install of the base language. I don't entirely enjoy having a $HOME/.<some-dir> populated by a pile of Python or Ruby or Perl or Rust or whatever artifacts that were dragged on to my system in order to make this magically work, but these days it's 'out of sight, out of mind'.
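(As a concrete illustration of the happy path: with Rust and Cargo already on hand, and assuming git-series is published to crates.io the way most Rust programs are, the entire job is:

cargo install git-series

and everything it drags in lands under $HOME/.cargo, out of sight.)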
There are language environments that remain hassles and I'm unlikely to touch; the JVM is my poster child here. Languages that I've never had to deal with before add at least the disincentive of uncertainty; if I try out their package system and so on, will it work or will it blow up in my face and waste my time? As a result, although I'm theoretically willing to consider something written in node.js or Haskell or the like, in practice I don't think I've checked out any such programs. Someday something will sound sufficiently attractive to overcome my biases, but not today.
(As mentioned, I generally don't care about the language at all if the program is available as a prebuilt package for my OS, because at that point there's almost no hassle; I just do 'apt-get install' or 'dnf install' and it's done. The one stumbling block is if I do 'dnf install' and suddenly three pages of dependent packages show up. That can make me decide I don't want to check out your program that badly.)
In the specific case of git-series and Rust, Rust by itself is not quite in this 'no hassle' zone just yet, at least on Fedora; if I had nothing else that wanted Rust, I probably would have ruled out actively looking into git-series as a result. But I'd already eaten the cost of building Rust and getting it working in order to keep being able to build Firefox, and thus at the start of the whole experience adding Cargo so I'd be able to use git-series seemed simple enough.
(Also, I can see the writing on the wall. People are going to keep on writing more and more interesting things in Rust, so sooner or later I was going to give in. It was just a question of when. I could have waited until Rust and Cargo made it into Fedora, but in practice I'm impatient and I sometimes enjoy fiddling around with this sort of thing.)
This casual indifference to the programming languages that things are written in sort of offends my remaining purist instincts, but I'm a pragmatist these days. Laziness has trumped pickiness.
(Not) changing the stop timeout for systemd login session scopes
I wrote earlier about an irritating systemd reboot behavior, where systemd may twiddle its thumbs for
a minute and a half before killing some still-running user processes
and actually rebooting your machine. In that entry I suggested that
changing DefaultTimeoutStopSec in /etc/systemd/user.conf (or in
a file in user.conf.d) would fix this, and then reversed myself
in an update. That still leaves us with the question: how do you
change this for user scopes?
The answer is that you basically can't. As far as I can tell, there
is no way in systemd to change TimeoutStopSec for just user scopes
(or just some user scopes). You can set DefaultTimeoutStopSec in
/etc/systemd/system.conf (or in a file in system.conf.d), but
then it affects everything instead of just user scopes.
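(For illustration, the global version is a small drop-in file; the file name here is arbitrary and the 20 second value is just an example:

# /etc/systemd/system.conf.d/stop-timeout.conf
[Manager]
DefaultTimeoutStopSec=20s

But again, this lowers the default for every unit on the system, not just user scopes.)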
(It's possible that you're okay with this; you may be of the view that when you say 'shut the machine down', you want the machine down within a relatively short period even if some daemons are being balky. And aggressively killing such balky processes is certainly traditional Unix shutdown behavior; systemd is being really quite laid back here. This is an appealing argument, but I haven't had the courage to put it into practice on my machines.)
As you can see in systemctl status's little CGroup layout chart,
your user processes are in the following inheritance tree:
user.slice -> user-NNN.slice -> session-N.scope
You can create systemd override files for user.slice or even
user-NNN.slice, but they can't contain either DefaultTimeoutStopSec
or TimeoutStopSec directives. The former must go in a [Manager]
section and the latter must go in a [Service] one, and neither
section is accepted in slice units. There is a 'user@NNN.service'
that is set up as part of logging in, and since it's a .service
unit you can set a TimeoutStopSec in systemd override files for
it (either for all users or for a specific UID), but it isn't used
for very much and what you set for it doesn't affect your session
scope, which is where we need to get the timeout value set.
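(You can verify what any given scope actually wound up with by asking systemd directly, eg 'systemctl show -p TimeoutStopUSec session-1.scope'. On a stock system this reports the 90 second default.)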
If you want to use a really blunt hammer you can set KillUserProcesses
in /etc/systemd/logind.conf (or in a logind.conf.d file).
However, this has one definite and one probable drawback. The
definite drawback is that this kneecaps screen, tmux, and any
other way of keeping processes running when you're not logged in,
unless you always remember to use the workarounds (and you realize
that you need them in each particular circumstance).
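(Mechanically this is another small drop-in, again with an arbitrary file name:

# /etc/systemd/logind.conf.d/kill-user-processes.conf
[Login]
KillUserProcesses=yes

The workarounds mentioned above are things like 'loginctl enable-linger <user>' and running screen or tmux under 'systemd-run --user --scope'.)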
(I don't know what it does to things like web server CGIs run as
you via suexec or processes started under your UID as part of
delivering mail. Probably all of those processes count as part of
the relevant service.)
The probable drawback is that I suspect systemd does this process killing the same way it does for reboots, which means the default 90 second timeout applies. So if you log in and start processes that linger, log out, and then immediately try to reboot the machine, you're still going to be waiting out almost the entire timeout interval.
2016-07-30
The perils of having an ancient $HOME (a yak shaving story)
I build Firefox from source for obscure reasons, and for equally obscure reasons I do this from the Firefox development repo instead of, say, the source for the current release. Mozilla has recently been moving towards making Rust mandatory for building Firefox (see especially the comments). If you build the Mercurial tip from source this is currently optional, but I can see the writing on the wall so I decided to get a copy of Rust and turn this option on. If nothing else, I could file bugs for any problems I ran into.
Getting a copy of Rust went reasonably well (although Rust compiles
itself painfully slowly) and I could now build the Rust component
without issues. Well, mostly without issues. During part of building
Firefox, rustc (the Rust compiler) would print out:
[...]
librul.a
note: link against the following native artifacts when linking against this static library
note: the order and any duplication can be significant on some platforms, and so may need to be preserved
note: library: dl
note: library: pthread
note: library: gcc_s
note: library: c
note: library: m
note: library: rt
note: library: util
libxul_s.a.desc
libxul.so
Exporting extension to source/test/addons/author-email.xpi.
[...]
Everything from the first 'note:' onwards was in bold, including
later messages (such as 'Exporting ...') that were produced by other
portions of the build process. Clearly rustc was somehow forgetting
to tell the terminal to turn off boldface, in much the same way that
people writing HTML sometimes forget the '</b>' in their markup and
turn the rest of the page bold.
This only happened in xterm; in gnome-terminal and other things, rustc turned off bold without problems. This didn't surprise me, because almost no one still uses and thus tests with xterm these days. Clearly there was some obscure detail about escape sequence handling that xterm was doing differently from gnome-terminal and other modern terminal emulators, and this was tripping up rustc and causing it to fail to close off boldface.
(After capturing output with script, I determined that rustc was
both turning on bold and trying to turn it off again with the same
sequence, 'ESC [ 1 m'. Oh, I said to myself, this has clearly
turned into a toggle in modern implementations, but xterm has
stuck grimly to the old ways and is making you do it properly. This
is the unwonted, hubristic arrogance of the old Unix hand speaking.)
Since life is short and my patience is limited, I dealt with this
simply; I wrote a cover script for rustc that manually turned off
bold (and in fact all special terminal attributes) afterwards. It
was roughly:
#!/bin/sh
rustc.real "$@"
st=$?; tput sgr0; exit $st
This worked fine. I could still build Firefox with the Rust bits enabled, and my xterm no longer became a sea of boldface afterwards.
Today I ran across an announcement of git-series, an interesting looking git tool to track the evolution of a series of patches over time as they get rebased and so on. Since this is roughly how I use git to carry my own patches on top of upstream repos, I decided I was interested enough to take a look. Git-series is written in Rust, which is fine because I already had that set up, but it also requires Rust's package manager Cargo. Cargo doesn't come with Rust's source distribution; you have to get it and build it separately. Cargo is of course built in Rust. So I cloned the Cargo repo and started building.
It didn't go well.
In fact it blew up spectacularly and mysteriously right at the
start. Alex Crichton of the
Rust project took a look at my strace output and reported helpfully
that something seemed to be adding a stray ESC byte to rustc's
output stream when the build process ran it and tried to parse the
output.
Oh. Well. That would be the tput from my cover script, wouldn't
it. I was running tput even when standard output wasn't a terminal
and this was mucking things up for anything that tried to consume
rustc output. That fixed my issue with Cargo, but now I wanted
to get this whole 'doesn't turn off boldface right' issue in Rust
fixed, so I started digging to at least characterize things so I
could file a bug report.
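The obvious repair to the cover script itself is to only run tput when standard output is actually a terminal; a sketch of such a corrected version:

#!/bin/sh
rustc.real "$@"
st=$?
# only reset terminal attributes when stdout is a terminal
test -t 1 && tput sgr0
exit $st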
In the process of poking and prodding at this, a little bit of my
strace output started nagging at me; there had been references
in it to /u/cks/.terminfo. I started to wonder if I had something
outdated in my personal environment that was confusing Rust's
terminfo support library into generating the wrong output. Since
I couldn't remember anything important I had set up there, I renamed
it to .terminfo-hold and re-tested.
Magically, everything now worked fine. Rustc was happy with life.
What did I have in .terminfo? Well, here:
; ls -l /u/cks/.terminfo-hold/x/xterm
-rw-r--r--. 2 cks cks 1353 Feb 10 1998 .terminfo-hold/x/xterm
Here in 2016, I have no idea why I needed a personal xterm terminfo
file in 1998, or what's in it. But it's from 1998, so I'm pretty
confident it's out of date. If it was doing something that I'm going
to miss, I should probably recreate it from a modern xterm terminfo
entry.
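(If I ever do, the mechanics are straightforward; something like:

infocmp -x xterm >/tmp/xterm.ti
[edit /tmp/xterm.ti as desired]
tic -x -o $HOME/.terminfo /tmp/xterm.ti

dumps the current xterm entry to source form and then compiles the edited result back into a personal terminfo directory.)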
(This also explains why gnome-terminal had no problems; it normally
has a $TERM value of xterm-256color, and I didn't have a custom
terminfo file for that.)
My $HOME is really old by now, and as a result it has all sorts
of ancient things lingering on in its dotfile depths. Some of them
are from stuff I configured a long time ago and have forgotten
since, and some of them are just the random traces from ancient
programs that I haven't used for years. Clearly this has its
hazards.
(Despite this experience I have no more desire to start over from scratch than I did before. My environment works and by now I'm very accustomed to it.)
2016-07-29
My surprise problem with considering a new PC: actually building it
Earlier this week I had a real scare with my home machine, where I woke up to find it shut off and staying that way (amidst a distinct odor of burnt electronics). Fortunately this turned out not to be a dead power supply or motherboard but instead a near miss where a power connector had shorted out dramatically; once I got that dealt with, the machine powered on and hasn't had problems since. Still, it got me thinking.
Unlike many people, I don't have a collection of laptops, secondary machines, and older hardware that I can press into service in an emergency; my current home machine is pretty much it. And it's coming up on five years old. On the one hand, I already decided I didn't really want to replace it just now (and also); while I had some upgrade thoughts, they're much more modest. On the other hand, all of a sudden I would like to have a real, viable alternative if my home machine suffers another hardware failure, and buying a new current machine no longer feels quite so crazy in light of this.
So I've been thinking a bit about getting a new PC, which has opened up the surprising issue of where I'd get it from. I've never been someone to buy stock pre-built machines (whether from big vendors like Dell or just the white box builds from small stores), but at the same time I've never built a machine myself; all of my previous machines have been assembled from a parts list by local PC stores. Local PC stores which seem to have now all evaporated, rather to my surprise.
(There used to be a whole collection of little PC stores around the university that sold parts and put machines together. Over the past few years they seem to have all quietly closed up shop, or at least relocated to somewhere else. I suspect that one reason is that probably a lot fewer students are buying desktops these days.)
One logical solution is to take a deep breath and just assemble the machine myself. I know (or at least read) plenty of people who do this and don't particularly have problems; in fact I'm probably unusual in being into computers yet never having done this rite of passage myself. I've also heard that modern PCs are really fairly easy for the hobbyist to assemble (especially if you stay away from things like liquid cooling). However, I don't really like dealing with hardware all that much, plus you don't get to restore hardware from backups if you screw it up. Spending a few hours nervously screwing things together is not really my idea of fun.
(And having someone else sell me a preassembled machine means that they're on the hook for dealing with any DOA parts, however unlikely that may be with modern hardware.)
There are probably still places around Toronto that do build-to-order PCs like this. But 'around Toronto' is a big area, plus another advantage of dealing with stores around the university was that we could tap local expertise to find out who did a good job of it and who you kind of wanted to avoid.
If I was in the US, another option would be to order a prebuilt machine from a company that specializes in Linux hardware and has something with suitable specifications. I'm not particularly attached to having fine control over the parts list; I just want a good quality machine that will run Linux well and has enough drive bays. I'm not sure there's anyone doing this in Canada, though, and I certainly don't want to ship across the border. (Just shipping within Canada is enough of a hassle.)
Although part of me wants to take the plunge into assembling my own machine from parts, what I'm probably going to do to start with is ask around the university to see if people have places they like for this sort of thing. My impression is that custom built PCs are much less popular than they used to be (my co-workers just got Dell desktops in our most recent sysadmin hardware refresh, for example), but I'm sure that people still buy some. If I'm lucky, there's still a good local store that does this and I can move on to thinking about what collection of hardware I'd want.
(Of course thinking about a new machine makes me irritated about ECC, which I'll probably have to live without.)
2016-07-28
A bit about what we use DTrace for (and when)
Earlier this year, Bryan Cantrill kind of issued a call for people to talk about their DTrace success stories. I do want to write up a blog entry about all of the times we've used DTrace to solve our problems, but it's clearly not happening soon, so for now I want to stop stalling and at least say a bit about the kind of situations we use DTrace for.
Unlike some people, we don't make routine use of DTrace; it's not a part of ongoing system monitoring, for example. Partly this is because our fileservers spend most of their time not having problems. When stuff sits there quietly working, we don't need to pay much attention to it. There's probably useful information that DTrace could gather for us on an ongoing basis, but we just don't use it that way at the moment.
What we do use DTrace for is deep system investigations during problems and crises. Some of this is having scripts available that can do detailed monitoring of areas of interest to us; when an NFS fileserver problem appears, we can start by firing up our existing information collection scripts. A lot of the time we have merely ordinary problems and the scripts will tell us what they are (a slow disk, a user pushing a huge volume of IO, etc). Some of the time we have extraordinary problems and the existing scripts just let us rule things out.
Some of the time we have a new and novel problem, or even a crisis. In these situations we use DTrace to dig deep into the depths of the live kernel and pull out information we probably couldn't get any other way. This tends to be done with ad hoc hacked together scripts instead of anything more carefully developed; as we explore the problem we find questions to ask, write DTrace snippets to give us answers, and iterate this process. Often the questions we're asking (and the answers we're getting) are so specific to the current problem and our suspicions that there's no point in cleaning the resulting scripts up; they're the equivalent of one-off shell scripts and we'll almost certainly never use them again. DTrace is only one of the tools we use in these situations, of course, but it's an extremely valuable one and has let us understand deep issues (although not always solve them).
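For a taste of what these ad hoc snippets look like, here is a hypothetical example of the general sort of thing involved, a one-liner using the illumos nfsv3 provider to count NFS v3 reads and writes by client address over ten second intervals:

dtrace -n 'nfsv3:::op-read-start,nfsv3:::op-write-start
{ @[args[0]->ci_remote] = count(); }
tick-10sec { printa(@); trunc(@); }'

An actual investigation strings a series of these together, each one refining the question that the previous one's answer raised.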
(Some of the time an ad hoc tool seems useful enough to be turned into something more, even if it turns out that I basically never use it again.)
2016-07-27
When 'simple' DNS blocklists work well for you
I've written about how we can divide DNS blocklists into 'simple' and 'complex' ones, where simple DNSBLs basically list things based on them sending spam or other bad stuff without trying to do more complex things like assess how much legitimate traffic also comes from the source. To put it one way, if a DNSBL lists one of GMail's outgoing SMTP servers because it sent some spam, it's almost certainly a simple one. I also said that rejecting email based on a simple DNSBL isn't necessarily a mistake, so it's time to explain that.
Suppose that you have a mail system that generally receives a low volume of legitimate email; for example, you might be operating a personal email server. Suppose that you also start getting spam. Spammers almost never go away, so your spam volume is very likely to trend up over time and reach a point where most of your incoming email is spam. In this environment, a listing in a simple DNSBL is a fairly strong confirmation signal that this new email is really spam. It's much more likely that you're getting spam email from an IP that's been detected as spamming than that an innocent person has chosen to send you legitimate email from an IP that also sent spam and got listed in the DNSBL. The latter could happen, but the odds are low.
We've sort of seen this before. If the legitimate email rate is low and the DNSBL's 'false positive' rate on it is also low, the odds that a positive signal from the DNSBL means that an email is spam is very high. You can make the odds even higher by whitelisting known good sources.
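(To put made-up but plausible numbers on this: if 95 of every 100 incoming messages are spam, and the DNSBL has listed the sources of 80% of the spam but only 1% of the legitimate mail, then per 100 messages it flags 76 spam messages and 0.05 legitimate ones. A listing is wrong only about one time in 1,500.)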
(Of course anti-spam precautions aren't evaluated purely on percentages; the absolute number of legitimate messages blocked matters. Here the low volume helps, as there just aren't that many legitimate emails to get blocked.)
Similar logic can be applied to a lot of anti-spam heuristics; many
things look good when they're dealing with a stream of email that's
mostly or almost entirely spam. Block on bad EHLO greetings? Sure,
why not, especially since GMail and the other big people do generally
get those things right.
(GMail will send you spam too, of course, but statistically a new legitimate sender is much more likely to be using GMail or one of the other big places than an email server in the middle of nowhere. And yes, there are downsides to too many people adopting this sort of attitude to both heuristics and new mail sending machines in surprising places; ask anyone trying to send personal email from a new small home mail server and get it accepted by places.)
2016-07-25
An irritating systemd behavior when you tell it to reboot the system
For reasons well beyond the scope of this entry, I don't use a
graphical login program like gdm; I log in on the text console and
start X by hand through xinit (which is sometimes annoying). When I want to log out, I cause the
X server to exit and then log out of the text console as normal.
Now, I don't know how gdm et al handle session cleanup, but for me
this always leaves some processes lingering around that just haven't
gotten the message to give up.
(Common offenders are kio_http_cache_cleaner and speech-dispatcher
and its many friends. Speech-dispatcher is so irritating here that
I actually chmod 700 the binary on my office and home machines.)
Usually the reason I'm logging out of my regular session is to
reboot my machine, and this is where systemd gets irritating. Up
through at least the Fedora 24 version of systemd, when it starts
to reboot a machine and discovers lingering user processes still
running, it will wait for them to exit. And wait. And wait more,
for at least a minute and a half based on what I've seen printed.
Only after a long timer expires will systemd send them various
signals, ending in SIGKILL, and force them to exit.
(Based on reading manpages it seems that systemd sends user processes
no signals at all at the start of a system shutdown. Instead it
probably waits TimeoutStopSec, sends a SIGTERM, then waits
TimeoutStopSec again before sending a SIGKILL. If you have a
program that ignores everything short of SIGKILL, you're going
to be waiting two timeout intervals here.)
At one level, this is not crazy behavior. Services like database engines may take some time to shut down cleanly, and you do want them to shut down cleanly if possible, so having a relatively generous timeout is okay (and the timeout can be customized). In fact, having a service have to be force-killed is (or should be) an exceptional thing and means that something has gone badly wrong. Services are supposed to have orderly shutdown procedures.
But all of that is for system services and doesn't hold for user
session processes. For a start, user sessions generally don't have
a 'stop' operation that gets run explicitly; the implicit operation
is the SIGHUP that all the processes should have received as the
user logged out. Next, user sessions are anarchic. They can contain
anything, not just carefully set up daemons that are explicitly
designed to shut themselves down on demand. In fact, lingering user
processes are quite likely to be badly behaved. They're also
generally considered clearly less important than system services, so
there's no good reason to give them much grace period.
In theory systemd's behavior is perhaps justifiable. In practice, its generosity with user sessions simply serves to delay system reboots or shutdowns for irritatingly long amounts of time. This isn't a new issue with systemd (the Internet is full of complaints about it), but it's one that the systemd authors have let persist for years.
(I suspect the systemd authors probably feel that the existing ways to change this behavior away from the default are sufficient. My view is that defaults matter and should not be surprising.)
When I started writing this entry I expected it to just be a grump,
but in fact it looks like you can probably fix this behavior. The
default timeout for all user units can be set in /etc/systemd/user.conf
with the DefaultTimeoutStopSec setting; set this down to less
than 90 seconds and you'll get a much faster timeout. However I'm
not sure if systemd will try to terminate a user scope other than
during system shutdown, so it's possible that this setting will
have other side effects. I'm tempted to try it anyways, just because
it's so irritating when I slip up and forget to carefully kill
all of my lingering session processes before running reboot.
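(For concreteness, the sort of setting I mean is a user.conf or drop-in stanza like:

[Manager]
DefaultTimeoutStopSec=10s

with the ten seconds being merely an example value.)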
Update: I'm wrong. Setting things in user.conf does nothing
for the settings you get when you log in.
(You can also set KillUserProcesses in /etc/systemd/logind.conf,
but that definitely will have side effects you probably don't
want,
even if some people are trying to deal with them anyways.)
I should learn more about Grub2
I have a long-standing dislike of Grub2 (eg, and). Ever since I started having to deal with it I've felt that it's really overcomplicated, and this complexity makes it harder to deal with. There's a lot more to know and learn with Grub2 than there is with the original Grub, and I resent the added complexity for what I feel should be a relatively simple process.
You know what? The world doesn't care what I think. Grub2 is out
there and it's what (almost) everyone uses, whether or not I like
it. And recent events have shown me that
I don't know enough about how it works to really troubleshoot
problems with it. As a professional sysadmin, it behooves me to fix
this sort of a gap in my knowledge for the same reason that I
should fix my lack of knowledge about dpkg and apt.
I'm probably never going to learn enough to become an expert at
Grub 2 (among other things, I don't think there's anything we do
that requires that much expertise). Right now what I think I should
learn is twofold. First, the basic operating principles, things
like where Grub 2 stores various bits of itself, how it finds things,
and how it boots. Second, a general broad view of the features and
syntax it uses for grub.cfg files, to the point where I can read
through one and understand generally how it works and what it's
doing.
(I did a little bit of this at one point, but much of that knowledge has worn off.)
Unfortunately there's a third level I should also learn about. Grub2
configurations are so complex that they're actually mostly written
and updated by scripts like grub2-mkconfig. This means that if I
want to really control the contents of my grub.cfg on most systems,
I need to understand broadly what those scripts do and what they
get controlled by (and thus where they may go wrong). Since I don't
think this area is well documented, I expect it to be annoying and
thus probably the last bit that I tackle.
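On Fedora the broad mechanics are at least simple: grub2-mkconfig reads /etc/default/grub, runs the scripts in /etc/grub.d/, and writes the result out, canonically with:

grub2-mkconfig -o /boot/grub2/grub.cfg

(with a different output path on EFI systems). It's what those scripts do internally, and why, that's the under-documented part.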
(If I cared about building custom grub2 configurations, it should be the first area. But I don't really; I care a lot more about understanding what Grub2 is doing when it boots our machines.)
2016-07-24
My view on people who are assholes on the Internet
A long time ago, I hung around Usenet in various places. One of the things that certain Usenet groups featured was what we would today call trolls; people who were deliberately nasty and obnoxious. Sometimes they were nasty in their own newsgroups; sometimes they took gleeful joy in going out to troll other newsgroups full of innocents. Back in those days there were also sometimes gatherings of Usenet people so you could get to meet and know your fellow posters. One of the consistent themes that came out of these meetups was reports of 'oh, you know that guy? he's actually really nice and quiet in person, nothing like his persona on the net'. And in general, one of the things that some of these people said when they were called on their behavior was that they were just playing an asshole on Usenet; they weren't a real asshole, honest.
Back in those days I was younger and more foolish, and so I often at least partially bought into these excuses and reports. These days I have changed my views. Here, let me summarize them:
Even if you're only playing an asshole on the net, you're still an asshole.
It's simple. 'Playing an asshole on the net' requires being an asshole to people on the net, which is 'being an asshole' even if you're selective about it. Being a selective asshole, someone who's nasty to some people and nice to others, doesn't somehow magically make you not an asshole, although it may make you more pleasant for some people to deal with (and means that they can close their eyes to your behavior in other venues). It's certainly nicer to be an asshole only some of the time than all of the time, but it's even better if you're not an asshole at all.
This is not a new idea, of course. It's long been said that the true measure of someone's character is how they deal with people like waitresses and cashiers; if they're nasty to those people, they've got a streak of nastiness inside that may come out in other times and places. The Internet just provides another venue for that sort of thing.
In general, it's long since past time that we stopped pretending that people on the Internet aren't real people. What happens on the net is real to the people that it happens to, and nasty words hurt even if one can mostly brush off a certain amount of nasty words from strangers.
(See also, which is relevant to shoving nastiness in front of people on the grounds that they were in 'public'.)