Wandering Thoughts

2016-12-05

One advantage of 'self-hosted' languages

One of the things that people like to do with languages (and language runtime environments) is to make them 'self-hosted'. A self-hosted language is one where the compiler (or interpreter) is almost entirely written in the language itself, instead of being written in another language such as C.

I don't know all of the reasons that people have for self-hosting languages, since I've never participated in language development. But from an outsider's perspective, I can think of one fairly obvious reason to want to self-host your language, which is that it probably increases the number of people who can work on your compiler by reducing what they need to know.

To work on a language (or its runtime), you generally need to know the language itself, and obviously you need to know the language that the compiler or interpreter or runtime is written in. When language X is written in language Y, this means that you need to know both X and Y. When language X is written in itself, you only need to know X. And if you're interested in working on something involving language X, you probably already know the language.

(In theory you could imagine situations where people who know only language Y could improve the compiler for language X by working on internals with well-defined semantics, like symbol table handling or the like. In practice I think that the people who will be interested in doing such work in the first place are people who are interested in language X.)

Sidebar: The case of LLVM as the exception that proves the rule

LLVM is an increasingly popular compiler backend written in C++ that is used by a number of languages, for example Rust. Obviously this means that these languages aren't self-hosted and probably will never be; self-hosting would require them to duplicate on their own a significant amount of the time and effort that have been poured into LLVM.

That statement right there is, I think, a big reason why people use LLVM. When you use LLVM as your compiler backend, you get to tap into all of the work that other people have done on it (and will continue to do in the future). You get a free ride on a high quality compiler backend, and for some projects this free ride is definitely worth some narrowing of the pool of contributors to your language's front end.

What makes the LLVM situation work in your favour is that it's a shared backend and so attracts people to work on it who don't care about your language (and who probably don't know anything about it; they can work on the LLVM backend despite this because it has well-defined interfaces and APIs). An un-shared compiler or interpreter backend doesn't get this advantage.

programming/SelfHostingLanguageAdvantage written at 01:16:03

2016-12-04

Terminals are not enough (personal edition)

Over the years I've used any number of things at home to get access to the outside world, and as you'd expect I've developed some opinions on the whole area (to go with my sysadmin-side opinions). Because I love giving names to things despite not being very good at it, I'm going to divide all of this into four levels:

  • pure terminals not only depend completely on the Internet (or the outside world in general) but they restrict you to accessing it via a fixed sort of connection. The canonical terminal is a (dumb) serial terminal, where you type characters to the remote end and that's it. Chromebooks are sort of modern examples of this, at least in the version of them where you're restricted to the web and SSH.

  • terminals with local apps are still entirely dependent on the Internet (if your Internet connection goes away, they're paperweights), but you can run a wide variety of local clients to talk to things over the Internet in various ways. While you might think that this is not a big change from pure terminals, a terminal with local apps at least feels a bunch nicer and is often much more responsive and a richer experience.

    (Chromebooks with Android apps are going to mostly fall into this category.)

  • devices with local data let you have data locally so that you can still read something, listen to music, look at your notes, consult maps, or whatever when you're completely disconnected from the Internet. It's better than nothing, but you're mostly or entirely going to be a passive consumer of things while disconnected.

    My smartphone is essentially in this category today, because I saved a few free books and other things for offline use just in case (and why not), and of course people put music collections on them.

  • computers with local workspaces allow you to actively and more or less fully work when disconnected from the Internet. You don't just have local data and local storage, you have the tools you need to work on it much as you would if the Internet was available. You'll probably miss the Internet, but it's possible to be productive and get interesting things done; you can write code or blog entries or the like in a full-featured environment.

These levels are points on a continuum, especially now that so many resources even for things like programming are found online (eg, documentation). And certainly nowadays which level something goes in depends partly on how you use it, although some usages are more natural than others on various devices.

(An iPhone is an example of this. In theory you can probably use one at all levels, even up to the local workspaces level for certain sorts of work. But in practice it seems that it's mostly designed for something in the area between terminal with local apps and device with local data. If you use an iPhone without apps, you're missing out; if you try to do a lot of active work instead of passive consumption while totally disconnected, you're probably at least a bit out of the mainstream and not all apps will readily support that.)

For home use, I'm no longer willing to be completely dependent on the Internet and, while the local data approach is okay in a pinch, it's not what I want for my primary home machine. Even if I only rarely use the possibility of working locally when the Internet is completely out, I know myself and I'm much happier when I know it's at least a possibility.

In addition, there are times when I deliberately choose to do some work locally even when the Internet is available. Sometimes this is because of the interface issues covered last time around, but sometimes it's for other reasons: I just feel like doing things locally, or working locally makes a convenient way to separate experiments from upstream work, or I'm (only) going to use the results locally, as with private versions of apps like my syndication feed reader.

(Of course, I wouldn't buy any sort of laptop as my primary home machine, Chromebook or otherwise, because I value a good display, a good keyboard, and a good mouse much more than I value portability. As a secondary machine for taking places and using in emergencies if the main home machine explodes, Chromebooks have the useful property of being inexpensive, and in an emergency it's true that much of what I care about in day to day usage is on the Internet. But if money was not a consideration, even my 'carry places and use in emergencies' laptop would be in the full local workspace category and so probably not a Chromebook.)

tech/TerminalsVsOtherUsagePatterns written at 01:43:45

2016-12-03

One reason why rogue wireless access points are a bigger risk at universities

One of my opinions on rogue wireless access points is that they're a bigger risk at universities than they probably generally are at companies. Even if they were set up at exactly the same rate and in the same relative places in your network at a company and at a university, the university has it worse.

The problem for universities in specific is that so much of our building space is in practice open to the public. Sure, theoretically maybe it's only supposed to be used by university students, staff, professors, and officially approved visitors, but in practice there are no badge checks or other bureaucracy at building entrances. You can stroll right in and walk around in the corridors of almost any building, and those open corridors will generally get you fairly close to almost all space in our buildings.

(Specific office space, lab space, and so on is often closed off with locks and other access restrictions that you can't get past. But buildings themselves are rarely locked up during working hours.)

This makes a random WAP with a short range much more accessible to an outsider, especially a casually equipped one. You can basically go 'war-walking' around our buildings to see what turns up on your wireless device, and if you find anything interesting you can lean up against the wall while you poke at your phone, laptop, or whatever and no one is going to look at you twice.

(It's completely routine for me to find a group of people all sitting on the floor in a corridor, clustered around a laptop and some electronics. I believe they're Engineering undergrads working on projects but I don't exactly know for sure, and no one's likely to challenge yet another group that looks like that.)

My impression is that companies generally try for much more control of physical access to their space than this. Entrance to buildings and large areas inside buildings is actively controlled, with security and badges and so on, and as a result you can't easily just stroll in off the street and wander around. The practical effect is that a rogue WAP in a room somewhere inside the building is much less accessible to outsiders because they can't easily get close enough to it to use it (or even to know that it's there).

(It's probably not completely inaccessible to an attacker with an enhanced antenna and other gadgets. But at least the attacker has to work harder and get somewhat luckier, and you're at the point that they've decided to actively target you with what that implies.)

PS: The same physical access issues apply to any wired network drops that are in open areas or areas that are merely left accessible and unattended on a routine basis (such as meeting rooms, although those are often locked when not in use for various reasons). But generally the risks here are relatively clear to the people putting in the drops and connecting them up to whatever network they're connected to.

sysadmin/UniversityRogueWAPAccessProblem written at 00:10:45

2016-12-02

IPv6, point to point links, and subnet lengths

One of the things that my recent IPv6 work has given me is plenty of what we call 'learning experiences'. The latest one concerns a little detail of what I wrote earlier, where I casually said:

[...] I discovered my next configuration mistake, which was the subnet length on my IPv6 address configured on my DSL PPPoE link; for reasons lost in the depths of history, it had been configured with a /64 subnet length instead of being set to be a single IPv6 address. [...]

After I found and fixed that, at last everything worked [...]

That turned out to be a little bit optimistic.

In Linux IPv4 networking, you can definitely have the same IPv4 address attached to an Ethernet interface, with a /24 netmask, and to a point-to-point link, as a /32 single address. I have essentially this setup today on one machine with my IPSec tunnel. This works reliably and I've never had any problems with it.
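
As a concrete sketch of that IPv4 setup (the interface names and addresses here are illustrative, not my actual configuration):

# the same address on the Ethernet interface with a /24 netmask ...
ip addr add 192.168.10.5/24 dev eth0
# ... and on the point-to-point link, where the 'peer' form makes
# it a single host address pointing at the far end
ip addr add 192.168.10.5 peer 192.168.200.1 dev tun0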

As far as I can tell, this is not true of IPv6 on Linux. I believe that some of the time you can do this, because I'm pretty sure that I managed to do it at the time that I wrote that first entry. However, some of the time it appears that you can't, or at least I can't; if I have a given IPv6 address on an Ethernet interface as a /64 and I try to put it on a PPP interface as a /128, it quietly gets converted into a /64 (and then things explode, as before). This state seems to be at least somewhat sticky, in that I couldn't fix it with manual use of the ip command; instead I got various puzzling error messages (which I neglected to write down, because I was focused on solving the problem instead of writing a blog entry).

(I think it may have been 'RTNETLINK answers: Cannot assign requested address' when I was trying to delete the /64 IPv6 off the PPP link. In hindsight, maybe this meant that parts of the system thought it wasn't a /64 and parts felt otherwise.)

Since IPv6 addresses are extremely plentiful, my solution was simple; I just gave my inside Ethernet interface a different IPv6 address in my /64. This seems to have made everything happy, although it made me shuffle a few things around in my overall configuration; I changed Unbound to listen on this IPv6 address instead of the PPP one, and then I changed radvd to give out this address as the RDNSS address. My PPP link now has my 'router' IPv6 IP as a /128, my Ethernet has a /64, and things still do SLAAC and can talk to the world via IPv6.

(In thinking about it, possibly things would have worked just as well without changing the Unbound IPv6 address. After all, the 'router' IPv6 address is still perfectly reachable, it's just not the address associated with the Ethernet interface.)
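
Expressed as ip commands, that final layout is roughly the following sketch (with a documentation prefix standing in for my real /64, although enp7s0 is my real inside interface):

# the 'router' address goes on the PPP link as a bare /128
ip -6 addr add 2001:db8:1:7::1/128 dev ppp0
# the inside Ethernet gets a different address from the same /64
ip -6 addr add 2001:db8:1:7::2/64 dev enp7s0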

One of the things I've discovered as a result of this is that I don't actually understand how IPv6 interacts with point to point links. A conventional IPv4 PtP link is intrinsically unicast to the peer IP; there is nothing else there, so a netmask doesn't really make sense in general. This is clearly not how my IPv6 PPP link is working; for a start, ifconfig and ip don't list a peer IPv6 address (and the link has a link-local IPv6 address too).

In fact now I wonder if my PPP link needs to have a public IPv6 address associated with it at all, or whether it's enough to have my default IPv6 route pointed through it. In the old days I had to put my public IPv6 address on the PPP link because I wasn't putting it anywhere else, but that's not applicable now that I'm also putting an IPv6 address on my Ethernet interface.

(Some brief testing suggests that it doesn't need a public IPv6 address. So it may be that I have been doing this totally backwards from the start and I'm now very slowly and incrementally evolving my configuration towards what it should have been all along.)
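
A sketch of how one might test this (again with illustrative addresses; on a point-to-point link, a bare device route is enough to carry outbound traffic):

# drop the public /128 from the PPP link ...
ip -6 addr del 2001:db8:1:7::1/128 dev ppp0
# ... and rely on a plain device route for the IPv6 default
ip -6 route replace default dev ppp0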

I once did some reading about how IPv6 worked, but clearly not enough of it stuck in my head. I should probably do it again, although it's hard to feel motivated to take another slog at a large block of information that didn't stick well enough the first time around.

linux/IPv6PointToPointNetmask written at 00:34:43

2016-11-30

I suspect that lots of IPv6 hosts won't have reverse DNS

It's an article of faith that IPv4 hosts should mostly have valid reverse DNS and that good sysadmins (and people in general) should set this up for their hosts. While support for this is not exactly universal, it is reasonably common and many places have it. However, in light of my recent experiences with IPv6 I've come to believe that a significant number of IPv6 hosts will probably not have reverse DNS.

When I was thinking that IPv6 hosts would acquire addresses through DHCP(6), reverse DNS seemed as feasible for them as it does for IPv4 hosts that also use DHCP (although I had questions about what the names would be). It'd be somewhat more annoying to set up (because IPv6 PTR records are longer), but doable and even routine. But Android means that that's not really on, since Android (and ChromeOS on Chromebooks) does not support DHCP6 at all; instead, hosts will acquire and use IPv6 SLAAC addresses, which means at least a couple of addresses per host, spread randomly and relatively unpredictably over your IPv6 PTR space. Worse, an increasing number of IPv6 hosts are likely to use temporary addresses for privacy reasons.

Given SLAAC and temporary addresses, it seems that the only particularly feasible reverse DNS for such IPv6 /64s is completely bland generic reverse DNS. At that point, why bother? The reverse DNS is serving almost no meaningful purpose apart from, well, having reverse DNS, and presumably you need special DNS servers that don't try to materialize all of those records in memory, just fill them in from a template when queried. (This has implications for your secondary DNS servers.)

(It's possible that some form of dynamic DNS could sort of fix this, but dynamic DNS is its own additional layer of complexity and I don't know if IPv6 hosts politely tell you what SLAAC IPv6 addresses they have decided to use so that you can add them to DNS on the fly.)

At this point, if someone at my ISP offered to delegate the necessary reverse DNS zone for my IPv6 /64 to me so that I could have proper reverse DNS for it, I would just laugh. Setting up and maintaining even entirely generic IPv6 reverse DNS would be too much of a hassle. I might someday have reverse DNS for a few static IPv6 addresses for things like my home Linux machine, but not for all of those SLAAC and temporary addresses from things on my home wireless network. And I doubt I'm going to be alone here.

(Possibly this is already obvious to anyone who has ever done much with IPv6. As you can tell, I'm just getting my feet wet here, mostly by throwing myself in the water when random new pieces of hardware show up instead of any systematic process of exploration and education.)

PS: In a real IPv6 deployment I'd still do reverse DNS for static IPv6 addresses. I think that this is useful internally, for the obvious reason that meaningful names are easier to recognize than addresses. If you're looking at internal logs, 'connection from <host X>' is obviously easier to follow and use than 'connection from <IPv6 address>', so it's worth some effort to get it.

Sidebar: Why generic reverse DNS is not very useful internally

The purpose of names is to give short, recognizable labels to things. Dynamically generated reverse DNS can sometimes do this, but generally only when it's relatively sparse and you can have names like 'dhcp0-NNN.red.sandbox' or 'unregistered-NNN.red.sandbox'. When you have to densely populate the reverse DNS namespace and generate names like '76.103.17.172.red.sandbox', the names are not really helping you very much.

All forms of SLAAC addresses are going to be relatively randomly distributed over your IPv6 /64; that's why you have to give SLAAC an entire /64. To make sure every randomly chosen, randomly distributed address has reverse DNS, you must densely populate the reverse DNS namespace, which means that you are not going to wind up with meaningful short labels. The label will just tell you what IPv6 address it has.

You can use generic reverse DNS for IPv6 addresses to tell you what network the IPv6 address is on. In this case the meaningful content of the name is actually the '.red.sandbox' part and you might as well have the hostname be as compact an encoding of the last /64 of the IPv6 address as possible.
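
As an illustration, with the documentation prefix 2001:db8:1:7::/64 standing in for a real one and the hypothetical '.red.sandbox' domain from above, fully generic reverse DNS data looks something like this:

; the nibble-format reverse zone for 2001:db8:1:7::/64
$ORIGIN 7.0.0.0.1.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
; a synthesized PTR for 2001:db8:1:7::1234; the name merely
; re-encodes the address, so it tells you nothing new
4.3.2.1.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR ip6-0-0-0-1234.red.sandbox.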

sysadmin/IPv6LikelyMissingReverseDNS written at 22:27:51

Terminals are not enough (sysadmin version)

In a comment on yesterday's entry, Evaryont asked a good question:

Have you thought about the usecase of Chromebooks not as a fully independent machine, but something more akin to a thin-client remote access device? [...]

(The difference between thin clients and Chromebooks is important, but I'll get to that later.)

Let us call such devices 'terminals', because that is what they are. In the old days terminals worked over serial lines (and still might support windows); in the modern day the Chromebook has a browser and an SSH client to handle your remote access needs.

Using terminals sounds attractive for many system administrators (and some developers); after all, we work on servers anyways, often off in some distant datacenter or cloud provider. Periodically someone gives it a good try and for some people it can even work reasonably well. However, my view is that terminals are not enough in the long run because the interface they give you is limited and narrow and cannot match the richness possible with genuine local computing.

(I'll assume that we don't care about working when you have no network connection or the servers are unavailable. If this is an issue then a terminal is not an option in the first place.)

In the concrete, the interface you can deliver through a web browser doesn't match what you can do with a native client with full GUI access. Part of this is because of necessary browser limitations, but part of this is because the browser must work through a web server on the remote end; it can't easily sit directly interacting with the system and the rest of the things you're doing in the way that a graphical program working side by side with the rest of your session can. 'Emacs in a browser' is perhaps theoretically possible but is clearly much more awkward under the hood than 'emacs file'. The same is even more true of what you can do through a text terminal instead of a GUI.

(Remote desktop software doesn't solve this; instead it essentially turns your terminal into a display. What you really want is remote windows and even that is crucially dependent on a fast, low-latency network and a capable display and event protocol.)

More generally, a terminal is always going to make shared context harder and more awkward to create simply because that context is at the other end of things. Even with your terminal having multiple windows, you need a new login session or a new browser window or the like, and they have to be set up and established. I have an entire elaborate environment designed to make this really easy and it is still not as fluid as it is for local things (cf).

You can get a lot done through a terminal. It's not terrible. Much of the time you may not really notice what you're missing. But you are missing things, and every so often you'll run into the limits of what you can readily do by reaching through the narrow, constrained interface to the remote end that the terminal inevitably imposes on you simply by virtue of having a 'remote end'.

(This entry is purely from a work perspective for sysadmin work. For my personal use I have an additional separate set of views.)

PS: In the past I might have argued that terminals are also less customizable than true local computing. However, most people's local computing is getting less and less customizable all the time, so I no longer feel that this is a clearly winning argument in general. Something like my highly customized desktop environment and its special tools is clearly a far outlier today.

Sidebar: Thin clients versus terminals

The difference is simple: thin clients are really remote displays, not terminals. If the protocol is powerful and the connection fast enough, it's basically like being directly connected to the server. The obvious drawback is that it takes that powerful protocol and fast connection (and far more server resources). The less obvious drawback is that it's not at all obvious how to multiplex things so that you can sensibly connect to multiple servers at once.

Modern terminals like Chromebooks solve the multiplexing issue by being 'remote windows' instead of remote displays; each window or browser tab is a separate context and can be going to a different place. In Chromebooks et al, one of the drawbacks is that each browser 'remote window' is significantly less powerful than a full GUI program could be. This isn't a completely intrinsic drawback, as remote X over ssh sort of demonstrates.

(I believe that you can have a spirited discussion over whether any remote windowing protocol, X or otherwise, is up to the sophisticated demands of modern compositing OpenGL/3D interfaces with audio and synchronized double buffering and so on, or whether such remote protocols are intrinsically too slow and incapable of equaling local GUIs even for straightforward programs. I don't know enough about the state of the art to have an opinion.)

sysadmin/TerminalsAreNotEnough written at 02:08:37

2016-11-28

Some thoughts about options for light Unix laptops

I have an odd confession: sometimes I feel (irrationally) embarrassed that despite being a computer person, I don't have a laptop. Everyone else seems to have one, yet here I am, clearly behind the times, clinging to a desktop-only setup. At times like this I naturally wind up considering the issue of what laptop I might get if I was going to get one, and after my recent exposure to a Chromebook I've been thinking about this once again.

I'll never be someone who uses a laptop by itself as my only computer, so I'm not interested in a giant laptop with a giant display; giant displays are one of the things that the desktop is for. Based on my experiences so far I think that a roughly 13" laptop is at the sweet spot of a display that's big enough without things being too big, and I would like something that's nicely portable.

Synthesized from various sources, I seem to have three decent choices and a not so great one:

  • A Chromebook, either running Chrome OS or reinstalled with Linux. Even if I bought a larger SSD myself, Chromebooks appear to be clearly the cheapest way to get a light 13" laptop if I don't need particularly much CPU performance. At least one higher end Chromebook is available with a 3200x1800 'QHD+' display and has options for more than 4 GB of RAM (which could make it more useful as a Linux machine).

    A Chromebook is the low cost option but also the least useful as a standalone machine. As a ChromeOS machine it's probably mostly an Internet terminal (even once you add Android apps). Running Linux it would still be relatively slow and unlikely to be suitable for things like processing photos or doing much in the way of programming. Running a lighter weight Unix and desktop environment (FreeBSD or even OpenBSD) might help a bit, but compilers and photo editors and so on have the same CPU demands no matter what Unix they're running on.

    (According to Passmark, my vintage 2011 machine has a CPU that totally eclipses even the higher end Dell Chromebook 13 CPU. This is not really surprising given the big difference in TDP between my desktop CPU and the mobile CPUs that are going to go into a Chromebook.)

  • A Macbook of some sort. This is the obvious way to get a broadly Unix environment on a laptop that should just work without having to worry about power management, wireless chipsets, graphics support, and this and that and the other. Drawbacks include reports that the keyboard on recent Macbooks is not very nice (although I'm not sure that people are comparing it to, eg, Dell laptop keyboards). Advantages include that it would work well with my iPhone and I could get what is now my favorite Twitter client. Macbooks can be had with Retina displays if you pick the right model.

    This would be the largest change from my current environment. The Macbook command line environment may be Unix but the GUI and the programs I would use there are completely different (and I would use the GUI, because I like GUIs). There might be problems with conveniently using Yubikeys for SSH, too.

    A clear advantage of a Macbook is that it gets me access to the universe of Mac (GUI) software, including commercial software for things like RAW photo processing. I'm fond of my chosen photo processor, but for photography in specific using Linux is definitely taking the harder and generally not as good road.

    (You can argue about whether Macbooks really qualify as Unix machines. My answer is that they clearly do as far as command line usage is concerned and that covers a lot of what I want, and I could also get eg GNU Emacs and thus tools like Magit. My evolving views here do not fit in the margins of this entry.)

  • A (Windows) ultrabook (re)installed with Linux, such as the current Dell XPS 13 model or other similar machines. This is not the low cost option but I would get a pretty capable machine that ran my Linux laptop environment, had a good battery lifetime (although not as good as a Chromebook's), could be had with a high resolution display (beyond FHD), and so on. At least some ultrabooks apparently have more or less complete Linux support, making this the most obvious and straightforward option.

    It's not inexpensive, though. I'd be paying a fair bit to have a light 13" laptop that was merely a decently capable Linux machine (probably with a high resolution display, though, because if I'm going to splash out for an ultrabook I should go all the way to a nice screen).

    (As far as I can tell from Passmark, current higher end ultrabook CPUs approach but don't really pass my 2011 home machine's CPU. On the one hand I guess thermal limits are hard; on the other hand, it's impressive that they're almost managing what once took a 95 watt TDP in a 15 watt TDP power budget.)

The not so great option:

  • A budget Windows laptop reinstalled with Linux. I call this not so great because I've read that budget laptops are not as slim and light as ultrabooks (as well as not being as powerful). I'd wind up with a bigger, heavier, clunkier machine that worked reasonably well, one that was moderately more powerful than a Chromebook and had more memory and hopefully cost me not too much more. I don't think budget laptops have better than FHD displays and I wouldn't expect the keyboards, trackpads and so on to be as nice as on an ultrabook.

    This is the kind of option that I'd expect work to pick. It's not all that attractive if I'm spending my own money.

If I want a reasonably slim and light 13" laptop, my impression is that the first three (Chromebook, Macbook, ultrabook) are my only choices.

Any of these should handle light travel for conferences and the like, where my main interest is connecting back to 'home' to read email, write Wandering Thoughts entries, and so on. I'd expect only ultrabooks or a Macbook to handle more demanding travel situations such as processing RAW photos or letting me work on personal coding projects of any decent size. And if I was taking a laptop off to some sort of training where we were expected to spin up virtual machines or the like (as happened to a co-worker recently), a Chromebook is not suitable.

(Of course if I'm going off to training that needs a capable laptop, work should be providing it and it might need to run Windows anyways.)

When thinking about things like this, I find it useful to ask myself what I would do if money wasn't a consideration. Given what's available today I think the answer would be 'buy an ultrabook with a QHD+ display'. But if I was even really considering getting a laptop, I think the actual answer would be 'wait, because changes in laptops and Chromebooks are probably coming soon'.

My current view, biased by a long standing desire for high-resolution displays (more exactly high DPI displays), is that spending a bunch of money to get a machine with merely a FHD display seems like a waste. If I had to settle for a FHD display, it would be tempting to minimize the cost by going with a Chromebook. I'm sure I could do things like moderate Go coding even on a Chromebook; I've used slower machines in the past. Heck, my current work laptop is one of the lower end Thinkpad T60s.

(I'm not actually thinking of getting a laptop because I know perfectly well that feeling embarrassed about not having one is a silly reaction to have here. I don't have a laptop in large part for the simple reason that I don't have much use for one today; I don't travel and I almost never do work or other computer stuff outside of my office and home.)

unix/UnixLightLaptopOptions written at 23:10:30

Some impressions after a brief exposure to a Dell Chromebook 13

I've had a Dell Chromebook 13 hanging around here for the last few days and although I haven't used it too much, I still want to note down my initial impressions about it (while I still have the machine here) for various reasons beyond the scope of this entry.

My overall impression of the actual machine can be summed up as 'inoffensive'. It has a reasonably sized screen that looks good in casual tests, a keyboard that has not irritated me when I've typed on it, enough performance to do casual things without feeling laggy (including playing full-screen streaming video), and built-in sound that seems fine for casual listening if I'm right in front of the machine. I don't know how I feel about the trackpad but I expect it works basically like all modern trackpads work; my impression is that things like distinct physical buttons for the mouse buttons (especially three of them) are pretty much out on most machines. If I used the machine regularly I might want to get a Bluetooth mouse, but maybe not; I'd have to use it and see. Physically it's a 13" laptop but not a particularly bulky or heavy one; I could imagine carrying it around.

As far as Chrome OS goes, well, again, I have to score it as 'inoffensive'. It has overlapping windows if you ask it nicely, you can install uBlock Origin into the browser, and so on. Everything mostly works the way I expect it to in the windowing environment, even if Chrome OS appears to like using relatively small fonts (you can sort of change that). There is a certain amount of access to the underlying Unix environment through magic tricks like chrome://system and Ctrl + Alt + t to get a crosh terminal (and apparently more if you enable developer mode, cf). Google seems to have a bunch of help resources (as do other people) and once you go digging you can do a number of things with CrOS (although it has limits).

I'm not really the target audience for a standard CrOS laptop, but so far I've wound up feeling that I could use it to get things done. In practice what I use my work laptop for is mostly browsers and SSH sessions to places. CrOS has a browser, you can do SSH in a browser addon, and soon there will be Android applications. It likely wouldn't be as nice as my customized Cinnamon environment and I suspect that CrOS simply doesn't support PKCS#11 hardware tokens like Yubikeys (at least for SSH, it may support them for browser stuff per this CrOS answer). But if I needed a travel laptop for going to a conference or the like, I could do worse than a Chromebook and it would probably be okay.

(In general, Android apps being usable on Chromebooks will likely make them significantly more useful for people like me.)

PS: If I wanted to, I could apparently replace Chrome OS with Linux on many Chromebooks, the Dell Chromebook 13 included. I'd probably want to swap in a bigger SSD. I don't know if Linux can use the hardware as well as CrOS, though, so using Linux might mean giving up battery lifetime and so on.

(I wouldn't choose to buy a Dell Chromebook 13 for myself right now, but that's partly because various reading has led me to expect all sorts of general Chromebook hardware updates in the future to add various features expected by Android apps, like touch screens, various sorts of sensors, and so on.)

tech/DellChromebook13BriefViews written at 01:13:54

2016-11-27

The Chromebook login problem

I mentioned yesterday that I have a Chromebook hanging around here at the moment. It's for a relative so I haven't played with it very much, but I have poked at it enough to spot one obvious issue: the password you use to log into and unlock the Chromebook is your Google account's password. This is a potential problem because the properties you'd like from the two passwords are rather different.

My perception is that many people keep themselves logged in to their Google account. This means that they type the password infrequently, and may trust it to a password manager (and likely have a complex randomly generated password as a result). However, a machine password (Chromebook or otherwise) is something that you'll be typing relatively frequently in order to unlock the machine (probably at least multiple times a day); it's infeasible to use a complex randomly-generated password for this.

Whether you can change this is a frequently asked question and the answer appears to be 'no, that's how it's designed'. I can see three approaches to the problem. First, the obvious one that gets suggested frequently is to set up a separate 'Chromebook' Google account, one that's only used for the Chromebook (and for things that you're treating as an extension of it, like cloud storage). To get access to your real Google account, just log into it in the browser on the Chromebook and so on. I think that you lose some amount of automatic synchronization between the Chromebook and your regular Google account, but apparently this works in general.

The next approach that I've seen recommended is to switch to using an xkcd style password. These are likely secure enough while still being memorable and reasonably easily typed; you can probably rattle one off in a few seconds, which is not too annoying even on the routine basis of unlocking a computer.
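
As an illustration, on a typical Linux machine you can generate such a passphrase with something like the following; the word list path is an assumption, and shuf is not a cryptographically strong source of randomness, so treat this as a sketch rather than security advice:

# pick four random words from the system dictionary and join them
shuf -n 4 /usr/share/dict/words | paste -sd ' '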

A third possible option is to configure strong 2FA on your Google account and not worry too much about the strength of your password alone. I'm not sure how this interacts with logging into the Chromebook itself, but it will at least protect your Google account in general (and everyone recommends it if you're serious about the security of your account). If I was going to use a Chromebook and I cared about my Google account beyond the Chromebook itself, this is probably the approach I'd go with (possibly combined with an xkcd style password).

(I'd want to explore Google's 2FA options and how they combine first, though. You can apparently use a Yubikey (or any hardware token that supports U2F) as one of your 2FAs, but how does that work if you want to also authenticate from a device that can't talk to the Yubikey (such as an iPhone)? Can you have a second 2FA method so that either the Yubikey's U2F or the other 2FA method are sufficient?)

tech/ChromebookLoginProblem written at 02:19:05

2016-11-26

What I did to set up IPv6 on my wireless network so it really worked

A couple of months ago I hacked together an IPv6 configuration for my home wireless network, which I wrote up in What I did to set up IPv6 on my wireless network. Today I completely ripped that configuration apart and put something back together again that does IPv6 totally differently and almost certainly rather better. In the process I discovered all sorts of mistakes I'd made the first time around, mistakes that I'm going to document for your amusement and my education.

The direct cause of completely changing my IPv6 setup around is that I have a Chromebook hanging around here at the moment. When it failed to talk to my IPv6 DHCP server, I knew I had some work to do, especially when the reason for this turned out to be that Android and ChromeOS don't do DHCP6 at all. So I started trying to make SLAAC work and things started falling over in ways that were familiar to me from my first attempt, except that this time I kept digging because I didn't have a choice.

There were some semi-minor things that were broken. To start with, my Unbound DNS server wasn't listening on the fe80:: link-local address I'd told the DHCP6 server to tell clients about, and even if it had been, I had forgotten to tell it that it should allow clients in my IPv6 network block. As far as I can tell, Unbound silently ignores link-local addresses; I had to configure it to use my real IPv6 address (and to allow clients). This was actually a distraction, since all the clients were doing IPv6 lookups through the IPv4 DNS server IP that they were getting from plain (IPv4) DHCP, but when you're tcpdumping traffic and you see failed DNS lookups, well. Next, despite not acting as a router, my DSL router was still doing DHCP6. This can't have helped IPv6 life on my little network segment.

(I detected this through reports from radvd when I was fiddling its settings in an attempt to get things to work.)
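
To be concrete about the Unbound side of this, the fix involved a stanza along these lines (a sketch, with an illustrative address standing in for my real one):

server:
    # listen on the real global IPv6 address; Unbound seems to
    # silently ignore fe80:: link-local addresses here
    interface: 2001:db8:1:7::1
    # and explicitly allow queries from the local IPv6 network
    access-control: 2001:db8:1:7::/64 allow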

The big problem turned out to be me shooting myself in my foot by deciding in my first attempt to use a subset /68 network prefix instead of my full /64. It turns out that radvd's complaint about 'enp7s0 prefix length should be: 64' that I mentioned in the postscript in my first attempt actually meant 'SLAAC requires a /64 and you told me to advertise SLAAC, so that's not going to work too well here'. Only once I stumbled over a Cisco web page (of all things) that mentioned this in passing did the light dawn.

(It didn't help that I didn't realize that my original radvd configuration enabled SLAAC, by saying 'AdvAutonomous on'. Nothing actually did SLAAC with that configuration because of the /68 versus /64 issue, but it was there latent in my settings.)

Now, the thing about SLAAC is that machines doing SLAAC will use IPv6 addresses from all over your /64. So very soon I noticed that I had set the IPv6 subnet for my Ethernet interface to /68 to match my radvd (and DHCP6) settings, which no longer covered all of the IPv6 SLAAC addresses I was going to see. After widening that to /64, I discovered my next configuration mistake, which was the subnet length on my IPv6 address configured on my DSL PPPoE link; for reasons lost in the depths of history, it had been configured with a /64 subnet length instead of being set to be a single IPv6 address. This appears to have caused all inbound IPv6 traffic for anything except my Linux machine to just quietly get thrown away (I assume because of routing confusion).

(I believe this worked in the past because my /68 internal interface was a more specific route than my /64 PPPoE link. When I widened the internal interface to a /64, they became equal and things got broken.)

After I found and fixed that, at last everything worked for both iOS and Android (well, ChromeOS, but Android too). Everything got SLAAC addresses, hosts were now doing DNS lookups to me over IPv6 (and having it work), and the iOS device was still doing DHCP6. But with SLAAC working for everyone, all that my DHCP6 was really doing was giving the DHCP6-capable hosts another IPv6 address, which they might or might not even use. So I decided I didn't care about the ability to ping my iOS device over IPv6 and turned off dhcpd6.

(I wound up configuring RDNSS in radvd in order to get the Android side of things to theoretically be happy with life. The practical results of this seem mixed; iOS and an Android device will do IPv6 DNS lookups, but the Chromebook doesn't.)

I have mixed feelings about SLAAC IPv6 addresses, but life is what it is. If I want to support Android and ChromeOS devices using IPv6 on my home wireless network (and I do), I don't have a choice.

(If I do per-client IPv6 filtering, I guess I'll do it based on MAC instead of IPv6 address. Probably I'll decide it's too much work and just apply blanket rules to all IPv6 traffic to 'inside' machines.)
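
MAC-based filtering might look something like this ip6tables sketch (the MAC address is made up; enp7s0 is my real inside interface):

# pass forwarded IPv6 traffic from one specific, known client
ip6tables -A FORWARD -i enp7s0 -m mac --mac-source 00:16:3e:aa:bb:cc -j ACCEPT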

Sidebar: my new radvd.conf, more or less

interface enp7s0
{
    AdvSendAdvert on;

    # AdvManagedFlag makes hosts try DHCP6 for
    # additional addresses.
    AdvManagedFlag on;
    # Get other stuff from DHCP6 (too).
    # Implied by AdvManagedFlag; we're
    # being explicit.
    # See RFC 2462 section 5.2
    AdvOtherConfigFlag on;

    prefix 2001:1928:1:7:f000::/64
    {
        AdvOnLink on;
        # Enable SLAAC
        AdvAutonomous on;
    };

    RDNSS 2001:1928:1:7:f000::[...]
    {
    };
};

In theory I could turn off AdvManagedFlag and AdvOtherConfigFlag. I'm keeping them in (for now) to reduce the configuration changes needed if I decide to re-enable DHCP6.

linux/QuickWirelessIPv6SetupII written at 01:54:32
