Wandering Thoughts archives


What the Linux rcu_nocbs kernel argument does (and my Ryzen issues again)

It turns out that my Linux Ryzen kernel hangs appear to be a known bug or issue (Ubuntu, Fedora, kernel); more fortunately, people have found magic incantations that appear to work around the issue. Part of the magic is some kernel command line arguments, usually cited as:

rcu_nocbs=0-N processor.max_cstate=1

(where N is the number of CPUs you have minus one.)
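To make the arithmetic concrete, here's a trivial sketch that prints the argument string for whatever machine it runs on (using Python's os.cpu_count(), which reports the number of CPUs the OS sees):

```python
import os

# The kernel wants the highest CPU index, which is the CPU count minus one.
n = os.cpu_count() - 1
args = f"rcu_nocbs=0-{n} processor.max_cstate=1"
print(args)  # e.g. "rcu_nocbs=0-15 processor.max_cstate=1" on a 16-CPU machine
```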

Magic incantations that I don't understand bug me, especially when they seem to be essential to keeping my system from hanging, so I had to go digging.

What processor.max_cstate does is relatively straightforward. As briefly mentioned in kernel-parameters.txt, it limits the C-states (also, also) that Linux will allow processors to go into. Limiting the CPU to C1 at most doesn't allow for much idling and power saving; it might be safe to go as far as C5, since the usual additional advice is to disable C6 in the BIOS (if your BIOS supports doing this). On the other hand, I don't know if Ryzens do anything between C1 and C6.

The rcu_nocbs parameter is more involved (and mysterious). To more or less understand it, we need to start with Read-Copy-Update (RCU) (also Wikipedia). To simplify, RCU handles updates to shared data structures by setting up a new version of the data structure, changing a master location to point to it instead of the old version, and then waiting for everyone to have passed a synchronization point where they're guaranteed to be using the new version instead of the old version. At that point you know the old version is unused and you can free it.
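To make the idea concrete, here is a toy Python sketch of the publish-and-swap part of the pattern (this is not real RCU; in particular, Python's garbage collector stands in for the 'wait for a grace period, then free the old version' step, and the class name is my own invention):

```python
import threading

class RCUCell:
    """Toy illustration of the RCU publish pattern: readers grab a reference
    to the current snapshot with no locking; a writer builds a complete new
    version and then swaps the pointer, so readers never see a half-updated
    structure."""

    def __init__(self, data):
        self._data = data              # the current published version
        self._lock = threading.Lock()  # serializes writers only

    def read(self):
        # Readers just take the current reference; no locking needed.
        return self._data

    def update(self, fn):
        with self._lock:
            old = self._data
            new = fn(dict(old))  # copy the old version, modify the copy
            self._data = new     # "publish" the new version
        # Real RCU would now wait for a grace period and then free `old`;
        # here Python's garbage collection handles reclamation.
        return old

cell = RCUCell({"version": 1})
snapshot = cell.read()                       # a reader holding the old version
cell.update(lambda d: {**d, "version": 2})   # a writer publishes a new one
assert snapshot["version"] == 1              # old readers keep what they had
assert cell.read()["version"] == 2           # new readers see the new version
```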

The Linux kernel's main RCU code handles the RCU algorithm for you but it doesn't know how to free up your data structures. For that it relies on RCU callbacks that you give it; when RCU determines that the old version of your data structure can be disposed of, it will invoke your callback to do this. Normally, RCU callbacks are invoked in interrupt context as part of software interrupt (softirq) handling. Various people didn't like this because softirqs preempt whatever happens to be running at the time whenever an appropriate interrupt happens, so people came up with an alternate approach of having these potentially quite time-consuming RCU callbacks handled by regularly scheduled kernel threads instead. This is said to 'offload' RCU callbacks to these threads. Each offloaded CPU gets its own set of RCU offload kernel threads, but these kernel threads can run on any CPU, not just the CPU they're offloading.

This is what rcu_nocbs controls; it's a list of the CPUs in your system that should have their RCU callbacks offloaded to threads. Normally, people use it to fence off a few CPUs from the random interruptions of softirq RCU callbacks.

(See here and here for more information and details.)
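The argument takes a kernel-style CPU list ('0-3', '0-3,8,10-11', and so on). A simplified parser makes the format concrete (the real kernel parser also accepts stride syntax like '0-7:2', which I'm ignoring here):

```python
def parse_cpu_list(spec):
    """Parse a kernel-style CPU list such as '0-3,8,10-11' into a set of
    CPU numbers; this is the format that rcu_nocbs= takes."""
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

assert parse_cpu_list("0-3,8") == {0, 1, 2, 3, 8}
```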

However, the rcu_nocbs=0-N setting we're using specifies all CPUs, so it shifts all RCU callbacks from softirq context during interrupt handling (on whatever specific CPU is involved) to kernel threads (on any CPU). As far as I can see, this has two potentially significant effects, given that Matt Dillon of DragonFly BSD has reported an issue with IRETQ that completely stops a CPU under some circumstances. First, our Ryzen CPUs will spend less time in interrupt handling, possibly much less time, which may narrow any timing window required to hit Matt Dillon's issue. Second, RCU callback processing will continue even if a CPU stops responding to IPIs, although I expect that a CPU not responding to IPIs is going to cause the Linux kernel various other sorts of heartburn.

(Unfortunately, Matt Dillon's issue doesn't correspond well with the observed symptoms, where Ryzens hang under Linux not while busy but while idle. My kernel stack backtraces do suggest that at least one CPU is spinning waiting for its IPI to other CPUs to be fully acknowledged, though, so perhaps there is a related problem. Perhaps there are even several problems.)

KernelRcuNocbsMeaning written at 23:54:55


Doing something when a Cinnamon-based laptop suspends or hibernates

I've used encrypted SSH keys for some time. To make this tolerable (and even convenient), I load the keys into ssh-agent. On my desktop, I make this more secure by automatically flushing the keys when I lock the screen (details here). To get a similar effect on my laptop, I want to flush the keys before it suspends (suspending the laptop is roughly the equivalent of locking the screen on a desktop; it's almost always what I do before I walk away from it). I also want to force-close any lingering SSH connection masters, because it's pretty likely that my network connection will be different when I un-suspend the laptop (and in any case, the server end will probably have timed out).

In the beginning I did this by hand with a shell script or two. I usually remembered to run it before I suspended (my custom Cinnamon environment made it not too hard), but not always, and it was kind of a pain in general. Then I found out that it's possible to hook into the modern suspend process to automate this.

The important magic is that there is a standard freedesktop DBus signal that is emitted when your modern DBus and systemd-enabled system is about to suspend or hibernate itself. The DBus details are covered in this unix.se answer (via), and I simply copied and modified the Python code from David Newgas' av program to make something I call presusp.py. My version does not have a start() action to do things after the laptop un-suspends, and its shutdown() action simply runs my shell scripts that drop SSH keys and clean up shared SSH sessions. If I used my Yubikey more on Cinnamon (which is possible), I'd also run my script to drop the Yubikey from ssh-agent (covered here).
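Stripped of the DBus wiring (which needs a DBus library and is the part I copied from av), the shape of the handler is simple. A sketch, with the command names as stand-ins for my actual scripts (the second one is a made-up name):

```python
import subprocess

# Stand-ins for my real cleanup shell scripts.
CLEANUP_COMMANDS = [
    ["ssh-add", "-D"],        # flush all keys from ssh-agent
    ["cleanup-ssh-masters"],  # hypothetical script: close SSH connection masters
]

def on_prepare_for_sleep(going_to_sleep, run=subprocess.run):
    """Handler for logind's PrepareForSleep(b) DBus signal. logind emits it
    with True just before suspend/hibernate and False after resume; like my
    presusp.py, this version only acts on the suspend side."""
    if not going_to_sleep:
        return []
    return [run(cmd) for cmd in CLEANUP_COMMANDS]
```

The real program subscribes to org.freedesktop.login1.Manager's PrepareForSleep signal on the system bus and calls something shaped like this when it fires.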

Because presusp.py is directly tied to my login session, I just start it in a shell script I already have that does various things to set up my Cinnamon session. This also terminates it when I log out, although usually if I'm going to log out I just power down the laptop.

(Logging out and then back in again has been somewhat flaky under Cinnamon for me.)

PS: According to the logind DBus API there's also a signal emitted before screen locking, although it comes from your session instead of the overall manager. If I cared enough, I could presumably hack up my presusp.py to flush keys even on screen lock. Right now this is too complicated for me to bother with, since I rarely lock the screen on my laptop and step away from it.

Sidebar: Restoring keys to ssh-agent after an unsuspend

Unlike on my desktop, I don't try to automatically re-add my encrypted keys to ssh-agent when the laptop un-suspends. Instead I have my .ssh/config set up with 'AddKeysToAgent yes', so that the first time I ssh somewhere it automatically adds the keys to the agent after prompting me to unlock and use them. This is a bit less convenient than on my desktop but it works well enough under the circumstances. It helps that I don't try to do fancy things with remote X clients on the laptop; mostly what I do is SSH logins in terminals.
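For reference, the relevant piece of my .ssh/config looks something like this (the ControlMaster lines are illustrative of my connection-sharing setup, and the ControlPath and ControlPersist values are just examples):

```
Host *
    # Automatically load keys into ssh-agent the first time they're used.
    AddKeysToAgent yes
    # Connection sharing; these masters are what gets cleaned up pre-suspend.
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```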

CinnamonActOnLaptopSuspend written at 00:50:48


A recent performance surprise with X on my Fedora Linux desktop

As I discussed yesterday, on Monday I 'upgraded' my office workstation by transplanting my system disks and data disks from my old office hardware to my new office hardware. When I turned the new hardware on, my Fedora install booted right up pretty much exactly as it had been (some advance planning made networking work out), I logged in on the text console, started up X, and didn't think twice about it. Modern X is basically configuration free and anyway, both the old and the new hardware had Radeon cards (and the same connectors for my monitors, so my dual-screen setup wouldn't get scrambled). I even ran various OpenGL test programs to exercise the new card and see if it would die under far more demanding load than I expected to ever put on it.

(This wound up leading to some lockups.)

All of this sounds perfectly ordinary, but actually I left out an important detail that I only discovered yesterday. My old graphics card is a Radeon HD 5450, which uses the X radeon driver. My new graphics card is a Radeon RX 550, but things have changed since 2011 so it uses the more modern amdgpu driver. And I didn't have the amdgpu driver installed in my Fedora setup (like most X drivers, it's in a separate RPM of its own), so the X server was using neither the amdgpu driver (which it didn't have) nor the radeon driver (which doesn't support the RX 550).

The first surprise is that X worked anyways and I didn't notice anything particularly wrong or off about my X session. Everything worked and was as responsive as I expected, and the OpenGL tests I ran seemed to go acceptably fast (as did a full-screen video). In retrospect there were a few oddities that I noticed as I was trying things due to my system hangs (xdriinfo reported no direct rendering and vdpauinfo spat out odd errors, for example), but there was nothing obvious (and glxinfo reported plausible things).

The second surprise is what X was actually using to drive the display, which turns out to be something called the modesetting driver. This driver is a quite basic one that relies on kernel mode setting but is otherwise more or less unaccelerated. Well, sort of, because modesetting was apparently using glamor to outsource some rendering to OpenGL, in case you have hardware accelerated OpenGL, which I think I did in this setup. I'm left unsure of how much hardware acceleration I was getting; maybe my CPU was rendering 24-bit colour across two 1920x1200 LCDs without me noticing, or maybe a bunch of it was actually hardware accelerated even with a generic X driver.

(There is a tangled web of packages here. I believe that the open source AMD OpenGL code is part of the general Mesa packages, so it's always installed if you have Mesa present. But I don't know if the Mesa code requires the X server to have an accelerated driver, or if a kernel driver is good enough.)

PS: Kernel mode setting was available because the kernel also has an amdgpu driver module that's part of the DRM system. That module is in the general kernel-modules package, so it's installed on all machines and automatically loaded whenever the PCI IDs match.

PPS: Given that I had system lockups before I installed the X server amdgpu driver, the Fedora and freedesktop bugs are really a kernel bug in the amdgpu kernel driver. Perhaps this is unsurprising and already known.

XBasicDriverPerfSurprise written at 00:54:49


My new Ryzen desktop is causing Linux to hang (and it's frustrating)

Normally I try to stick to a sunny tone here. Today is an unfortunate exception, since it's a problem that I'm very close to and that I have no solution for.

Last Friday, I assembled the hardware for my new office workstation, updated the BIOS, and over the weekend let it sit turned on with a scratch Fedora install and periodically doing some burnin tests, like mprime. Everything went fine. Normally I would probably have let the assembled machine sit for a while and do additional burnin tests before doing anything more, but over the weekend my current (old) workstation showed worrying signs of more flakiness, so on Monday I swapped my disks over to the new Ryzen-based hardware. Everything came up quite easily and it all looked good (and clearly faster), right up until the machine started locking up. At first I thought I had a culprit in the amdgpu kernel driver used by the new machine's Radeon RX 550 based graphics card, and I turned up a Fedora bug with a workaround. Unfortunately that doesn't appear to be a complete fix, because the machine has hung several times since then. For the latest hangs I've had netconsole enabled, and I've actually gotten output; unfortunately this has just made it more frustrating, because it is just a steady stream of 'watchdog: BUG: soft lockup - CPU#4 stuck for 23s!' reports.

(These reports are interesting in a way, because apparently the system is not so stuck that it cannot increment the timer. However, it is so stuck that it doesn't respond to the network or to the console, and it doesn't seem to notice a console Magic SysRq.)
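(For the record, netconsole is enabled with a module parameter or kernel argument of the form below; the ports, addresses, device name, and MAC here are all placeholders, not my actual values.)

```
# netconsole=[src-port]@[src-ip]/[dev],[dst-port]@<dst-ip>/[dst-mac]
netconsole=6666@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55
```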

In both sets of netconsole traces I've collected so far, I see things running through cross-CPU communication and often TLB stuff in general and specifically native_flush_tlb_others. For example:

Call Trace:
 ? __vma_rb_erase+0x1f1/0x270

This is interesting because of an old reddit post that blames this on 'Core C6 State', and there's also this bug report. On the other hand, the machine sat idle all weekend and didn't hang; in fact, it would have been more idle on the weekend than it was when it hung recently. However I'm into grasping at straws here.

(There's also this Ubuntu bug report, which has a long discussion of tangled and complicated options to work around things, and this Fedora one. Probably there are others.)

The sensible thing to do right now is probably to swap my disks back into my old hardware until I have more time to deal with the problem (I have to get things stabilized tomorrow). But it is tempting to grasp at a number of straws:

  • swap the RX 550 card out for the very basic card in my old machine. This should completely eliminate both amdgpu and the new GPU hardware itself as a source of issues.

  • switch back to a kernel before CONFIG_RETPOLINE, because I use a number of out-of-tree modules and I've noticed their build process muttering about my gcc not having the needed support for this. I'm using the latest Fedora released gcc and you'd hope that that would be good enough, but I have no idea what's going on.

  • go through the BIOS to turn off 'Core C6 State' and any other fancy tuning options (and verify that it hasn't decided to silently turn on some theoretically mild and harmless automatic overclocking options). It's possible that the BIOS is deciding to do things that Linux objects to, although I don't know why it would have started to fail only once I swapped disks around. (The paranoid person wonders about UEFI versus MBR booting, but I'm not sure I'm that paranoid.)

(If I did all of this and the machine hung anyway, well, I'd be able to swap my disks back into my old desktop with no regrets.)

In the longer term, troubleshooting this and reporting any issues is probably going to be quite complicated. One of the problems is that I absolutely have to have one out-of-kernel module (ZFS on Linux) and I very much want another one (WireGuard). I suspect that the presence of these will cause any bug reports to be rejected more or less out of hand. In an ideal world this problem will reproduce itself on a scratch Fedora install with a stock kernel environment that's doing things like running graphics stress programs, but I'm not going to hold my breath on that. It seems quite possible that it will only happen if I'm actually using the machine, which has all sorts of problems.

(I have one complicated idea but it is complicated and rather annoying.)

The whole thing is frustrating and puzzling. We have a stable Ubuntu machine with a Ryzen 1800X and the same motherboard (but a different GPU), and this machine itself seemed fine right up until I swapped in my existing disks. Even post-swap it was perfectly fine with a six+ hour mprime -t run last night. But if I use it, it hangs sooner or later, and it now seems to be hanging even when I don't use it.

(And it appears that this motherboard doesn't have a hardware watchdog timer that's currently supported by Linux. I tried enabling the software watchdog, but it didn't trigger for literally hours, and then when it did, it apparently didn't manage to actually reboot the system, which is perhaps not too surprising under the circumstances.)

PS: This does put a rather large crimp in my Ryzen temptation, especially if this is something systemic and widespread.

Sidebar: It's possible that I've had multiple issues

I may have hit both an amdgpu issue with Radeon RX 550s, which I've now mitigated, and some sort of issue with the BIOS putting a chunk of the machine to sleep and then Linux not being able to wake it up again. My initial hangs definitely happened while I was in front of the machine actively using it, but I believe that the hangs since I set amdgpu.dpm=0 this morning have been when I wasn't around the machine and it was at least partially idle. These are the only hangs that I have netconsole logs for, too, and they show that the machine is partially alive instead of totally hung.

RyzenMachineLinuxHangs written at 02:19:28


Some plans for migrating my workstation from MBR booting to UEFI

Today I finally assembled the machine that is to be my new office workstation, although I wasn't anywhere near brave enough to change over to using it on a Friday. For the most part I expect the changeover to be pretty straightforward, because I'm just going to transplant all of my current disks into it (and then I'll have to make sure the network interface gets the right name, although I have a plan). However, I want to switch my Fedora install from its current MBR based booting to UEFI booting (probably with Secure Boot turned on), because it seems pretty clear that UEFI booting is the way of the future (my toe-stubbing notwithstanding).

My current set of SSD root disks are partitioned with GPT and have an EFI system partition, although the partition is unused and unmounted; while I have a /boot/efi directory with some stuff in it, that's part of /boot and the root filesystem. In order to switch over to UEFI booting, I think I need to arrange for the EFI system partition to take over /boot/efi, get populated with the necessary contents (whatever they are), and then set up a UEFI boot entry or two. But the devil is in the details.

Currently I have the machine set up with a Fedora 27 install on a scratch disk, so I can run burnin tests and similar things. This helpfully gives me a live Linux that I can poke around on and save things from, although it also means that I have pre-existing UEFI boot entries that are going to become invalid the moment I remove the scratch disk.

So I think that what I want to do is something like this:

  1. Copy all of /boot/efi from the current scratch install to an accessible place, so I can look at it easily in case I need to.
  2. Make sure that I have all of the necessary and relevant EFI packages installed on my machine. Based on the scratch install's package list, I'm definitely missing some (eg grub2-efi-x64), but I'm not sure if all of them are essential (eg mactel-boot). I might as well install everything, though.

    (It appears that all files in /boot/efi on my scratch install are owned by RPMs, which is what I'd hope.)

  3. Turn off Secure Boot in the new machine's BIOS and enable MBR booting. My motto is 'one problem at a time', so I'd like to move my disks over to the new machine and sort out the inevitable problems without also having to wrangle the UEFI boot transition.

  4. Figure out the right magic way to format my existing EFI system partition in the EFI-proper way, because apparently I never did that. Naturally, Arch's wiki has good information.
  5. Mount the EFI system partition somewhere, copy all of my current /boot/efi to it, and shuffle things around so it becomes /boot/efi.
  6. Either copy my current grub.cfg to /boot/efi/EFI/fedora and edit it up or try to (re)generate a completely new one through some magic command. Probably I should start with what grub2-mkconfig produces when run from scratch and see how different it is from my current one (hopefully it can be persuaded to produce an EFI-based configuration even though my system was MBR booted).

  7. Set up a UEFI boot entry for Fedora on my EFI system partition. As I found out, this should boot shimx64.efi. I'm inclined to try to do this by modifying the existing 'Fedora' boot entry instead of deleting it and recreating it; in theory probably the only thing that needs to change is the partition GUID for the EFI system partition.
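If it helps, steps 4 and 7 boil down to something like the following; the device names are placeholders and this is a sketch of the shape of the commands, not something to run verbatim:

```
# step 4: format the EFI system partition as FAT32 (/dev/sdXn is a placeholder)
mkfs.fat -F 32 /dev/sdXn

# step 7: create a UEFI boot entry pointing at shim
efibootmgr --create --disk /dev/sdX --part 1 \
    --loader '\EFI\fedora\shimx64.efi' --label Fedora
```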

Assuming that I got everything right, at this point I should be able to boot my machine through UEFI instead of through MBR booting. I'm not sure if the motherboard's BIOS defaults to UEFI boot or if I'll have to use the boot menu, but either way I can test it. If the UEFI boot works I can turn on Secure Boot, at which point I will be UEFI-only.

I think the most likely failure point is getting a working UEFI grub.cfg. Grub2-mkconfig somehow knows whether or not you're using UEFI (and there doesn't seem to be any command line option to control this), plus my current grub.cfg is fairly different from what the Fedora 27 grub2-mkconfig generates in things like what grub2 modules get loaded. Perhaps this is a good time to figure out and write down what sort of changes I want to make to the stock grub2-mkconfig result, or perhaps I should just abandon having a custom version in general.

(I don't think my custom version is doing anything vital; it's just got a look I'm used to. And I could test a non-custom version on my current machine.)

I have two system SSDs, so I have an EFI system partition on the second SSD as well. In the long run I should set up an EFI boot environment on it as well, but that can definitely wait until I have UEFI booting working in general. I'll also need to worry about keeping at least grub.cfg in sync between the two copies. Or maybe I should just stick to having the basic EFI shell and Grub2 boot environment present, and assume that if I have to boot off the second SSD I'll need to glue things together by hand.

(My root filesystem is mirrored, but I obviously can't do that with the EFI system partition. Yes, technically I might be able to get away with it with the right choice of software RAID superblock, but no, I'm not even going to try to go there.)

MBRToUEFIBootChallenge written at 02:04:38


Linux's glibc monoculture is not a bad thing

Given yesterday's entry about glibc and the Linux API, it's hard to avoid concluding that glibc is basically a monoculture in Linux, with no real competition or alternative. Initially I expected to be unhappy about this since I have a reflexive dislike of monocultures, but after thinking about it more, I think that glibc's monoculture is actually not a bad thing. In fact, I'll go so far as to say that it's how open source should work.

Okay, I confess that I deliberately wrote that to be a bit over the top. So I'll phrase the same situation differently: not having other projects pointlessly trying to compete with glibc is how open source should work (but often doesn't).

If you've been around the open source world for a while, you've seen more than one project that was apparently started because people couldn't just pick one thing to focus all their effort on but had to find some small difference that 'justified' a whole second project with a whole second set of work being done, to very little overall net gain. Over and over again the open source world has N versions of substantial projects that are essentially the same. Oh, sure, there's almost always some reason, but if you take a few steps back and look at the overall pattern, it sure feels like Not Invented Here syndrome is at work at least a fair bit of the time.

(Okay, there's also politics, personalities, and sometimes licensing. Linux has never been an antiseptic and purely technical place, not that the BSDs have been either.)

Given that Linux has several graphical desktops (with libraries and a collection of programs) and religious wars over init systems, there's no obvious reason why we wouldn't also have multiple competing standard C libraries. Instead, we basically have glibc and then a number of other libcs with different goals that are explicitly not trying to be direct replacements or competitors with glibc. For once, almost all of the development effort that's going to a crucial layer of Linux is working on a single project, not being spread over a bunch of NIH competitors.

At the same time, Linux is still open to alternatives and replacements for glibc. For example, the kernel developers haven't said that glibc is the only supported standard C library or ignored backwards compatibility with user space if glibc could paper over the cracks. This is undoubtedly helped by the existence and popularity of real alternatives to glibc like musl and Bionic. If you want (or need) something significantly different to glibc, Linux will still let you get it.

(And things that are intended to be portable outside of Linux will probably accept patches to build with musl and so on.)

As long as glibc development doesn't stagnate or wind up going in bad technical directions, a monoculture around it certainly doesn't seem to be a bad thing. So on the whole I'm happy to see Linux avoiding duplicate work for once.

(At the same time I'll note that glibc is not flawless and yes, I'd like it to be better. Also, in the past there were serious issues with glibc development and some of the people involved in it; see eg here. But back then there was a serious glibc alternative and some Linux distributions actively used it. Both parts of that situation could happen again.)

GlibcMonocultureNotBad written at 23:01:25

Glibc and the Linux API

A while back, I wrote an entry on the overall issue of whether the C runtime and standard library are a legitimate part of the Unix API. This was sparked by Go, which tries to avoid the C library on all platforms, Linux included, and where this helped lead to a fun problem. Linux is an unusual Unix in this regard. Many Unixes are tightly coupled to their standard C library, whether or not they officially documented it as the API to the system in general the way Solaris does. OpenBSD, FreeBSD, and so on all have a definitive standard C library that is part of their base operating system alongside the kernel (and their standard C compiler); the kernel and this library are developed in sync by the same people. Linux doesn't have an officially blessed or co-developed standard C library in this way; instead it theoretically supports multiple ones. You're supposed to be able to build a Linux system with musl or uClibc-ng or any of the other options out there.

In practice, my understanding is that it's sufficiently hard to replace glibc that no large distribution attempts to do so (although Alpine uses musl and I've heard of it). A certain amount of software doesn't even build when compiled against non-glibc header files, and past that other libc implementations don't necessarily attempt to be exactly compatible (eg, musl's list of differences, and see also). In addition, alternate libcs generally seem to be aimed at being smaller and simpler than glibc, and as part of that they can leave out significant features like nsswitch.conf.

(As a system administrator I'm biased, but yes I consider nsswitch.conf to be a significant feature. It provides significant sysadmin control over a number of important operations.)

Or in short, while Linux officially doesn't have a 'standard C library', in practice no one is attempting to provide a full alternative or replacement to glibc. Glibc is not merely the default choice, it is the only full choice. As a result, whatever API glibc decides to supply for something is the de facto Linux standard. This is especially the case if glibc is putting a wrapper around some new Linux kernel feature. As we saw in marcan's article, there may be undocumented requirements that glibc has reverse engineered but you don't necessarily know.

(It's not clear to me how thoroughly the Linux kernel API is documented, but the man-pages project does appear to cover the actual system calls.)
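As a small illustration of the C library's position as the usual gateway to kernel system calls, even a quick ctypes experiment goes through its wrappers rather than hitting the kernel directly (this assumes a typical glibc-based Linux, although the same trick works with other libcs):

```python
import ctypes
import os

# Load the process's own C library (glibc on a typical Linux system).
libc = ctypes.CDLL(None)

# getpid() here calls the C library's wrapper around the kernel's getpid
# system call; CPython's os.getpid() bottoms out in the same place.
assert libc.getpid() == os.getpid()
```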

Also, I believe that there are any number of significant projects that consider glibc to basically be part of the overall Linux API and GNU (libc) extensions to be just as potentially legitimate a standard as anything else. These projects are obviously dependent on glibc (or a feature for feature compatible replacement). In 2014, one such project was systemd, per Lennart Poettering's comments, and I suspect that systemd's views have not shifted since.

(As a practical matter, things like Firefox are not tested, developed, or officially supported against alternate libcs any more than they're tested with unusual C/C++ compilers. If you decide to use them that way anyway, you get to keep all of the many pieces that you're likely to wind up with.)

I have some thoughts on this overall situation, but that's going to have to be another entry since this one is already long and rambling enough.

GlibcAndLinuxAPI written at 02:57:56


A brief review of the Dell XPS 13 as a Fedora laptop

For years, my work laptop was a series of old second or third hand Thinkpads, I believe first a T40 and then a T61. These were relatively bulky and heavy, with short battery lifetimes (about two hours from full charge) and only a 1024x768 resolution screen, and their CPUs were both so old that they were 32-bit only machines. It's been clear for a while that 32-bit machines are on their last legs (Chrome no longer supports them, for example), so late this summer we decided to replace my laptop with a new 64-bit one. Partly based on my work from late 2016, I opted for a Dell XPS 13. I installed Fedora on it with my usual Cinnamon laptop environment.

I'll lead with the summary: I'm quite happy with my Dell XPS 13 laptop running Fedora. Everything works and using it is almost as pleasant an experience as I can imagine using a 13" ultrabook to ever be (given the limitations imposed by the physical form factor).

The physical laptop is a dramatic and drastic change; going from the heavy, bulky Thinkpad to the ultrabook Dell is a night and day shift. The Dell is much thinner and far lighter, enough so that it's quite easy to carry around relatively casually. The battery lifetime outlasts meetings without blinking and is easily good enough that I no longer worry about running out of power if I don't take the power brick with me. We got the basic FHD screen, which on the Dell's display now gives enough room to fit two terminal windows side by side with more than 80 columns in each. The modern CPU, standard NVMe SSD, and 8 GB of RAM make it fast enough for sysadmin things that I don't particularly notice any big issues, even when doing things like starting Firefox (it's not instant, but it's a lot faster than the Thinkpad, which was, well, leisurely). The XPS 13's trackpad and keyboard are both okay but not exceptional; they're certainly usable, and I'd probably be better with both if I used the laptop more often. I'm still getting used to using two and three fingers for middle and right mouse button clicks and scrolling and so on, instead of having physical mouse buttons and dedicated areas of the trackpad on the Thinkpad.

(When I use the XPS 13 in its 'home' location, I connect up an ordinary USB mouse and use it by preference. If I was significantly using the laptop at multiple places, I would probably try out one of the portable Bluetooth external mice.)

Fedora (first 26 and now 27) has just worked on the Dell XPS 13. There are a few glitches, but these aren't the fault of the XPS 13 specifically as far as I can see (sometimes they're my fault). My Fedora Cinnamon setup has changed the trackpad behavior a few times in irritating ways, but that's been fixed with a visit to the settings system to re-set my desired trackpad options. All of the XPS 13's hardware appears to work under Linux, including special keyboard function keys. Certainly things like wireless and some USB Ethernet adaptors have just worked (I've wound up switching between two for reasons).

The overall result is a laptop that I can imagine taking somewhere and doing productive work on for an extended period without feeling particularly confined or restricted (although it's never going to be as nice as my desktop, with multiple screens, a nice keyboard and mouse, and so on). The screen size and resolution are one important aspect of this; I'm much more productive when I can fit two real terminal windows in side by side (with readable font sizes), or almost fit two 80x20 terminals arranged vertically.

(As a result of actually being willing to do reasonably serious work with my laptop, I've been discovering that Cinnamon has a bunch of handy window management key bindings. I wish they were all documented in one spot somewhere.)

I don't have any grounds for an opinion on whether the Dell XPS 13 is the best Linux ultrabook you can currently get, since I've only ever used one ultrabook (and my XPS 13 is now one generation back, since we bought it back in August). I do think it's a decently good and decently usable one. If I was buying an ultrabook for myself, I would default to getting some XPS 13 version unless I ran across a compelling reason to do otherwise.

(For more demanding usage, well, I can't imagine running virtual machines on it, but I do build Go from source every so often, more or less just because, and I have some Go stuff on the laptop that I poke at every so often. It runs Emacs and similar things fine, too, but that's not really surprising.)

PS: The one thing I wish for in the XPS 13 is more USB ports, especially USB 3.1 gen 2 USB-C; mine has two USB-A ports and one USB-C one, which isn't really enough once you start wanting to attach a mouse and a USB Ethernet adaptor and so on. I imagine that more USB-C ports will come in time, hopefully not at the expense of the current USB-A ones.

DellXPS13FedoraReview written at 01:42:57
