Wandering Thoughts

2020-11-27

Setting up self-contained Go program source that uses packages

Suppose, not entirely hypothetically, that you're writing a Go program in an environment that normally doesn't use Go. You're completely familiar with Go, with a $GOPATH and a custom Go environment and so on, so you can easily build your program. But your coworkers aren't, and you would like to give them source code that is as close to completely self-contained as possible, where they can rebuild your program with, say, 'cd /some/where; some-command' and they don't need to follow a ten-step procedure. At the same time, you'd like to use Go packages to modularize your own code so that you don't have to have everything in package main.

(You might also want to use some external packages, like golang.org/x/crypto/ssh.)

When I started thinking about this in 2018, doing this was a bit complicated. On modern versions of Go, ones with support for modules, it's gotten much simpler, at least for single programs (as opposed to a collection of them). On anything from Go 1.11 onward (I believe), what you want to do is as follows:

  • If you haven't already done so, set up a go.mod for your program and add all of the dependencies. This more or less follows Using go modules, but assumes that you already have a working program that you haven't modularized.

    go mod init cslab/ssh-validation
    go mod tidy
    

    If you don't publish your program anywhere, it's fine to give it some internal name. Otherwise you should use the official published name.

  • Vendor everything that you use:

    go mod vendor
    

  • Do modular builds using the vendored version of the packages. Not using the vendored version should work (assuming that all external packages are still there), but it will download things and clutter up your $GOPATH/pkg directory (wherever that is).

    go build -mod vendor
    

    You may want to create a Makefile that does this so that people (including you in the future) can just run 'make' instead of having to remember the extra arguments to 'go build'.

    (Since I haven't kept track of Go module support very well, I had to look up that 'go build -mod vendor' has been supported since Go 1.11, which is also the first version of Go to support modules.)

On modern versions of Go, this will automatically work right even if you have the source inside $GOPATH/src. On older versions you may need to force GO111MODULE=on (and so you may want to put this in your Makefile). On very old versions of Go you'll have problems, because they have either no Go module support or very limited support.
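Putting this together, a minimal Makefile might look like the following sketch (the 'all' and 'clean' targets and the ssh-validation binary name are just illustrations; adjust them to your program):

all:
	GO111MODULE=on go build -mod vendor

clean:
	rm -f ssh-validation

(The recipe lines have to be indented with tabs, as usual for make.)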

Unfortunately one of those old versions of Go is what is on Ubuntu 18.04 LTS, which ships with go 1.10.4 and has never been updated. If you're in this situation, things are much more complicated. Increasingly my view is that old versions of Go without good module support are now not very usable and you're going to need to persuade people to use updated ones. The easiest way to do this is probably to set up a tree of a suitable Go version (you can use the official binaries if you want) and then change your program's Makefile to explicitly use that local copy of Go.

PS: Use of an explicit '-mod vendor' argument may not be necessary under some circumstances; see the footnote here. I've seen somewhat inconsistent results with this, though.

programming/GoSelfContainedSource written at 20:04:45

2020-11-26

The better way to make an Ubuntu 20.04 ISO that will boot on UEFI systems

Yesterday I wrote about how I made a 20.04 ISO that booted on UEFI systems. It was a messy process with some peculiar things that I didn't understand and places where I had to deviate from Debian's excellent documentation on Repacking a Debian ISO. In response to my entry, Thomas Schmitt (the author of xorriso) got in touch with me and very generously helped me figure out what was really going on. The short version is that I was confused and my problems were due to some underlying issues. So now I have had some learning experiences and I have a better way to do this.

First, I've learned that you don't want to extract ISO images with 7z, however tempting and easy it seems. 7z has at least two issues with ISO images: it will quietly add the El Torito boot images to the extracted tree, in a new subdirectory called '[BOOT]', and it doesn't extract symlinks (and probably not other Rock Ridge attributes). The Ubuntu 20.04.1 amd64 live server image has some symlinks, although their presence isn't essential.

The two reliable ways I know of to extract the 20.04.1 ISO image are with bsdtar (part of the libarchive-tools package in Ubuntu) and with xorriso itself. Bsdtar is easier to use but you probably don't have it installed, while you need xorriso anyway and might as well use it for this once you know how. So to unpack the ISO into our scratch tree, you want:

xorriso -osirrox on -indev example.iso -extract / SCRATCH-TREE

(See the Debian wiki for something you'll want to do afterward in order to be able to delete the tree. Substitute the correct ISO name in place of example.iso.)
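For completeness, the bsdtar version of this is roughly the following sketch (with bsdtar from libarchive-tools; SCRATCH-TREE has to exist already):

bsdtar -xf example.iso -C SCRATCH-TREE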

As I discovered through my conversation with Thomas Schmitt, it can be important to re-extract the tree any time you think something funny is going on. My second issue was that my tree's boot/grub/efi.img had been quietly altered by something in a way that removed its FAT signature and made UEFI systems refuse to recognize it (I suspect some of my experimentation with mkisofs did it, but I don't know for sure).
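One quick way to check for this sort of quiet corruption is to ask file(1) about the image; a pristine efi.img should be identified as containing a FAT filesystem (a sketch, and the exact wording of file's output varies by version):

file SCRATCH-TREE/boot/grub/efi.img

If file only reports something generic like 'data', the FAT signature has probably been lost and you want to re-extract.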

In a re-extracted tree with a pristine boot/grub/efi.img, the tree's efi.img was valid as an El Torito EFI boot image (and the isolinux.bin is exactly what was used for the original 20.04.1 ISO's El Torito BIOS boot image). So the command to rebuild an ISO that is bootable both as UEFI and BIOS, both as a DVD image and on a USB stick, is:

xorriso -as mkisofs -r \
  -V 'Our Ubuntu 20.04 UEFI enabled' \
  -o cslab_ubuntu_20.04.iso \
  -isohybrid-mbr isohdpfx.bin \
  -J -joliet-long \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -boot-load-size 4 -boot-info-table -no-emul-boot \
  -eltorito-alt-boot -e boot/grub/efi.img -no-emul-boot \
  -isohybrid-gpt-basdat \
  SCRATCH-TREE

(The isohdpfx.bin file is generated following the instructions in the Debian wiki page. This entire command line is pretty much what the Debian wiki says to do.)

If xorriso doesn't complain that some symlinks can't be represented in a Joliet file name tree, you haven't extracted the 20.04.1 ISO image exactly; something has dropped the symlinks that should be there.

If you're modifying the ISO image to provide auto-installer data, you need to change both isolinux/txt.cfg and boot/grub/grub.cfg. The necessary modifications are covered in setting up a 20.04 ISO image to auto-install a server (for isolinux) and then yesterday's entry (for GRUB). You may also want to add various additional files and pieces of data to the ISO, which can be done by dropping them into the unpacked tree.
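For reference, the isolinux side of this is roughly the following sort of change to the install entry in isolinux/txt.cfg (a sketch only; the stock file's exact contents may differ, and the /cdrom/cslab/inst/ path is our own):

label live
  menu label ^Install Our Ubuntu 20.04 Server
  kernel /casper/vmlinuz
  append initrd=/casper/initrd quiet ds=nocloud;s=/cdrom/cslab/inst/ ---

Unlike GRUB, isolinux passes the append line to the kernel as-is, so the ';' doesn't need any special quoting here.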

(It's also apparently possible to update the version of the installer that's in the ISO image, per here, but the make-edge-iso.sh and inject-subiquity-snap.sh scripts it points to in the subiquity repo are what I would call not trivial, and so are not something I want to let monkey around in our ISO trees. I've already done enough damage without realizing it in my first attempts. I'll just wait for 20.04.2.)

On the whole this has been a learning experience about the hazards of not questioning my assumptions and not re-checking my work. I have the entire process of preparing the extracted ISO's scratch tree more or less automated, so at any time I could have deleted the existing scratch tree, re-extracted the ISO (even with 7z), and managed to build a working UEFI booting ISO with boot/grub/efi.img. But I just assumed that the tree was fine and hadn't been changed by anything, and I never questioned various oddities until later (including the '[BOOT]' subdirectory, which wasn't named like anything else on the ISO image).

linux/Ubuntu2004ISOWithUEFI-2 written at 23:39:15

Making an Ubuntu 20.04 ISO that will boot on UEFI systems

As part of our overall install process, for years we've used customized Ubuntu server install images (ie, ISOs, often burned on to actual DVDs) that were set up with preseed files for the Debian installer and a few other things we wanted on our servers from the start. These ISOs have been built in the traditional way with mkisofs and so booted with isolinux. This was fine for a long time because pretty much all of our servers used traditional MBR BIOS booting, which is what ISOs use isolinux for. However, for reasons outside the scope of this entry, today we wanted to make our 20.04 ISO image also boot on systems using UEFI boot. This turned out to be more complicated than I expected.

(For basic background on this, see my earlier entry on setting up a 20.04 ISO image to auto-install a server.)

First, as my co-workers had already discovered long ago, Linux ISOs do UEFI booting using GRUB2, not isolinux, which means that you need to customize the grub.cfg file in order to add the special command line parameters to tell the installer about your 20.04 installer data. We provide the installer data in the ISO image, which means that our kernel command line arguments contain a ';'. In GRUB2, I discovered that this must be quoted:

menuentry "..." {
  [...]
  linux /casper/vmlinuz quiet "ds=nocloud;s=/cdrom/cslab/inst/" ---
  [...]
}

(I advise you to modify the title of the menu entries in the ISO's grub.cfg so that you know it's using your modified version. It's a useful reassurance.)

If you don't do this quoting, all that the kernel (and the installer) sees is a 'ds=nocloud' argument, because GRUB treats the unquoted ';' as the end of the linux command. Your installer data will be ignored (despite being on the ISO image) and you may get confused about what's wrong.

The way ISOs are made bootable is that they have at least one El Torito boot section (see also the OsDev Wiki). A conventional BIOS bootable ISO has one section; one that can also be booted through UEFI has a second one that is more intricate. You can examine various information about El Torito boot sections with dumpet, which is in the standard Ubuntu repositories.
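For example, you can list an image's El Torito boot sections with something like this (a sketch; see dumpet's manpage for more options, including dumping the boot images out to files):

dumpet -i ubuntu-20.04.1-live-server-amd64.iso

A BIOS-only ISO will show a single boot entry, while a UEFI-capable one will also show a second EFI entry.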

In theory I believe mkisofs can be used to add a suitable extra ET boot section. In practice, everyone has switched to building ISO images with xorriso, for good reason. The easiest to follow guide on using xorriso for this is the Debian Wiki page on Repacking a Debian ISO, which not only has plenty of examples but goes the extra distance to explain what the many xorriso arguments mean and do (and why they matter). This is extremely useful since xorriso has a large and complicated manpage and other documentation.

Important update: The details of much of the rest of this entry turn out not to be right, because I had a corrupted ISO tree with altered files. For a better procedure and more details, see The better way to make an Ubuntu 20.04 ISO that will boot on UEFI systems. The broad overview of UEFI requiring a GRUB2 EFI image is accurate, though.

However, Ubuntu has a surprise for us (of course). UEFI bootable Linux ISOs need a GRUB2 EFI image that is embedded into the ISO. Many examples, including the Debian wiki page, get this image from a file in the ISO image called boot/grub/efi.img. The Ubuntu 20.04.1 ISO image has such a file, but it is not actually the correct file to use. If you build an ISO using this efi.img as the El Torito EFI boot image, it will fail on at least some UEFI systems. The file you actually want to use turns out to be '[BOOT]/2-Boot-NoEmul.img' in the ISO image.

(Although the 20.04.1 ISO image's isolinux/isolinux.bin works fine as the El Torito BIOS boot image, it also appears to not be what the original 20.04.1 ISO was built with. The authentic thing seems to be '[BOOT]/1-Boot-NoEmul.img'. I'm just thankful that Ubuntu put both in the ISO image, even if it sort of hid them.)

Update: These '[BOOT]' files aren't in the normal ISO image itself, but are added by 7z (likely from the El Torito boot sections) when it extracts the ISO image into a directory tree for me. The isolinux.bin difference is from a boot info table that contains the block offsets of isolinux.bin in the ISO. The efi.img differences are currently more mysterious.

The resulting xorriso command line I'm using right now is more or less:

xorriso -as mkisofs -r \
  -V 'Our Ubuntu 20.04 UEFI enabled' \
  -o cslab_ubuntu_20.04.iso \
  -isohybrid-mbr isohdpfx.bin \
  -J -joliet-long \
  -b isolinux/isolinux.bin -c isolinux/boot.cat \
  -boot-load-size 4 -boot-info-table -no-emul-boot \
  -eltorito-alt-boot -e '[BOOT]/2-Boot-NoEmul.img' -no-emul-boot \
  -isohybrid-gpt-basdat \
  SCRATCH-DIRECTORY

(assuming that SCRATCH-DIRECTORY is your unpacked and modified version of the 20.04.1 ISO image, and isohdpfx.bin is generated following the instructions in the Debian wiki page.)

The ISO created through this definitely boots in VMWare in both UEFI and BIOS mode (and installs afterward). I haven't tried it in UEFI mode on real hardware yet and probably won't for a while.

PS: If you use the Debian wiki's suggested xorriso command line to analyze the 20.04.1 ISO image, it will claim that the El Torito EFI boot image is 'boot/grub/efi.img'. This is definitely not the case, which you can verify by using dumpet to extract both of the actual boot images from the ISO and then comparing them with cmp to see what they match up with.

linux/Ubuntu2004ISOWithUEFI written at 00:56:13

2020-11-25

Firefox's WebRender has mixed results for me on Linux

I wrote last week about how WebRender introduced bad jank in my Linux Firefox under some circumstances. However, it turns out that WebRender for me has mixed results even outside of that issue, as I reported on Twitter:

[...] In the bad news, the WebRender Firefox is clearly less responsive on CSS hovers on golangnews.com than the regular one.

(The specific issue I see is that if I wave the mouse up and down the page, the hover highlight can visibly lag behind the mouse position a bit. With WebRender off, this doesn't happen. The laggy performance shows up clearly in the Performance recordings in Web Developer tools, where I can see clear periods of very low FPS numbers and the overall average FPS is unimpressive.)

This is on my home machine, which has integrated Intel graphics (on a decent CPU) and a HiDPI screen. Today I was in the office and so using my office machine, which uses a Radeon RX 550 graphics card (because it's an AMD machine and good AMD CPUs don't have onboard GPUs) and dual non-HiDPI screens. In very light testing there, my Firefox was using WebRender and didn't seem as clearly laggy on CSS hovers on golangnews.com as it is on my home machine.

(This isn't quite a fair test because my office machine isn't running quite as recent a build of Nightly as my home machine is.)

At one level, this is unsurprising. On Linux, WebRender has long had block and allow lists that depended both on what sort of graphics you had and what screen resolution you were running at (this was in fact one of the confusing bits of WebRender on Linux, since Firefox didn't make it clear what about your setup was allowing or stopping WebRender). Presumably Mozilla has good reason for these lists, in that how well WebRender performed likely varies from environment to environment, or more exactly from some combination of GPU and resolution to other combinations.

At another level, this is disappointing. Firefox's WebRender is supposed to be a great performance improvement, delivering smooth 60 FPS animation (presumably including CSS effects), but in practice some combination of Firefox WebRender, the Linux X11 graphics stack, and my specific hardware results in clearly worse results than the old way. All of that effort on everyone's part has delivered an outcome that makes me turn off WebRender and plan to ignore it until I have no other choice. This is especially personally disappointing because WebRender is a necessary enabler for things like hardware accelerated video playback.

(I have to confess that I've held my nose and turned to Chrome for the single job of displaying a couple of sites where I really care about smooth video performance. I use Chrome Incognito windows for this, which at least limits some of the damage. I still hold my views on walking away from Chrome, but I'm a pragmatist.)

web/FirefoxWebRenderMixed written at 00:18:15

2020-11-23

What containers do and don't help you with

In a comment on my entry on when to use upstream versions of software, Albert suggested that containers can be used to solve the problems of using upstream versions and when you have to do this anyway:

A lot of those issues become non-issues if you run the apps in containers (for example Grafana).

Unfortunately this is not the case, because of what containers do and don't help you with.

What containers do is that they isolate the host and the container from each other and make the connection between them simple, legible, and generic. The practical Unix API is very big and allows software to become quite entangled in the operating system and therefore dependent on specific things in unclear ways. Containers turn this into a narrow interface between the software and the host OS and make it explicit (a container has to say clearly at least part of what it wants from the host, such as what ports it wants connected). Containers have also created a social agreement that if you violate the container API, what happens next is your own fault. For example, there is usually nothing stopping you from trying to store persistent data within your theoretically ephemeral container, but if you do it and your container is restarted and you lose all the data, you get blamed, not the host operators.
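As an illustration of how narrow and explicit that interface is, here's a hypothetical Docker invocation for something like Grafana (the host-side data directory is made up; the port and container path are the image's usual defaults):

docker run -d --name grafana \
  -p 3000:3000 \
  -v /srv/grafana-data:/var/lib/grafana \
  grafana/grafana

Everything the container gets from the host (a port, a place for persistent data) has to be spelled out. If you skip the -v and keep your dashboards inside the container anyway, a restart from a fresh image throws them away, which is the social agreement at work.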

However, containers do not isolate software from itself and from its own flaws and issues. When you put software in a container, you still have to worry about choosing and building the right version of the software, keeping it secure and bug free, and whether or not to update it (and when). Putting Exim 4.93 in a container doesn't make it any better to use than if you didn't have it in a container. Putting Grafana or Prometheus Pushgateway in a container doesn't make it any easier to manage their upgrades, at least by itself. It can be that the difficulties of doing some things in a container setup drive you to solve problems in a different way, but putting software in a container doesn't generally give it any new features, so you could always have solved your problems in those different ways. Containers just gave you a push to change your other practices (or forced you to).

Containers do make it easier to deal with software in one respect, which is that they make it easier to select and change where you get software from. If someone, somewhere, is doing a good job of curating the software, you can probably take advantage of their work. Of course this is just adding a level of indirection; instead of figuring out what version of the software you want to use (and then keeping track of it), you have to figure out which curator you want to follow and keep up with whether they're doing a good job. The more curators and sources you use, the more work this will be.

(Containers also make it easier and less obvious to neglect or outright abandon software while still leaving it running. Partly this is because containers are deliberately opaque to limit the API and to create isolation. This does not magically cure the problems of doing so, it just sweeps them under the rug until things really explode.)

tech/ContainersWhatHelpAndNot written at 22:44:05

My views on when you should use the official upstream versions of software

Yesterday I wrote about how sometimes it's best to use the upstream versions, with the story of Prometheus here as the example for why you can be pushed into this despite what I've said about the problems inherent in this. But I didn't write anything about when you should do this versus when you should stick with whatever someone else is providing for you (usually your operating system distribution). There's no completely definite answer, partly because everyone's situation is a bit different, but I have accumulated some views here.

In general, what we really care about is not where the software comes from but how well curated what you're getting is, because curating software is work and requires expertise. Usually the best source of curation is packages provided by your OS, which typically add an extra layer of quality assurance over the upstream releases (or over people who put together low-curation OS specific packages from upstream releases). OS packages also come with automatic updating, or at a minimum central notification of updates being available so that you don't have to hunt down odd ways of keeping informed about updates.

The obvious reason to use the upstream version (building it yourself if necessary) is when there's no other option because, for example, you use Ubuntu and there are no official Ubuntu packages of the software. Whether you want to do this depends on how much you need the package, how easy it is to build and operate, and how likely it is to have problems. We do this for some of the Prometheus exporters we use, but they have the advantage of being simple to build (Go programs usually make this easy), simple to operate, and extremely unlikely to have problems. They also aren't critical components, so if we had to drop one because of problems it wouldn't be a big deal. We also do this for Grafana, because we absolutely have to have Grafana and there is no Ubuntu package for it, so our best option left is the upstream binary releases.

If your OS provides packages but the packages are outdated, it's not necessarily a reason to switch (especially if you have to build it yourself). Often outdated versions of packages still work fine; our Ubuntu systems run a lot of outdated versions of things like Exim, Dovecot, and Apache, because the Ubuntu versions are always behind the official releases. What drove us to switch with Prometheus was that the Ubuntu versions being outdated actively mattered. They weren't just outdated, they were buggy and limited.

(Sometimes sticking with OS packages will lead you to skip entire OS releases, because one release has an okay outdated version but a newer one has a broken outdated version. But this can be perfectly okay, as it is for us in the case of Exim and Ubuntu 20.04. But if Ubuntu 22.04 also turns out to have a version of Exim that we don't want to use, we'll have to change course and use an upstream version.)

A related reason is if the upstream strongly recommends against using the OS packages. This is the case with rspamd, where the official site specifically urges you not to use the Debian and Ubuntu packages. Like Prometheus, rspamd provides its own pre-built binaries that are officially supported, so we use those rather than take whatever risks are there with the Ubuntu version. Spam filtering is also one of those fields where the software needs to keep up with the ever changing Internet (spam) landscape in order to be as effective as possible.

(Of course now that I've looked I've discovered that there isn't even an rspamd package for Ubuntu 18.04. But we made the decision based on that being what the upstream strongly recommended, and we're going to stick with it even for Ubuntu releases where Ubuntu does provide an official rspamd package.)

Once you start using an upstream version you have to decide how often to update it. My views here depend on how frequently the upstream does releases, how rapidly they evolve the program, and generally how much trouble you're going to have with catching up later with a whole bunch of changes at once (and how much the upstream believes in backward compatibility). A project with frequent and regular releases, a significant churn in features and options, and a low commitment to long term backward compatibility is one where you really want to keep up. Otherwise you can consider freezing your version, especially if you have to build and update things manually.

sysadmin/UseUpstreamWhenViews written at 00:34:45

2020-11-22

Sometimes it's best to use the official upstream versions of software

In yesterday's entry I mentioned that I keep track of the official Prometheus releases. In a comment, Sean Conner asked how this goes along with my views on the problems inherent in building your own copies of software packages, most of which are about using your own versions, not just the specifics of compiling them. There is a story there, but the short version is that sometimes it's better to use the official upstream versions of software packages (and often to keep up to date on them). Our experience with Prometheus in our setup is a good example of this.

I described the overall situation in detail in an entry on my views on regularly upgrading Prometheus and Grafana. The short version is that we initially switched from the Ubuntu packaged versions of Prometheus components to the official project builds because the Ubuntu versions were outdated (even six months or so after 18.04's release), then we kept updating because it's basically what both projects require you to do. An especially good example of this comes from Pushgateway, where the format for its optional storage changed in a very narrow transition window; v0.10.0 was the only release that read the old format and wrote the new format, and the immediately following v1.0.0 removed support for the old format. Failing to keep up with Pushgateway releases could have given us an unpleasant surprise.

(Essentially you were intended to start v0.10.0 more or less once, to migrate the storage format, then switch to v1.0.0 or later.)

There are a number of things that make this less alarming than it looks, somewhat mitigating the problems I pointed out. First, both Prometheus and Grafana actually do provide official binary builds, and in fact it's the default way to use each project. That it's the default way means that each project has a relatively strong motivation to make good releases (and fix problems promptly), especially when combined with the development pace. When a new Grafana or Prometheus release comes out, each project knows that a lot of people will be updating to their provided binaries and they cannot sort of wash their hands of any problems that show up.

One of the reasons that Prometheus and Grafana provide their binaries and the binaries work well is that both are Go based projects and have essentially no build options. This makes it easy to produce universal binaries that will run on basically any Linux distribution (for example). More traditional projects have build options and use distribution dependent shared libraries and so on, so they couldn't produce such universal binaries without a lot more work. So they don't, and then if you're using the official releases you have to at least navigate through the build options to the set you want (and deal with the project's choice of build system).

This still leaves you to pick good releases as opposed to ones with problems, but this is mitigated by the fast pace of releases and made somewhat moot by the need to keep up with releases. With frequent new releases (and very fast bug fix ones), any serious new issues are likely to be fixed fast (and if they aren't, the odds are that the project considers them a feature and won't ever change them). This is unlike traditional open source programs, where a non-bugfix release every six months is often considered fast and even bugfix releases can take some time to be made.

sysadmin/UseUpstreamSometimes written at 00:38:38

2020-11-21

Github based projects have RSS syndication feeds for their releases

Today I discovered that Prometheus had made two bug-fixing point releases without sending email to their regular announcement list, which meant that we were still running 2.22.0 instead of the current 2.22.2. The bug fixes in 2.22.1 and 2.22.2 fortunately don't look too important to us, but it's still a bit disconcerting to discover we're out of date.

As it happens, if I want to I can arrange to never be surprised this way again. The Prometheus public repository is hosted on Github, and Github provides an automatic 'RSS' syndication feed for the release page of every project (it's actually in Atom syndication format, but most people don't care about that). This means that the moment something is tagged and given release notes, it will show up in the feed and then in my feed reader. So if I wanted to never be surprised by Prometheus, I could subscribe to this.
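Concretely, these feeds live at a predictable URL, so for Prometheus it's:

https://github.com/prometheus/prometheus/releases.atom

(The general pattern is https://github.com/<owner>/<repo>/releases.atom.)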

(If you have a Github account, you can also get this information in email.)

This is also a convenient way to track projects that don't otherwise have any good source of release information, like Grafana. Grafana doesn't announce all releases through email or any other readily tracked source, but they do have a Github repository and so for a fair while now I've subscribed to its release page in my syndication feed reader. It's been very handy and has definitely reduced the annoyance level of the whole situation.

(Big periodic Grafana releases are announced on their blog, but they don't announce point releases like v7.3.3 there. There is a Discourse topic that does get the release notes for new point releases (eg), but you have to keep checking the specific topic for the latest release to see point releases.)

However, my Grafana release syndication feed has also shown me the potential hazards of completely trusting it for release information. Grafana appears to sometimes expose the tag before the release is fully finished, before all the artifacts are available for download, the release notes are published, and so on. So I've learned to hold off a bit and to check Grafana's other information sources. Just because a tag and some build artifacts have appeared on Github doesn't mean that I want to immediately grab them (for anything).

PS: I'm not sure where and how I found this out, but it was back in February of this year and I think someone told me.

PPS: Github also has feeds for other things, and I suspect that other 'forges' like Gitlab also have similar feeds but I haven't checked.

sysadmin/GithubReleasesFeeds written at 01:24:16

2020-11-19

Firefox on Linux has not worked well with WebRender for me so far

A while back I wrote about my confusion over Firefox's hardware accelerated video on Linux; as part of that confusion, I attempted to turn on all of the preferences and options necessary to make hardware accelerated video work. Part of the requirements is (or was) forcing on WebRender (also), which is in large part about having the GPU do a lot more web page rendering than it does in Firefox today. Even after I seemed to not get hardware accelerated video, I left WebRender turned on in the Firefox instance I was using for this. Well, at least for a while. After a while I noticed that that Firefox instance seemed more prone to jank than it had been before when I did things like flip back to a Firefox window I hadn't been using for a while. Reverting WebRender back to the default setting of being off fixed the problem.

(I probably turned this off around the time of Firefox 81.)

Very recently, my personal build of Firefox Nightly started experiencing weird jank. Most of my Firefox windows are on my first (primary) fvwm virtual page and performed normally, but the moment I flipped to another virtual page (often to substitute for not having dual monitors right now) the Firefox window (or windows) that was on that new page would get extremely slow to respond. Today I determined that this was due to WebRender recently getting turned on by default in my Nightly environment; forcing WebRender off via gfx.webrender.force-disabled eliminated the problem. I cross-checked about:support output between my old normal Nightly build and the new Nightly build while it had the jank problem and verified that the only difference was WebRender (and hardware rendering) being turned on.

(This change is so recent it's not on the WebRender status page, which still says that WebRender is not enabled on large screens like mine on Intel GPUs. The change is bugzilla #1675768.)

Unfortunately this is not a simple problem. It's not an issue of excessive CPU or GPU usage, as far as I can tell, and it's not caused simply by having a Firefox window in an additional fvwm virtual page, because it doesn't happen in a test Firefox profile that's running the same binary. That it happens only if I move virtual pages makes it rather odd, because fvwm actually implements changing between virtual pages by moving all of the windows that aren't supposed to be there to off screen (and moving all of the other ones back on). So a Firefox window on the screen sees no difference in X11 protocol level window coordinates regardless of what virtual page fvwm is displaying (although offscreen Firefox windows will have different coordinates).

(I've also tried running Firefox with the environment variable 'INTEL_DEBUG=perf', from here, and there's no smoking gun. However, the change's bug mentions 'vsync' every so often and as far as I can see there's no way I can check for excessive waits for vsync, which could be one source of stalls.)

PS: Because I use fvwm, bug #1479135 - Black border around popups with non-compositing window manager usually makes it pretty obvious if WebRender is on in one of my Firefox instances (I use uBlock Origin's on the fly element picker a lot, which requires calling up its addon menu, which shows these black borders).

web/FirefoxWebRenderFailure written at 22:59:02

Apple Silicon Macs versus ARM PCs

In a comment on my entry on how I don't expect to have an ARM-based PC any time soon, Jonathan said:

My big takeaway from the latest release of Apple laptops is that these new laptops aren't necessarily ARM laptops. [...]

When a person gets an Apple Silicon Mac, they are not getting an ARM computer. They are getting an Apple computer.

As it happens, I mostly agree with this view of the new Apple machines (and it got some good responses on tildes.net). These Apple Silicon Macs are ARM PCs in that they are general purpose computers (as much as any other macOS machine) and that they use the ARM instruction set. But they are not 'ARM PCs' in two other ways. First, they're not machines that will run any OS you want or even very many OSes. The odds are pretty good that they're not going to be running anything other than macOS any time soon (see Matthew Garrett).

Part of that is because these machines use a custom set of hardware around their ARM CPU and Apple has no particular reason to document that hardware so that anyone else can talk to it. In the x86 PC world, hardware and BIOS documentation exists (to the extent that it does) and standards exist (to the extent that they do) because there are a bunch of independent parties all involved in putting machines together, so they need to talk to each other and work with each other. There is nothing like that in Apple Silicon Macs; Apple is fully integrated from close to the ground up. The only reason Apple has for using standards is if they make Apple's life easier.

(Thus, I suspect that there is PCIe somewhere in those Macs.)

Second, they don't use standard hardware components and interfaces. This isn't just an issue of being able to change pieces out when they break or when they don't fit your needs (or when you want to improve the machine without replacing it entirely). It also means that work to support Apple Silicon Macs doesn't help any other hypothetical ARM PC, and vice versa. To really have 'ARM PCs' in the way that there are 'x86 PCs', you need standards, and to get those standards you probably need component based systems. If everyone is making bespoke SoC machines, you have to pray that they find higher level standards compelling and that those standards are useful enough.

(Even laptop x86 PCs are strongly component based, although often those components are soldered in place. This is one reason why Linux and other free OSes often mostly or entirely just works on new laptop models.)

PS: My feeling is that there is no single 'desktop market' where we can say that it does or doesn't want machines with components that it can swap or mix and match. There is certainly a market segment that demands that, and a larger one that wants at least the lesser version of adding RAM, replacing the GPU, and swapping and adding disks. But there is also a corporate desktop market where they buy little boxes in a specific configuration and never open them, and I suspect it has a bigger dollar volume.

tech/ApplePCVsARMPC written at 01:21:38
