Wandering Thoughts archives

2017-10-11

Understanding M.2 SSDs in practice and how this relates to NVMe

Modern desktop motherboards feature 'M.2' sockets (or slots) for storage in addition to some number of traditional SATA ports (and not always as many of those as I'd like). This has been confusing me lately as I plan out a future PC, so today I sat down and did some reading to understand the situation both in theory and in practice.

M.2 is more or less a connector (and mounting) standard. Like USB-C ports with their alternate mode, M.2 sockets can potentially support multiple protocols (well, bus interfaces) over the same connector. M.2 sockets are keyed to mark different varieties of what they support. Modern motherboards using M.2 sockets for storage are going to be M.2 with M keying, which potentially supports SATA and PCIe x4 (and SMBus).

M.2 SSDs use either SATA or NVMe as their interface, and generally are M keyed these days. My impression is that M.2 NVMe SSDs cost a bit more than similar M.2 SATA SSDs, but can perform better (perhaps much better). An M.2 SATA SSD requires an M.2 socket that supports SATA; an M.2 NVMe SSD requires a PCIe (x4) capable M.2 socket. Actual motherboards with M.2 sockets don't necessarily support both SATA and PCIe x4 on all of their M.2 sockets. In particular, it seems common to have one M.2 socket that supports both and then a second M.2 socket that only supports PCIe x4, not SATA.

(The corollary is that if you want to have two more or less identical M.2 SSDs in the same motherboard, you generally need them to be M.2 NVMe SSDs. You can probably find a motherboard that has two M.2 sockets that support SATA, but you're going to have a more limited selection.)

On current desktop motherboards, it seems to be very common to not be able to use all of the board's M.2 sockets, SATA ports, and PCIe card slots at once. One or more M.2 SATA ports often overlap with normal SATA ports, while one or more M.2 PCIe x4 can overlap with either normal SATA ports or with a PCIe card slot. The specific overlaps vary between motherboards and don't seem to be purely determined by the Intel or AMD chipset and CPU being used.

(Some of the limitations do seem to be due to the CPU and the chipset, because of a limited supply of PCIe lanes. I don't know enough about PCIe lane topology to fully understand this, although I know more than I did when I started writing this entry.)

The ASUS PRIME Z370-A is typical of what I'm seeing in current desktop motherboards. It has two M.2 sockets, the first with both SATA and PCIe x4 and the second with just PCIe x4. The first socket's SATA steals the regular 'SATA 1' port, but its PCIe x4 is unshared and doesn't conflict with anything. The second socket steals two SATA ports (5 and 6) in PCIe x4 mode but can also run in PCIe x2 mode, which leaves those SATA ports active. So if you only want one NVMe SSD, you get 6 SATA ports; if you want one M.2 SATA SSD, you're left with 5 SATA ports; and if you want two M.2 NVMe SSDs (at full speed), you're left with 4 regular SATA ports. The whole thing is rather confusing, and also tedious if you're trying to work out whether a particular current or future disk configuration is possible.

Since I use disks in mirrored pairs, I'm only interested in motherboards with two or more M.2 PCIe (x4) capable sockets. M.2 SATA support is mostly irrelevant to me; if I upgrade from my current SSD (and HD) setup to M.2 drives, it will be to NVMe SSDs.

(You can also get PCIe cards that are PCIe to some number of M.2 PCIe sockets, but obviously they're going to need to go into one of the PCIe slots with lots of PCIe lanes.)

tech/M2SSDsAndNVMe written at 02:17:13

2017-10-10

An interesting way to leak memory with Go slices

Today I was watching Prashant Varanasi's Go release party talk Analyzing production using Flamegraphs, and starting at around 28 minutes in the talk he covered an interesting and tricky memory leak involving slices. For my own edification, I'm going to write down a version of the memory leak here and describe why it happens.

To start with, the rule of memory leaks in garbage collected languages like Go is that you leak memory by retaining unexpected references to things. The garbage collector will find and free things for you, but only if they're actually unused. If you're retaining a reference to them, they stick around. Sometimes this is ultimately straightforward (perhaps you're deliberately holding on to a small structure but not realizing that it has a reference to a bigger one), but sometimes the retention is hiding in the runtime implementation of something. Which brings us around to slices.

Simplified, the code Prashant was dealing with was maintaining a collection of currently used items in a slice. When an item stopped being used, it was rotated to the end of the slice and then the slice was shrunk by truncating it (maintaining the invariant that the slice only had used items in it). However, shrinking a slice doesn't shrink its backing array; in Go terms, it reduces the length of a slice but not its capacity. With the underlying backing array untouched, that array retained a reference to the theoretically discarded item and all other objects that the item referenced. With a reference retained, even one invisible to the code, the Go garbage collector correctly saw the item as still used. This resulted in a memory leak, where items that the code thought had been discarded weren't actually freed up.
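Here is the pattern in miniature (my own reconstruction for illustration, not the actual code from the talk; the `Item` type and names are mine):

```go
package main

import "fmt"

// Item stands in for something that references a lot of other memory.
type Item struct {
	buf []byte
}

// truncate 'discards' the last item the way the leaking code did:
// it shortens the slice but leaves the backing array untouched.
func truncate(items []*Item) []*Item {
	return items[:len(items)-1]
}

func main() {
	items := make([]*Item, 0, 4)
	for i := 0; i < 4; i++ {
		items = append(items, &Item{buf: make([]byte, 1<<20)})
	}

	items = truncate(items)

	// The length shrank but the capacity (and the backing array)
	// didn't, so the array still holds a live pointer to the fourth
	// Item and the garbage collector cannot free it or its 1 MB buffer.
	fmt.Println(len(items), cap(items)) // 3 4
}
```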

Now that I've looked at the Go runtime and compiler code and thought about the issue a bit, I've come to the obvious realization that this is a generic issue with any slice truncation. Go never attempts to shrink the backing array of a slice, and it's probably impossible to do so in general since a backing array can be shared by multiple slices or otherwise referenced. This obviously strongly affects slices that refer to things that contain pointers, but it may also matter for slices of plain old data things, especially if they're comparatively big (perhaps you have a slice of Points, with three floats per point).

For slices containing pointers or structures with pointers, the obvious fix (which is the fix adopted in Uber's code) is to nil out the trailing pointer(s) before you truncate the slice. This retains the backing array at full size but discards references to other memory, and it's the other memory where things really leak.

For slices where the actual backing array may consume substantial memory, I can think of two things to do here, one specific and one generic. The specific thing is to detect the case of 'truncation to zero size' in your code and set the slice itself to nil, instead of just truncating it with a standard slice truncation. The generic thing is to explicitly force a slice copy instead of a mere truncation (as covered in my entry on slice mutability). The drawback here is that you're forcing a copy, which might be much more expensive. You could optimize this by only forcing a copy in situations where the slice capacity is well beyond the new slice's length.
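A sketch of that generic approach, with the capacity check as the optimization (the factor of 2 is an arbitrary threshold I picked for illustration):

```go
package main

import "fmt"

// shrink returns items backed by a right-sized array when the current
// backing array is much larger than needed; otherwise it returns the
// slice unchanged to avoid a pointless copy.
func shrink(items []int) []int {
	if cap(items) < 2*len(items) {
		return items
	}
	out := make([]int, len(items))
	copy(out, items)
	return out
}

func main() {
	big := make([]int, 100)
	small := shrink(big[:3])
	// The copy's backing array is exactly as big as needed, so the
	// original 100-element array can be garbage collected (assuming
	// nothing else references it).
	fmt.Println(len(small), cap(small)) // 3 3
}
```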

Sidebar: Three-index slice truncation considered dangerous (to garbage collection)

Go slice expressions allow a rarely used third index to set the capacity of the new slice in addition to the starting and ending points. You might thus be tempted to use this form to limit the slice as a way of avoiding this garbage collection issue:

slc = slc[:newlen:newlen]

Unfortunately this doesn't do what you want it to and is actively counterproductive. Setting the new slice's capacity doesn't change the underlying backing array in any way or cause Go to allocate a new one, but it does mean that you lose access to information about the array's size (which would otherwise be accessible through the slice's capacity). The one effect it does have is forcing a subsequent append() to reallocate a new backing array.
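A quick demonstration of what three-index truncation does and doesn't do:

```go
package main

import "fmt"

func main() {
	backing := make([]int, 8)
	slc := backing[:4]
	fmt.Println(len(slc), cap(slc)) // 4 8

	// Three-index truncation: same backing array, but the slice now
	// reports a capacity of 4, hiding the array's real size.
	slc = slc[:4:4]
	fmt.Println(len(slc), cap(slc)) // 4 4

	// With no (visible) spare capacity, append is forced to allocate
	// a brand new backing array.
	slc = append(slc, 99)
	fmt.Println(&slc[0] == &backing[0]) // false
}
```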

programming/GoSlicesMemoryLeak written at 00:26:09

2017-10-09

JavaScript as the extension language of the future

One of the things I've been noticing as I vaguely and casually keep up with technology news is that JavaScript seems to be showing up more and more as an extension language, especially in heavy-usage environments. The most recent example is Cloudflare Workers, but there are plenty of other places that support it, such as AWS Lambda. One of the reasons for picking JavaScript here is adequately summarized by Cloudflare:

After looking at many possibilities, we settled on the most ubiquitous language on the web today: JavaScript.

JavaScript is ubiquitous not just in the browser but beyond it. Node.js is popular on the server, Google's V8 JavaScript engine is apparently reasonably easy to embed into other programs, and then there's at least Electron as an environment to build client-side applications on (and if you build your application in JavaScript, you might as well allow people to write plugins in JavaScript). But ubiquity isn't JavaScript's only virtue here; another is that it's generally pretty fast and a lot of people are putting a lot of money into keeping it that way and speeding it up.

(LuaJIT might be pretty fast as well, but Lua(JIT) lacks the ecology of JavaScript, such as NPM, and apparently there are community concerns.)

This momentum in JavaScript's favour seems pretty strong to me as an outside observer, especially since its use in browsers ensures an ongoing supply of people who know how to write JavaScript (and who probably would prefer not to have to learn another language). JavaScript likely isn't the simplest option as an extension language (either to embed or to use), but if you want a powerful, fast language and can deal with embedding V8, you get a lot from using it. There are also alternatives to V8, although I don't know if any of them are particularly small or simple.

(The Gnome people have Gjs, for example, which is now being used as an implementation language for various important Gnome components. As part of that, you write Gnome Shell plugins using JavaScript.)

Will JavaScript start becoming common in smaller scale situations, where today you'd use Lua or Python or the like? Certainly the people who have to write things in the extension language would probably prefer it; for many of them, it's one fewer language to learn. The people maintaining the programs might not want to embed V8 or a similar full-powered engine, but there are probably lighter weight alternatives (there's at least one for Go, for example). These may not support full modern JavaScript, though, which may irritate the users of them (who now have to keep track of who supports what theoretically modern feature).

PS: Another use of JavaScript as an 'extension' language is various NoSQL databases that are sometimes queried by sending them JavaScript code to run instead of SQL statements. That databases are picking JavaScript for this suggests that more and more it's being seen as a kind of universal default language. If you don't have a strong reason to pick another language, go with JavaScript as the default. This is at least convenient for users, and so we may yet get a standard by something between default and acclamation.

programming/JavaScriptExtensionLanguage written at 01:46:27

2017-10-08

Thinking about whether I'll upgrade my next PC partway through its life

There are some people who routinely upgrade PC components through the life of their (desktop) machine, changing out CPUs, memory, graphics cards, and close to every component (sometimes even the motherboard). I've never been one of those people; for various reasons my PCs have been essentially static once I bought them. I'm in the process of planning a new home and work PC, and this time around I'm considering deliberately planning for some degree of a midway upgrade. Given that I seem to keep PCs for at least five years, that would be two or three years from now.

(Part of my reason for not considering substantial upgrades was I hadn't assembled PCs myself, so thoughts of things like replacing my CPU with a better one were somewhat scary.)

Planning for a significant midway upgrade in advance is always a bit daring and uncertain, since you never know what companies like Intel are going to do in the future with things like new CPU sockets and so on. Despite that I think you can at least consider some things, and thus perhaps build up your initial PC with some eye towards the future changes you're likely to want to make. Well, let me rephrase that; I can think about these things, and I probably should.

However, for me the big change would be a change in mindset. My PC would no longer be something that I considered immutable and set in stone, with me just having whatever I had. Merely deliberately deciding that I'll have this mindset probably makes it more likely that I'll actually carry through and do some upgrades, whatever they turn out to be.

In terms of actual upgrades, the obvious midway change is an increase in RAM and I can make that easier by picking a DIMM layout that only populates two out of four motherboard DIMM slots. These days it's easy to get 32 GB with two 16 GB DIMM modules, and that's probably enough for now (and RAM is still surprisingly expensive, unfortunately).

Planning a midway CPU upgrade is more chancy because who knows if a compatible CPU will still be available at a reasonable price in a few years. Probably I'd have to actively keep up with CPU and socket developments, so that when my PC's CPU socket stops being supported I can wait for the last compatible CPU to hit a suitably discounted price and then get one. If this happens too soon, well, I get to abandon that idea. It's also possible that CPUs won't progress much in the next five years, although I'm hoping that we get more cores at least.

Graphics card upgrades are so common I'm not sure that people think of them as 'upgrading your PC', but they're mostly irrelevant for me (as someone who runs Linux and sticks to open source drivers). However I do sometimes use a program that could use a good GPU if I had one, and someday there may be real open source drivers for a suitable GPU card from either AMD or nVidia (I'm aware that I'm mostly dreaming). This will be an easy drop-in upgrade, as I plan to use Intel motherboard graphics to start with.

(Hard drives are a sufficiently complicated issue that I don't think they fit into the margins of this entry. Anyway, I need to do some research on the current state of things in terms of NVMe and similar things.)

tech/MidwayPCUpgradeThoughts written at 02:26:06

2017-10-07

I'm trying out smooth scrolling in my browsers

When I started out using web browsers, there was no such thing as browser smooth scrolling; graphics performance was sufficiently poor that the idea would have been absurd. When you tapped the spacebar, the browser page jumped, and that was that. When graphics sped up and browsers started taking advantage of it, not only was I very used to my jump scrolling but I was running on Linux (and on old hardware), so the smooth scrolling I got did not exactly feel very nice to me. The upshot was that I immediately turned it off in Firefox, and ever since then I've carried that forward (and explicitly turned it off in Chrome as well, and so on).

I've recently reversed that, switching over to letting my browsers use smooth scrolling. Although my reversal here was sudden, it's the culmination of a number of things. The first is that I made some font setup changes that produced a cascade of appearance changes in my browsers, so I was fiddling with my browser setup anyway. The font change itself was part of reconsidering long term habits that maybe weren't actually the right choice, and on top of that I read Pavel Fatin's Scrolling with pleasure, especially this bit:

What is smooth scrolling good for — isn’t it just “bells and whistles”? Nothing of the kind! Human visual system is adapted to deal with real-world objects, which are moving continuously. Abrupt image transitions place a burden on the visual system, and increase user’s cognitive load (because it takes additional time and effort to consciously match the before and after pictures).

A number of studies have demonstrated measurable benefits from smooth scrolling, [...]

On the one hand, I didn't feel like I had additional cognitive load because of my jump scrolling; if anything, it felt like jump scroll was easier than smooth scroll. On the other hand, people (myself included) are just terrible at introspection; my feelings could be completely wrong (and probably were), especially since I was so acclimatized to jump scrolling and smooth scrolling was new and strange.

Finally, in practice I've been doing 'tap the spacebar' whole page scrolling less and less for some time. Increasingly I scroll only a little bit at a time anyway, using a scroll wheel or a gesture. That made the change to smooth scrolling less important and also suggested to me that maybe there was something to the idea of a more continuous, less jumpy scroll, since I seemed to prefer something like it.

At this point I've been using browser smooth scrolling for more than a month. I'm not sure if it's a huge change, and it certainly doesn't feel as big of a change as my new fonts. In some quick experiments, it's clear that web pages scroll slower with smooth scrolling turned on, but at the same time that's also clearly deliberate; jump scroll is basically instant, while smooth scrolling has to use some time to actually be smooth. Switching to jump scrolling for the test felt disorienting and made it hard to keep track of where things were on the page, so at the least I've become used to how to work with smooth scrolling and I've fallen out of practice with jump scrolling on web pages.

On the whole I don't regret the change so far and I can even believe that it's quietly good for me. I expect that I'll stick with it.

(I admit that one reason I was willing to make the switch was my expectation that sooner or later both Firefox and Chrome were just going to take away the option of jump scrolling. Even if I wind up in the same place in the end, I'd rather jump early than be pushed late.)

web/TryingSmoothScrolling written at 01:05:44

2017-10-06

Spam issues need to be considered from the start

A number of Google's issues from the spammer I talked about yesterday come down to issues of product design, where Google's design decisions opened them up to being used by a spammer. I consider these decisions mistakes, because they fundamentally enable spammers, but I suspect that Google would say that they are not, and that any spam problems they cause should get cleaned up by Google's overall anti-spam systems, the ones that watch for warning signs and take action. Well, we've already seen how that one works out, but there's a larger problem here; this is simply the wrong approach.

In a strong way, anti-spam considerations in product design are like (computer) security. We know that you don't get genuinely secure products by just building away as normal and then bringing in a security team to spray some magic security stuff over the product when it's almost done; this spray-coated security approach has been tried repeatedly and it fails pretty much every time. The way you get genuinely secure products is considering security from the very start of the product, when it is still being designed, and then continuing to pay attention to security (among other things) all through building the product, at every step along the way. See, for example, Microsoft's Security Development Lifecycle, which is typical of the modern approach to building secure software.

(That you need to take a holistic approach to security is not really surprising; you also need to take a holistic approach to things like performance. If no one cares about performance until the very end, you can easily wind up digging yourself into a deep performance hole that is very painful and time-consuming to get out of, if it's even feasible to do so.)

Similarly, you don't get products that can stand up to spammers by designing and building your products without thinking about spam, and then coming along at the end to spray-coat some scanning and monitoring magic on top and add an abuse@... address (or web form). If you want products that will not attract spammers like ants to honey, you need to be worrying about how your products could be abused right from the start of their design. By now the basics of this are not particularly difficult, because we have lots of painful experience with spammers (eg).

spam/AntiSpamFromTheStart written at 01:27:17

2017-10-05

Google is objectively running a spammer mailing list service

If you are a mailing list service provider, there are a number of things that you need to do, things that fall under not so much best practices as self defense. My little list is:

  • You shouldn't allow random people you don't know and haven't carefully authenticated to set up mailing lists that you'll send out for them.
  • If you do let such people set up mailing lists, you should require that all email addresses added to them be explicitly confirmed with the usual two step 'do you really want to subscribe to this' process.
  • If you actually allow random people you don't know to add random email addresses to their mailing lists, you absolutely must keep a very close eye on the volume of rejections to such mailing lists. A significant rate of rejections is an extremely dangerous warning sign.

Google, of course, does none of these, perhaps because doing any of these would require more people or reduce 'user engagement', also known as the number of theoretical eyeballs that ads can be shown to. The result is predictable:

2017-10-04 08:19 H=mail-io0-f199.google.com [209.85.223.199] [...] F=<emails1+[...]@offpay.party> rejected [...]
2017-10-04 08:26 H=mail-ua0-f200.google.com [209.85.217.200] [...] F=<emails5+[...]@offpay.party> rejected [...]
2017-10-04 08:31 H=mail-vk0-f71.google.com [209.85.213.71] [...] F=<emails7+[...]@offpay.party> rejected [...]
2017-10-04 08:31 H=mail-pf0-f198.google.com [209.85.192.198] [...] F=<emails7+[...]@offpay.party> rejected [...]
2017-10-04 08:32 H=mail-qk0-f198.google.com [209.85.220.198] [...] F=<emails8+[...]@offpay.party> rejected [...]
2017-10-04 08:39 H=mail-qk0-f199.google.com [209.85.220.199] [...] F=<emails9+[...]@offpay.party> rejected [...]
2017-10-04 08:40 H=mail-it0-f70.google.com [209.85.214.70] [...] F=<emails9+[...]@offpay.party> rejected [...]
2017-10-04 08:40 H=mail-io0-f200.google.com [209.85.223.200] [...] F=<emails11+[...]@offpay.party> rejected [...]
2017-10-04 08:40 H=mail-io0-f197.google.com [209.85.223.197] [...] F=<emails11+[...]@offpay.party> rejected [...]
2017-10-04 08:41 H=mail-ua0-f197.google.com [209.85.217.197] [...] F=<emails11+[...]@offpay.party> rejected [...]
2017-10-04 11:57 H=mail-vk0-f69.google.com [209.85.213.69] [...] F=<emails15+[...]@offpay.party> rejected [...]
2017-10-04 12:06 H=mail-pg0-f71.google.com [74.125.83.71] [...] F=<emails16+[...]@offpay.party> rejected [...]
2017-10-04 12:09 H=mail-qt0-f200.google.com [209.85.216.200] [...] F=<emails18+[...]@offpay.party> rejected [...]

That's just from today; we have more from yesterday, October 2nd, and October 1st. They're a mixture of RCPT TO rejections (generally due to 'no such user') and post-DATA rejections from our commercial anti-spam system laughing very loudly at the idea of accepting the email. Many other copies made it through, not because they weren't seen as spam but because they were sent to users who hadn't opted into our SMTP time spam rejection.

Google has deliberately chosen to mix all of its outgoing email into one big collection of mail servers that third parties like us can't easily tell apart. For Google, this has the useful effect of forcing recipients to choke down much of Google's spam because of GMail, instead of letting people block it selectively. In this case, we have some trapped mail headers that suggest that this is something to do with Google Groups, which is of course something that we've seen before, with bonus failures. That was about six years ago and apparently Google still doesn't care.

(Individual people at Google may care, and they may be very passionate about caring, but clearly the corporate entity that is Google doesn't care. If it did care, this would not happen. At a minimum, there would be absolutely no way to add email addresses to any form of mailing list without positive confirmation from said addresses. Instead, well, it's been six years and this stuff is still happening.)

PS: My unhappy reactions yesterday on Twitter may have produced some results, which is better than nothing, but it should go without saying that that's not exactly a good solution to the general issue. Spammers are like ants; getting rid of one is simply dealing with the symptoms, not the problems.

spam/GoogleSpammerMailingListProvider written at 00:31:56

2017-10-04

My new worry about Firefox 56 and the addons that I care about

In light of Firefox 57 and the state of old addons, where my old addons don't work in Firefox Nightly (or didn't half a month ago), my plan is to use Firefox 56 for as long as possible. By 'as long as possible', I mean well beyond when Firefox 56 is officially supported; I hope to keep using it until it actively breaks or is too alarmingly insecure for me. Using an actual released version of Firefox instead of a development version is an odd feeling, but now that it's out, I'm seeing a trend in my addons that is giving me a new thing to worry about. Namely, a number of them are switching over to being WebExtensions addons as preparation for Firefox 57.

This switch would be nothing to worry about if Firefox's WebExtensions API was complete (and Firefox 56 had the complete API), but one of the problems with this whole switch is exactly that Firefox's WE API is far from complete. There are already any number of 'we need feature X' bug reports from various extensions, and I'm sure that more are coming as people attempt to migrate more things to WebExtensions so that they'll survive the transition to Firefox 57. Unless things go terribly wrong, this means that future versions of Firefox are going to pick up more and more WebExtensions APIs that aren't in Firefox 56, and addon authors are going to start using those APIs.

In short, it seems quite likely that sticking with Firefox 56 is also going to mean sticking with older versions of addons. Possibly I'll have to manually curate and freeze my addons to pre-WebExtensions versions in order to get fully working addons (especially ones that don't leak memory, which is once again a problem with my Firefox setup, probably because recent updates to various addons have problems).

(Fortunately Mozilla generally or always makes older versions of addons installable on addons.mozilla.org if you poke the right things. But I'm not looking forward to bisecting addon versions to find ones that work right and don't leak memory too fast.)

The optimistic view of the current situation with Firefox addons is that the WebExtensions versions of popular addons are basically beta at this point because of relatively low use. With Firefox 56 released, people moving more aggressively to be ready for Firefox 57, and the (much) greater use of WebExtensions addons, the quality of WE addons will go up fairly rapidly even with the current Firefox WebExtensions APIs. This could give me stable and hopefully non-leaking addons before people move on and addons become incompatible with Firefox 56.

web/Firefox56AddonWorry written at 02:03:06

2017-10-03

Some thoughts on having both Python 2 and 3 programs

Earlier, I wrote about my qualms about using Python 3 in (work) projects in light of the extra burden it might put on my co-workers if they had to work on the code. One possible answer here is that it's possible both to use Python 3 features in Python 2 and to write code that naturally runs unmodified under both versions (as I did without explicitly trying to). This is true, but there's a catch and that catch matters in this situation.

The compatibility between Python 2 and Python 3 is not symmetric. If you write natural Python 3 code, it can often run under Python 2, sometimes with __future__ imports. However, if you write natural Python 2 code it will not run under Python 3, unless your code completely avoids, at a minimum, print statements and mixed tabs and spaces (both of which Python 3 rejects outright). A Python 3 programmer who knows very little about Python 2 and who simply writes natural code can produce a program that runs unaltered under Python 2 and can probably modify a Python 2 program without having it blow up in their face. But a Python 2 programmer who tries to work on a Python 3 program is quite possibly going to have things explode. They could get lucky, but all it takes is one print statement and Python 3 is complaining. This is true even if the original Python 3 code is careful to be Python 2 compatible (it uses appropriate __future__ imports and so on).

Since there are Python 3 features that are simply not available in Python 2 even with __future__ imports, a Python 3 programmer can still wind up blowing up a Python 2 program. But as someone who's now written both Python 2 and Python 3 code (including some that wound up being valid Python 2 code too), my feeling is that you have to go at least a bit out of your way in straightforward code to wind up doing this. By contrast, it's very easy for a Python 2 programmer to use Python 2 only things in code, partly because one of them (print statements) is a long standing standard Python 2 idiom. A Python 2 programmer is relatively unlikely to produce code that also runs on Python 3 unless they explicitly try to (which requires a number of things, including awareness that there is even a Python 3).

So if you have part-time Python 3 programmers and some Python 2 programs, you'll probably be fine (and you can increase the odds by putting __future__ imports into the Python 2 programs in advance, so they're fully ready for Python 3 idioms like print() as a function). If you have part-time Python 2 programmers and some Python 3 programs, you're probably going to have to keep an eye on things; people may get surprises every so often. Unfortunately there's nothing you can really do to make the Python 3 code able to deal with Python 2 idioms like print statements.
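As a minimal illustration of pre-loading a program this way (my own example):

```python
# With these imports, this file runs unmodified under both Python 2.7
# and Python 3: print becomes a function and '/' does true division
# in Python 2, matching Python 3's defaults.
from __future__ import print_function, division


def average(values):
    return sum(values) / len(values)


print(average([1, 2, 3, 4]))  # 2.5 under both versions
```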

(In the long run it seems clear that everyone is going to have to learn about Python 3, but that's another issue and problem. I suspect that many places are implicitly deferring it until they have no choice. I look forward to an increasing number of 'what to know about Python 3 for Python 2 programmers' articles as we approach 2020 and the theoretical end of Python 2 support.)

python/MixingPython2And3Programs written at 00:19:38

2017-10-02

My experience with using Fedora 26's standard font rendering (and fonts)

A bit over a month ago I wrote about my font rendering dilemma in Fedora 26, where my fontconfig user tweaks basically stopped working and I considered switching to the standard FreeType rendering rather than try to fix them. Leah Neukirchen solved one side of the dilemma for me on the spot in the comments, by telling me how to force FreeType to revert to my Fedora 25 rendering, but in the end I decided to stay with the standard system rendering as an experiment. At this point I consider the results of the experiment to be in, and I think the standard system rendering is the better, more readable one.
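(For the record, I believe the knob involved here is FreeType's `interpreter-version` driver property, which can be set through the environment in FreeType 2.7 and later; the value 35 selects the old interpreter. A sketch:)

```shell
# Make programs render with the older v35 TrueType interpreter (the
# Fedora 25 look). Put this in your session startup to apply it
# everywhere, or prefix a single command with the assignment.
export FREETYPE_PROPERTIES="truetype:interpreter-version=35"
```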

I rapidly got used to the new look of my xterms and so on, as I expected that I would. Some of our older systems are still using older FreeType versions and on these, even a default font rendering comes out basically the same as my old Fedora 25 one. On the infrequent occasions that I use these systems, their xterms now both look odd to me and also seem to be less easily readable than the regular xterms beside them with the darker, thicker font rendering from modern FreeType versions. This is only anecdotal, but looking at the old rendering periodically makes me happier to have switched to FreeType's modern rendering. I feel that I made the right choice.

The comments on my original article pointed me to this article on FreeType's new v40 interpreter; this interpreter change is the difference between Fedora 25's rendering and Fedora 26's. That article caused a cascade of yak shaving when I decided to switch to Fedora 26's standard rendering, because it got me to change my Firefox from using Georgia (at 16 points, I believe) to using the Fedora standard sans serif font at 15 points. This change in fonts and font sizes has wound up with me shuffling around the text zoom level on any number of sites, and not always in predictable directions. Some sites that I had increased the size on now don't need it any more; other sites now need it when they didn't need it before. The result is probably more readable, partly because I've been biased towards 'if in doubt, increase the text size'.

(A huge number of websites believe in tiny fonts for reasons that I don't understand. It's certainly not good typography, since the websites of typographers and many design people that I've seen tend to have fairly large type sizes, larger even than I'd pick.)

Although I haven't dug into it in depth, my impression is that this FreeType font rendering change has caused a number of other programs to change their text sizing and text rendering. I think Chrome now uses slightly different text sizes on web pages, for example; perhaps the FreeType v40 engine spaces things slightly differently. Or perhaps I'm just less willing to accept marginally small font sizes these days, so I'm being more picky.

(I may need to reset font preferences in other programs, such as Chrome, as I probably set any number of things to use Georgia a long time ago. For a while it was my default proportional spaced font, especially for web related things.)

linux/Fedora26StandardFontRendering written at 01:22:30

