Limiting the size of things in a filesystem is harder than it looks
Suppose, not entirely hypothetically, that you want to artificially limit the size of a filesystem or perhaps something within it, for example the space used by a particular user. These sorts of limits are usually called quotas. Although you might innocently think that enforcing quotas is fairly straightforward, it turns out that it can be surprisingly complicated and hard even in very simple filesystems. As filesystems become more complicated, it can rapidly become much more tangled than it looks.
Let's start with a simple filesystem with no holes in files, where we only want to limit the amount of data that a user has (not filesystem metadata). If the user tries to write 128 KB to some file, we already need to know where in the file it's going. If the 128 KB entirely overwrites existing data, the user uses no extra space; if it's being added to the end of the file, they use 128 KB more space; and if it partly overlaps with the end of the file, they use something less than 128 KB. Fortunately the current size of a file that's being written to is generally very accessible to the kernel, so we can probably know right away whether the user's write can be accepted or has to be rejected because of quota issues. Well, we can easily know until we throw multiple CPUs into the situation, with different programs on different CPUs all executing writes at once. Once we have several CPUs, we have to worry about synchronizing our information on how much space the user is currently using.
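In this simple no-holes case, the accounting itself is just arithmetic on the write's position versus the current end of file. A minimal sketch (ignoring the SMP synchronization problem entirely):

```python
def extra_space_needed(file_size: int, offset: int, length: int) -> int:
    """Bytes of new space a write consumes in a simple filesystem with
    no holes: only the portion past the current end of file is new."""
    write_end = offset + length
    return max(0, write_end - file_size)

def can_accept(usage: int, quota: int,
               file_size: int, offset: int, length: int) -> bool:
    """Accept the write only if the user's new usage stays under quota."""
    return usage + extra_space_needed(file_size, offset, length) <= quota
```

A pure overwrite inside the file costs nothing, a pure append costs the full write, and a write straddling the end of file costs only the part past it; everything after this point in the entry is about why real filesystems can't keep it this simple.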
Now, suppose that we want to account for filesystem metadata as well, and that files can have unallocated space in the middle of them. Now the kernel doesn't know how much space 128 KB of file data is going to use until it's looked at the file's current indirect blocks. Writing either after the current end of the file or before it may require allocating new data blocks and perhaps new indirect blocks (in extreme cases, several levels of them). The existing indirect blocks for the file may or may not already be in memory; if they aren't, the kernel doesn't know whether it can accept the write until it reads them off disk, which may take a while. The kernel can optimistically accept the write, start allocating space for all of the necessary data and metadata, and then abort if it runs into a quota limit by the end. But if it does this, it has to have the ability to roll back all of those allocations it may already have done.
(Similar issues come up when you're creating or renaming files and more broadly whenever you're adding entries to a directory. The directory may or may not have a free entry slot already, and adding your new or changed name may cause a cascade of allocation changes, especially in sophisticated directory storage schemes.)
Features like compression and deduplication during writes complicate this picture further, because you don't know how much raw data you're going to need to write until you've gone through processing it. You can even discover that the user will use less space after the write than before, if they replace incompressible unique data with compressible or duplicate data (an extreme case is turning writes of enough zero bytes into holes).
If the filesystem is a modern 'copy on write' one such as ZFS, overwriting existing data may or may not use extra space even without compression and deduplication. Overwriting data allocates a new copy of the data (and metadata pointing to it), but it also normally frees up the old version of the data, hopefully giving you a net zero change in usage. However, if the old data is part of a snapshot or otherwise referenced, you can't free it up, and so an 'overwrite' of 128 KB may consume just as much space as appending 128 KB of new data to the file.
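The copy-on-write accounting can be sketched in a few lines, assuming a simplified model where each old block carries a reference count and is freeable only when nothing else (such as a snapshot) still points at it:

```python
def cow_overwrite_delta(nblocks: int, old_block_refcounts: list[int]) -> int:
    """Net change in allocated blocks when overwriting nblocks existing
    blocks in a copy-on-write filesystem.  The new copies always cost
    nblocks; each old block is freed only if its reference count shows
    that nothing else (a snapshot, a clone, dedup) still uses it."""
    freed = sum(1 for refs in old_block_refcounts if refs == 1)
    return nblocks - freed
```

With no snapshots the delta is zero (every old block is freed as its replacement is written); with every block held by a snapshot, the 'overwrite' costs as much as an append.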
Filesystems with journals add more issues and questions, especially the question of whether you add operations to the journal before you know whether they'll hit quota limits or only after you've cleared them. The more you check before adding operations to the journal, the longer user processes have to wait, but the less chance you have of hitting a situation where an operation that's been written to the journal will fail or has to be annulled. You can certainly design your journal format and your journal replay code to cope with this, but it makes life more complicated.
At this point you might wonder how filesystems that support quotas ever have decent performance, if checking quota limits involves all of this complexity. One answer is that if you have lots of quota room left, you can cheat. For instance, the kernel can know or estimate the worst case space usage for your 128 KB write, see that there is tons of room left in your quota even in the face of that, and not delay while it does further detailed checks. One way to deal with the SMP issue is to keep a very broad count of how much outstanding write IO there is (which the kernel often wants anyway) and not bother with synchronizing quota information if the total outstanding writes are significantly less than the quota limit.
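The shape of that cheat can be sketched as a fast path that skips synchronization entirely when there's lots of headroom, and a locked slow path otherwise. This is an illustrative sketch, not any real kernel's implementation; the headroom threshold (here, half the limit) is an arbitrary choice:

```python
import threading

class QuotaAccount:
    def __init__(self, limit: int):
        self.limit = limit
        self.lock = threading.Lock()
        self.usage = 0        # authoritative usage, lock-protected
        self.outstanding = 0  # rough, racy count of in-flight write bytes

    def try_charge(self, worst_case: int) -> bool:
        # Fast path: with enough headroom, even the worst case plus all
        # in-flight writes can't breach the limit, so skip the lock.
        if self.usage + self.outstanding + worst_case < self.limit // 2:
            self.outstanding += worst_case  # benign race: it's an estimate
            return True
        # Slow path: near the limit, take the lock and account precisely.
        with self.lock:
            if self.usage + worst_case <= self.limit:
                self.usage += worst_case
                return True
            return False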
(I didn't realize a lot of these lurking issues until I started to actually think about what's involved in checking and limiting quotas.)
TLS server certificate verification has two parts (and some consequences)
One of the unusual and sometimes troublesome parts of TLS is that verifying a TLS server's certificate actually has two separate parts, each critical. The first part is verifying that you have a valid certificate, one that is signed by a certificate chain that runs up to a known CA, hasn't expired, hasn't been revoked (or is asserted as valid), perhaps appears in a CT log, and so on. The second, equally critical part is making sure that this valid certificate is actually for the server you are talking to, because there are a lot of valid certificates out there and more or less anyone can get one for some name. Failing to do this opens you up to an obvious and often trivial set of impersonation attacks.
However, there is an important consequence of this for using TLS outside of the web, which is that you must know the name of the server you're supposed to be talking to in order to verify a server's TLS certificate. On the web, the server name is burned into the URL; you cannot make a request without knowing it (even if you know it as an IP address). In other protocols that also use TLS, this may not be true or it may not be clear what name for the server you should use (if there are levels of aliases, redirections, and so on going on, possibly including DNS CNAMEs).
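Python's ssl module makes the two parts, and the 'you must know the name' requirement, unusually explicit (the defaults already behave this way; the sketch just spells them out):

```python
import socket
import ssl

def make_verifying_context() -> ssl.SSLContext:
    """Both halves of server certificate verification, set explicitly."""
    ctx = ssl.create_default_context()   # loads the system CA store
    ctx.verify_mode = ssl.CERT_REQUIRED  # part one: valid chain to a known CA
    ctx.check_hostname = True            # part two: certificate is *for this name*
    return ctx

def connect(host: str, port: int = 443) -> ssl.SSLSocket:
    # server_hostname is what part two checks against; if you don't know
    # the right name, you cannot fill this in meaningfully.
    sock = socket.create_connection((host, port))
    return make_verifying_context().wrap_socket(sock, server_hostname=host)
```

Turning off check_hostname while leaving CERT_REQUIRED on gives you exactly the 'we will not verify the server name' situation: a valid certificate, but possibly anyone's.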
The corollary of this is that it's much harder to use TLS with a
protocol that doesn't
let you start with a server
name somehow. If the protocol is 'I broadcast a status packet and
something responds' or 'someone gives me a list of IP addresses of
resources', you sort of have a problem. Sometimes you can resolve
this problem by fiat, for example by saying 'we will do a DNS PTR
query to resolve this IP address to a name and then use the name',
and sometimes you can't even get that far.
(You can also say 'we will not verify the server name', but then you only have part of TLS.)
That's all very abstract, so let's go with two real examples. The first one is 802.1X network authentication, which I tangled with recently. When I dealt with this on my phone, I was puzzled why various instructions said to make sure that the TLS certificate was for a specific name (and I even wondered if this meant that the TLS certificate wasn't being verified at all). But the reason you have to check the name is that the 802.1X protocol doesn't have any trustworthy way of asserting what the authentication server should be called; almost by definition, you can't trust anything the 802.1X server itself claims about what it should be called, and the only other information you have is (perhaps) the free-form name of the network (as, for example, the wireless SSID). The server name and the trust have to be established out of band, and on phones that's through websites with instructions.
(On Linux you have to explicitly configure the expected server name in advance if you want security.)
The second example is wanting to use DNS over TLS or DNS over HTTPS to talk to the DNS servers you find through DHCP or have in a normal resolv.conf. In both of these cases, the protocol and the configuration file only specify the DNS servers by IP address, with no names associated with them (and the IPs may well be RFC 1918 private addresses). It's possible to turn this into a server name if you want to through DNS, but you wind up basically having to trust what the DNS server is telling you about what its TLS server name should be.
(You can augment DHCP and resolv.conf with additional information about the server names you should look for, but then you need to define the format and protocol for this information, and you need more moving parts in order to get your TLS protected DNS queries.)
PS: Sometimes the first part of TLS is sufficiently important by itself, because blocking passive eavesdropping can be a significant win. But it's at least questionable, and you need to consider your threat models carefully.
The problem of 'triangular' Network Address Translation
In my entry on our use of bidirectional NAT and split horizon DNS, I mentioned that we couldn't apply our bidirectional NAT translation to all of our internal traffic in the way that we can for external traffic for two reasons, an obvious one and a subtle one. The obvious reason is our current network topology, which I'm going to discuss in a sidebar below. The more interesting subtle reason is the general problem of what I'm going to call triangular NAT.
Normally when you NAT something in a firewall or a gateway, you're in a situation where the traffic in both directions passes through you. This allows you to do a straightforward NAT implementation where you only rewrite one of the pair of IP addresses involved; either you rewrite the destination address from you to the internal IP and then send the traffic to the internal IP, or you rewrite the source address from the internal IP to you and then send the traffic to the external IP.
However, this straightforward implementation breaks down if the return traffic will not flow through you when it has its original source IP. The obvious case of this is if a client machine is trying to contact a NAT'd server that is actually on its own network. It will send its initial packet to the public IP of the NAT'd machine and this packet will hit your firewall, get its destination address rewritten, and then passed to the server. However, when it replies to the packet, the server will see a destination IP on its local network and just send it directly to the client machine. The client machine will then go 'who are you?', because it's expecting the reply to come from the server's nominal public IP, not its internal one.
(Asymmetric routing can also create this situation, for instance if the machine you're talking to has multiple interfaces and a route to you that doesn't go out the firewall-traversing one.)
In general the only way to handle triangular NAT situations is to force the return traffic to flow through your firewall by always rewriting both IP addresses. Unfortunately this has side effects, the most obvious one being that the server no longer gets the IP address of who it's really talking to; as far as it's concerned, all of the connections are coming from your firewall. This is often less than desirable.
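On Linux, the two-sided rewrite is the classic 'hairpin NAT' pair of iptables rules. This is an illustrative fragment, with all addresses made up (203.0.113.10 as the public IP, 192.168.1.10 as the internal server, 192.168.1.0/24 as the shared subnet); real setups vary:

```sh
# Rewrite the destination: public IP -> internal server IP.
iptables -t nat -A PREROUTING -d 203.0.113.10 \
    -j DNAT --to-destination 192.168.1.10

# For clients on the *same* subnet as the server, also rewrite the
# source so replies are forced back through this firewall.  The side
# effect described above: the server now sees the firewall's address
# as the client, not the real client's.
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.10 \
    -j MASQUERADE
```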
(As an additional practical issue, not all NAT implementations are very enthusiastic about doing such two-sided rewriting.)
Sidebar: Our obvious problem is network topology
At the moment, our network topology basically has three layers: beyond the outside world, there is our perimeter firewall, our public IP subnets with various servers and firewalls, and then our internal RFC 1918 'sandbox' subnets (behind those firewalls). Our mostly virtual BINAT subnet with the public IPs of BINAT machines basically hangs off the side of our public subnets. This creates two topology problems. The first topology problem is that there's no firewall to do NAT translation between our public subnets and the BINAT subnet. The larger topology problem is that if we just put a firewall in, we'd be creating a version of the triangular NAT problem because the firewall would have to basically be a virtual one that rewrote incoming traffic out the same interface it came in on.
To make internal BINAT work, we would have to actually add a network layer. The sandbox subnet firewalls would have to live on a separate subnet from all of our other servers, and there would have to be an additional firewall between that subnet and our other public subnets that did the NAT translation for most incoming traffic. This would impose additional network hops and bottlenecks on all internal traffic that wasn't BINAT'd (right now our firewalls deliberately live on the same subnet as our main servers).
Programs that let you jump around should copy web browser navigation
As part of their user interface, many programs these days have some way to jump around (or navigate around) the information they display (or portions of it, such as a help system). Sometimes you do this by actually clicking on things (and they may even look like web links); sometimes you do this through keyboard commands of various sorts.
(The general common form of these is that you are looking at one thing, you perform an action, and you're now looking at another thing entirely. Usually you don't know what you're going to get before you go to it.)
Programs have historically come up with a wide variety of actual interfaces for this general idea. Over the years, I have come to a view on how this should work, and that is the obvious one; jumping around in any program should work just like it does in web browsers, unless the program has a very good reason to surprise people. Your program should work the same as browsers both in the abstract approach and also, ideally, in the specific key and mouse bindings that do things.
There are two reasons for this. The first reason is simply that people already spend a lot of time navigating around in browsers, so they are very familiar with it and generally pretty good at it. If you deviate from how browsers behave, people will have to learn your behavior instead of being able to take advantage of their existing knowledge. The second and more subtle reason is that browsers have spent a lot of time working on developing and refining their approach to navigation, almost certainly more time than you have. If you have something quite different than a web page environment, perhaps you can still design a better UI for it despite having spent much less time, but the more your setup resembles a series of web pages, the less likely that is.
At this point you might ask what the general abstract approach of web browser navigation is. Here are my opinions:
- You can move both back and then forward again through your sequence
of jumps, except for rare things which cannot be repeated. In a
normal UI, non-repeatable things should probably use a completely
different interface from regular 'follow this thing' jumps.
- The sequence is universal by default in that it doesn't matter what
sort of a forward jump you made and regardless of where it took
you, you can always go back with a single universal action, and
then forward again by another one. You can add extra sorts of
back and forward traversal that only act on some sorts of jumps
if you want to, but the default should be universal.
(As far as the destination goes, notice that browsers have a universal 'back' action regardless of whether the link was to another anchor on the same web page or to an entirely different web page.)
- By default, the sequence of jumps is not a global one but is specific
to your current pane, whatever a pane is in your application
(a window, a tab, a section of the window). What you do in one pane
should not contaminate the forward/back sequence of another pane,
because it's generally harder to keep track of a global, cross-pane
history and remember where 'forward' and 'back' operations will
take you in it.
(There are clever uses of a global sequence and you can offer one, but it shouldn't be the default.)
- It should be possible to have the destination of a jump not overwrite the current stuff you're looking at but instead open in another pane. This should generally not be the default, but that's somewhat situational.
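The universal back/forward model above can be sketched as a tiny per-pane history. The one subtle browser behavior worth copying exactly: a new jump discards any 'forward' entries you had gone back past.

```python
class History:
    """Browser-style per-pane jump history with universal back/forward."""

    def __init__(self, start):
        self.entries = [start]
        self.pos = 0

    def jump(self, dest):
        # A new jump truncates the forward part of the sequence,
        # exactly as following a link does in a browser.
        del self.entries[self.pos + 1:]
        self.entries.append(dest)
        self.pos += 1

    def back(self):
        if self.pos > 0:
            self.pos -= 1
        return self.entries[self.pos]

    def forward(self):
        if self.pos < len(self.entries) - 1:
            self.pos += 1
        return self.entries[self.pos]
```

Each pane gets its own History instance; a global sequence would be the same class shared across panes, which is exactly why it's confusing as a default.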
There are probably other aspects of browser navigation that I haven't thought of simply because I'm so accustomed to them.
There are still reasons to use different interfaces here under the right circumstances, but you should be quite sure that your different interface really is a significant advantage and that a decent amount of your target audience will use your program a lot. Editors are generally a case of the latter, but I'm not convinced that most of them are a case of the former.
(At this point in time I suspect that this is a commonly held and frequently repeated position, but I feel like writing it down anyway.)
Some limitations of wifi MAC address randomization
In a comment on my entry on an Android-based gadget with aggressive wifi MAC randomization, Jukka wrote:
The DHCP issues notwithstanding, I applaud this kind of MAC randomization. Easy to do manually, of course, but I haven't realized that also off-the-shelf products are doing it. Good for them: so-called WiFi tracking is nowadays rampant in public spaces. They're even combining this tracking with facial recognition and whatnot; cf. <link>
I sure hope that your university is not among these "smart offices"...
My university isn't (and hopefully never will be; I think a lot of people would object), but part of my reaction to Jukka's mention is that if it was, wifi MAC randomization wouldn't protect me because all of our wifi networks require authentication, generally through things like 802.1x. The university doesn't need to even try to track me indirectly by device MAC address when it can track me directly by authentication.
(There are no anonymous wifi networks at work, or at least there aren't supposed to be; it's a university policy that all wireless network access has to be identifiable and authenticated in some way.)
This points directly to the fundamental limitation of MAC randomization, which is that MAC randomization conceals your device's identity but not yours. Actively authenticated network access is one way to identify and track you, but it's far from the only one. For instance, a public wifi network with a captive portal could drop a cookie on you the first time you visit and then try to read it back out on later visits, so it could associate your current MAC with past ones (and such a cookie would be a first-party cookie, so not subject to many browser precautions). It could even sell this as a convenience; if you have your identifying cookie, it'll automatically authorize you since it has an acceptable indication that you've already agreed to the network's TOS.
(Various intrusive forms of traffic monitoring can be used to create some sort of fingerprint of your activity and your device's signature actions. Some of this will get harder when encrypted SNI rolls out, since then traffic monitoring will have to work harder to identify what HTTPS sites your device is checking in with.)
This doesn't make MAC randomization completely pointless on authenticated networks and other such environments; if nothing else, it potentially hides information on how many distinct devices you have (although that may be revealed if the devices have other identifying quirks, such as fixed DHCP 'host name' fields).
A wifi MAC address randomization surprise in a new Android gadget
I recently picked up a new Android-based gadget and discovered, to my unpleasant surprise, that it has what I can best describe as "unusually aggressive" wireless MAC address randomization. The most basic form of MAC randomization is to randomize the MAC address that you use before you're connected to a wireless network, which prevents people from re-identifying your device as you move around. To be more thorough you can then use a different MAC per wireless network (SSID), so that people can't easily associate you across different wireless networks. A really aggressive setting is to use a different random MAC every time you connect again to a known network; this keeps the network from tracking you across time.
(This article gives the example of airport wifi as a time where you might want to use a different random MAC on every connection. In general, any public wifi is probably a good usage case for that. See also this and the Arch wiki.)
This particular Android gadget is even more aggressive than this. Not only does it use a random MAC address, it changes the address on a regular basis and does so even when connected to a wireless network and holding a DHCP lease. In fact I have DHCP logs showing it attempting to preemptively renew a non-expired DHCP lease using a different MAC address than it used to get the lease (this doesn't go well, since as far as the DHCP server is concerned the IP address is taken by someone else). The vendor's support documentation links to this Android 9 developer page on MAC randomization, but that seems to only be talking about stable per-SSID MAC address randomization, not this sort of random and actively changing MAC address.
This aggressive randomization is also potentially pointless, because as part of its DHCP requests the gadget broadcasts a DHCP host name of 'android-<some fixed hex digits>'. If this is unique per device, it's an easy tracking identifier, and even if not it may be more tracking than you'd like. This particular gadget also only talks to wireless networks that you specifically tell it to, and generally those are going to be high-trust ones; aggressive address randomization for your home wireless network seems somewhere between pointless and problematic (if it causes issues like DHCP pool exhaustion as the gadget churns through DHCP leases).
(Sadly this really is a DHCP host name, not a DHCP client identifier. The normal ISC DHCP server can assign static IPs to the latter but not the former.)
As a sysadmin, I hope that this sort of very aggressive MAC address randomization doesn't become common among Android devices. Our departmental wireless network mostly requires stable MAC addresses, and on top of that we only have so many free DHCP leases (although we could expand the pool, since we're using a /16 for the network as a whole). Android devices that change their MAC all the time would give our people a fair amount of heartburn, and there's not much we can do about it without a major change in our wireless architecture (which is unlikely).
(Registering a stable MAC is optional on our wireless network, but if your device doesn't have a registered one, the only thing it's allowed to talk to is our VPN servers. Registered devices can talk to the outside world too.)
PS: This particular gadget uses Android as a substrate; it runs custom software on custom hardware, and the fact that it's running on top of Android is barely mentioned in the documentation and mostly only discoverable through things like network scanning or finding out that it supports USB MTP. At first its use of Android surprised me, but then I realized that Android has become a perfectly respectable embedded OS and there's a wide ecology of people who make Android-capable hardware and peripherals that will connect to it.
(This elaborates on some grumpy Tweets of mine.)
Text UIs and the problem of discoverability
After setting up GNU Emacs to use LSP for Go and Python, I've been digging into what I can do through lsp-mode by doing things like finding out keybindings and reading up on features. One of the questions I wound up asking myself was if I was ever going to use various of these features, or whether they'd suffer the same fate as a bunch of Go programs and Emacs packages I had installed for similar purposes but then never used, such as gorename (which is directly supported by go-mode).
Part of my problem here, both in the past and today (with things which I have installed because of an interest of mine), is simply remembering that these things exist and how to invoke them. I don't do tasks like renaming Go identifiers or paging through the past versions of files in git frequently enough to remember these things off the top of my head. In fact, I didn't even remember that I had some of these Go Emacs addons installed until I switched to LSP based Go editing and had to revisit that area of my Emacs configuration.
My outsider's impression is that I wouldn't have as much of this problem in a full bore GUI IDE, simply because with a GUI there's a lot more places to put reminders that things exist (and many of these reminders can be contextual, which cuts down how many are shown by excluding inapplicable ones). Mouse-driven GUIs also offer a larger set of options for how to interact with the program and expose things; you have text input, just like a text UI, but also mouse buttons popping up menus, hovering the mouse over things, and so on.
(Of course, a GUI can make bad or limited use of text input, and GNU Emacs can to some extent make use of the mouse and of GUI elements if you're running in the right environment. That GNU Emacs has a 'LSP' menu of things you can do with lsp-mode is actually quite helpful; I probably wouldn't remember the M-x invocations for half of them, much less know that they're possible.)
People have written some GNU Emacs packages to help with this, such as which-key (which I'm now using). But I think it's a genuinely hard problem in a text UI; there's only so many places to put things and prompt people, and you can't do it too often or your UI is far too busy. My GNU Emacs setup probably has hundreds of useful things I can run with M-x under various circumstances, but I'm extremely unlikely to discover them through ordinary use or know where to look for them in the way that I might learn to check, say, a mouse button popup menu in a GUI IDE.
(Similar issues apply to vim, of course. There are tons of useful vim commands that I don't know or can't remember; every so often I discover a new one, and sometimes it sticks.)
PS: I think that good text UIs are often great for experienced regular users, because their constraints push them towards efficiency and direct invocation of things. If you can remember all of the vim commands or GNU Emacs keybindings and M-x functions, you can do a lot very fast, and if you use vim or GNU Emacs all the time for something you'll wind up remembering all the things you use.
(This is related to but not quite the same thing as how custom interfaces benefit people who use them regularly, while standardized interfaces help infrequent users.)
One core problem with DNSSEC
As a sysadmin, my view of DNSSEC is that life is too short for me to debug other people's configuration problems.
One fundamental problem of DNSSEC today is that it suffers from the false positive problem, the same one that security alerts suffer from. In practice today, for almost all people almost all of the time, a DNSSEC failure is not a genuine attack; it is a configuration mistake, and the configuration mistake is almost never on the side making the DNS query. This means that almost all of the time, DNSSEC acts by stopping you from doing something safe that you want to do and further, you can't fix the DNSSEC problem except by turning off DNSSEC, because it's someone else's mistake (in configuration, in operation, or in whatever).
This is not a recipe for a nice experience for actual people; this is a recipe for mathematical security. As such, it is a security failure. Any security system that is overwhelmed by false positives has absolutely failed to tackle the real problem, which is that your security system must be useful. Security systems that are not useful get turned off, and that is exactly what is happening with DNSSEC.
Another big problem with DNSSEC today, one that magnifies the core problem, is that it has terrible visibility and diagnostics (at least in common implementations). If there is a DNSSEC related failure, generally what happens is that you don't get DNS answers. You don't get told that what has failed is DNSSEC and you don't get a chance to bypass it and proceed anyway (however dangerous that choice might be in practice); instead you mysteriously fail. Mysterious failures are what you could politely call a terrible user experience. Mysterious failures that are not your fault and that you cannot fix (except by turning off DNSSEC) are worse.
(DNSSEC advocates may protest that this is not how it is supposed to work. I am afraid that security measures exist in the real world, where how it actually works is what actually matters. Once again, security is not mathematics.)
PS: To the extent that people are experiencing DNS attacks, the modern Internet world has chosen to deal with it in another way, through HTTPS and TLS in general.
(I have written before about my older experiences with DNSSEC and how I thought DNSSEC would have to be used in the real world. Needless to say, the DNSSEC people have continued with the program of 'everyone must get it right all the time, no errors allowed, hard failures for everyone' since back then in 2014. For my views on DNSSEC in general, well, see this.)
Non-uniform caches are harder to make work well
One way to view what can happen to your Unix system when you don't have swap space is that it's one more case of the Unix virtual memory system facing additional challenges because it is what I will call a non-uniform cache. In a uniform cache, all entries come from the same source at the same speed (more or less), can naturally be accessed as fast and as frequently as each other, and can be evicted or freed at the same speed and volume. In a non-uniform cache, some or many of those are not true. A Unix system without swap is an extreme case, since one sort of pages cannot be evicted from RAM at all, but Unix has experienced problems here before, for example when it introduced a unified buffer cache and discovered that certain sorts of pages could naturally be accessed a lot faster than others.
One source of problems is that a non-uniform cache mingles together two factors when you observe pressure on it. In a uniform cache, the observed pressure on elements in the cache is a true reflection of the real needs of the things using the cache. In a non-uniform cache, the pressure you observe is some combination of how much elements are really needed and how readily they can be fetched, accessed, and dropped. To work out the true pressure and balance the cache properly, the system needs some way to split these two factors apart again, generally by knowing or working out the impact of the various sorts of non-uniformity.
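One way to see the two-factor split is in how a non-uniform cache has to rank entries for eviction. A toy sketch (the scoring function is illustrative, not from any real kernel):

```python
def eviction_order(entries):
    """Rank cache entries for eviction in a non-uniform cache, cheapest
    loss first.  Each entry is (name, hit_rate, refetch_cost).  Raw hit
    rate alone conflates 'really needed' with 'cheap to access'; also
    weighting by how expensive the entry is to bring back separates the
    two factors, so a rarely touched but very costly-to-refetch entry
    can outrank a frequently touched but cheap one."""
    return sorted(entries, key=lambda e: e[1] * e[2])
```

In a uniform cache all the refetch costs are equal and this degenerates back to plain frequency-based ranking, which is exactly why the problem is easy to miss until the cache turns out to be non-uniform.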
(Then, of course, it needs to be able to balance the cache at all. Having no swap space is an extreme case, but even with swap space you can usually only evict so many anonymous pages from RAM.)
Automatically coping with or working out the impact of non-uniformity is a hard problem, which is one reason that tuning knobs proliferate in non-uniform caches (another is that punting the problem to the cache's users is a lot easier than even trying). Another surprisingly hard problem seems to be realizing that you even have a non-uniform cache at all, or at least that the non-uniformity is going to matter and how it will manifest (many caches have some degree of non-uniformity if you look at them closely enough).
(This probably shouldn't be surprising; in practice, it's impossible to fully understand what complex systems are doing in advance.)
One corollary of this for me is that if I'm creating or dealing with a cache, I should definitely think about whether it might be non-uniform and what effects that might have. It's tempting to think that your cache is sufficiently uniform that you don't have to dig deeper, but it's not always so, and ignoring that a cache is non-uniform is a great way to get various sorts of bad and frustrating performance under load.
(Of course if I really care I should profile the cache for the usual reasons.)
The practical difference between CPU TDP and observed power draw illustrated
Last year, in the wake of doing power measurements on my work machine and my home machine, I wrote about how TDP is misleading. Recently I was re-reading this Anandtech article on the subject (via), and realized that I actually have a good illustration of the difference between TDP and power draw, and on top of that it turns out that I can generate some interesting numbers on the official power draw of my home machine's i7-8700K under load.
I'll start with the power consumption numbers for my machines. I have a 95 W TDP Intel CPU, but when
I go from idle to a full load of
mprime -t, my home machine's power
consumption goes from 40 watts to 174 watts, an increase of 134
watts. Some of the extra power consumption will come from the PSU
not being 100% efficient, but based on this review,
my PSU is still at least 90% efficient around the 175 watt level
(and less efficient at the unloaded 40 watt level). Other places
where the power might vanish on the way to the CPU are the various
fans in the system and any inefficiencies in power regulation and
supply that the motherboard has.
(Since motherboard voltage regulation systems get hot under load, they're definitely not 100% efficient. That heat doesn't appear out of nowhere.)
However, there's another interesting test that I can do with my
home machine. Since I have a modern Intel CPU, it supports Intel's
RAPL (Running Average Power Limit) system
and Mozilla has a
rapl program in the Firefox source tree
that will provide a report that is more or less the CPU's power
usage, as Intel thinks it is.
Typical output from
rapl for my home machine under light load, such as
writing this entry over a SSH connection in an
xterm, looks like this
(over 5 seconds):
total W = _pkg_ (cores + _gpu_ + other) + _ram_ W
[...]
 #06  3.83 W =  2.29 ( 1.16 +  0.05 +  1.09) +  1.54 W
When I load my machine up with 'mprime -t', I get this (also over 5 seconds):
 #146 106.23 W = 100.15 (97.46 +  0.01 +  2.68) + 6.08 W
 #147 106.87 W = 100.78 (98.04 +  0.06 +  2.68) + 6.09 W
Intel's claimed total power consumption for all cores together is surprisingly close to their 95 W TDP figure, and Intel says that the whole CPU package has increased its power draw by about 100 watts. That's not all of the way to my observed 134 watt power increase, but it's a lot closer than I expected.
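On Linux you don't need Mozilla's tool to read these counters; the kernel's powercap interface exposes the same RAPL data as a monotonically increasing microjoule counter. A sketch (the sysfs path is the usual location for package 0, but may differ per system, and reading it can require root):

```python
# Usual sysfs location of the package 0 RAPL energy counter.
RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj() -> int:
    """Read the cumulative package energy counter, in microjoules."""
    with open(RAPL_ENERGY) as f:
        return int(f.read())

def watts(energy_uj_before: int, energy_uj_after: int, seconds: float) -> float:
    """Average package power over an interval, from two counter reads."""
    return (energy_uj_after - energy_uj_before) / 1e6 / seconds
```

Two reads a few seconds apart and a subtraction reproduce the per-interval wattage figures that rapl prints.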
(Various things I've read are inconsistent about whether or not I should be expecting my CPU to be exceeding its TDP in terms of power draw under a sustained full load. Also, who knows what the BIOS has set various parameters to. I haven't turned on any overclocking features other than an XMP memory profile, but that doesn't necessarily mean much with PC motherboards.)
As far as I know AMD Ryzen has no equivalent to Intel's RAPL, so I
can't do similar measurements on my work machine. But now that
I do the math on my power usage measurements, both the Ryzen and the Intel increased
their power draw by the same 134 watts as they went from idle to a
mprime -t load. Their different power draw under full load
is entirely accounted for by the Ryzen idling 26 watts higher than the Intel does.