Wandering Thoughts archives


The practical difference between CPU TDP and observed power draw illustrated

Last year, in the wake of doing power measurements on my work machine and my home machine, I wrote about how TDP is misleading. Recently I was re-reading this Anandtech article on the subject (via), and realized that I actually have a good illustration of the difference between TDP and power draw, and on top of that it turns out that I can generate some interesting numbers on the official power draw of my home machine's i7-8700K under load.

I'll start with the power consumption numbers for my machines. I have a 95 W TDP Intel CPU, but when I go from idle to a full load of mprime -t, my home machine's power consumption goes from 40 watts to 174 watts, an increase of 134 watts. Some of the extra power consumption will come from the PSU not being 100% efficient, but based on this review, my PSU is still at least 90% efficient around the 175 watt level (and less efficient at the unloaded 40 watt level). Other places where the power might vanish on the way to the CPU are the various fans in the system and any inefficiencies in power regulation and supply that the motherboard has.

(Since motherboard voltage regulation systems get hot under load, they're definitely not 100% efficient. That heat doesn't appear out of nowhere.)

However, there's another interesting test that I can do with my home machine. Since I have a modern Intel CPU, it supports Intel's RAPL (Running Average Power Limit) system (also), and Mozilla has a rapl program in the Firefox source tree (also) that will provide a report that is more or less the CPU's power usage, as Intel thinks it is.
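(On Linux, the same RAPL counters are also exposed through the powercap sysfs tree, so you don't strictly need Mozilla's program to get a rough number. Here is a minimal sketch, assuming the intel_rapl driver is loaded and the usual /sys/class/powercap/intel-rapl:0/energy_uj path; the counter is in microjoules and wraps around eventually, which a real tool would have to handle.)

```python
import time

def rapl_power_watts(read_energy_uj, interval=1.0, sleep=time.sleep):
    """Average power in watts over `interval` seconds, computed from
    two samples of a RAPL energy counter that counts microjoules."""
    start = read_energy_uj()
    sleep(interval)
    end = read_energy_uj()
    # watts = joules / seconds; the counter is in microjoules.
    return (end - start) / 1e6 / interval

def sysfs_energy_reader(path="/sys/class/powercap/intel-rapl:0/energy_uj"):
    """Reader for the package-level counter exposed by the intel_rapl
    powercap driver on Linux (may require root to read)."""
    def read():
        with open(path) as f:
            return int(f.read())
    return read
```

Calling rapl_power_watts(sysfs_energy_reader()) on a suitable machine should give a number roughly comparable to rapl's _pkg_ column.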

Typical output from rapl for my home machine under light load, such as writing this entry over an SSH connection in an xterm, looks like this (over 5 seconds):

    total W = _pkg_ (cores + _gpu_ + other) + _ram_ W
#06  3.83 W =  2.29 ( 1.16 +  0.05 +  1.09) +  1.54 W

When I load my machine up with 'mprime -t', I get this (also over 5 seconds):

#146 106.23 W = 100.15 (97.46 +  0.01 +  2.68) +  6.08 W
#147 106.87 W = 100.78 (98.04 +  0.06 +  2.68) +  6.09 W

Intel's claimed total power consumption for all cores together is surprisingly close to their 95 W TDP figure, and Intel says that the whole CPU package has increased its power draw by about 100 watts. That's not all of the way to my observed 134 watt power increase, but it's a lot closer than I expected.
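As a back-of-the-envelope reconciliation, we can put the entry's numbers together. This treats the 90% PSU efficiency as constant, which is a simplification (efficiency is lower at idle), so the leftover figure is only approximate:

```python
# Figures from this entry; the 90% PSU efficiency is an estimate
# from the PSU review, treated here as constant for simplicity.
idle_wall, load_wall = 40.0, 174.0   # watts measured at the wall
psu_efficiency = 0.90                # approximate, at ~175 W load
rapl_idle, rapl_load = 3.83, 106.5   # total watts reported by rapl

wall_delta = load_wall - idle_wall           # 134 W more at the wall
dc_delta = wall_delta * psu_efficiency       # ~120.6 W more past the PSU
rapl_delta = rapl_load - rapl_idle           # ~102.7 W more per RAPL
unaccounted = dc_delta - rapl_delta          # ~18 W: fans, VRM losses, etc.
```

The roughly 18 watt remainder is plausibly the fans and the motherboard's voltage regulation losses mentioned above.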

(Various things I've read are inconsistent about whether or not I should expect my CPU to exceed its TDP in terms of power draw under a sustained full load. Also, who knows what the BIOS has set various parameters to. I haven't turned on any overclocking features other than an XMP memory profile, but that doesn't necessarily mean much with PC motherboards.)

As far as I know, AMD Ryzen has no equivalent to Intel's RAPL, so I can't do similar measurements on my work machine. But now that I do the math on my power usage measurements, both the Ryzen and the Intel increased their power draw by the same 134 watts as they went from idle to a full mprime -t load. Their different power draw under full load is entirely accounted for by the Ryzen idling 26 watts higher than the Intel.

TDPAndPowerDraw written at 21:57:19


Wireless networks have names and thus identify themselves

Recently something occurred to me that sounds obvious when I phrase it this way: wireless networks have names. Wireless networks intrinsically identify themselves through their SSID. This is unlike wired networks, which mostly have no reliable identifier (one exception is wired networks using IEEE 802.1X authentication, since clients need to know what they're authenticating to).

This matters because there are a number of situations where programs might want to know what network they're on, so they can treat different networks differently. As a hypothetical example, browsers might want to apply different security policies to different networks. With wireless networking, the browser can at least theoretically know what network it's on; with wired networking, it probably can't (not reliably, at any rate).

(Another case where you might want to behave differently depending on what network you're connected to is DNS over HTTPS. On some networks, not only can you trust the DNS server you've been given not to be malicious, but you know you need to use it to resolve names properly. On random other networks, you may well want to bypass their DNS server in favour of a more trusted DoH server.)
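To make the DoH case concrete, the decision could be as simple as a lookup keyed on the SSID. This is a purely hypothetical sketch; the SSIDs and policy names here are made up:

```python
# Networks where the local DNS server is both trusted and necessary,
# e.g. because it resolves internal-only names. (Hypothetical SSIDs.)
TRUSTED_NETWORKS = {
    "office-wifi": "use-local-dns",
    "home-wifi":   "use-local-dns",
}

def dns_policy(ssid):
    """On a recognized network, use its DNS server; on any other
    (e.g. random public wifi), bypass it in favour of DoH."""
    return TRUSTED_NETWORKS.get(ssid, "use-doh")
```

The point is that the SSID gives the program a stable key to hang such a policy on, which a wired network generally doesn't.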

PS: I believe that Windows somewhat attempts to identify 'what network are we on' even on a wired connection, presumably based on various characteristics of the network it gets from DHCP information and other sources (this is apparently called 'network locations'). My experience with this is that it's annoying because it keeps thinking that my virtualized Windows system is moving from network to network even though it isn't. This makes a handy demonstration of the hazards of trying to do this for wired networks, namely that you're relying on heuristics and they can misfire in both directions.

WirelessNetworksNamed written at 00:38:08


Some brief views on iOS clients for Mastodon (as of mid 2019)

I'm on Mastodon and I have both an iPhone and an iPad, so of course I've poked at a number of iOS clients for Mastodon. (I'm restricting my views to Mastodon specifically instead of the Fediverse as a whole because I've never used any of these clients on a non-Mastodon instance.)

I'll put my UI biases up front; what I want is basically Tweetbot for Mastodon. I think that Twitter and Mastodon are pretty similar environments, and Tweetbot has a very well polished interface and UI that works quite well. Pointless departures from the Tweetbot experience irritate me, especially if they also waste some of the limited screen space. Also, I can't say that I've tried out absolutely every iOS Mastodon client.

  • Amaroq is a perfectly good straightforward iPhone Mastodon client that delivers the basic timeline experience you'd want, and it's free. Unfortunately it's iPhone only. It's not updated all that often, so it's not going to be up to date on the latest Mastodon features. As far as I know it only has one colour scheme, white on black (or dark blue, I'm not sure).

  • Tootdon is also a perfectly good straightforward Mastodon client, and unlike Amaroq it works on iPads too. It's free, but it has the drawback that it sends a copy of toots it sees off to its server, where they are (or were) only kept for a month and only used for searches. The Tootdon privacy policy is written in Japanese.

    My memory is that I found Tootdon not as nice as Amaroq on my iPhone, when I was still using both clients.

  • Toot! is the best iPad client that I've found and is pretty good on the iPhone too. It has all of the features you'd expect and a number of little conveniences (such as inlining partial content from a lot of links, which is handy when people I follow keep linking to Twitter; actually visiting Twitter links is a giant pain on a phone, entirely due to how Twitter acts). It's a paid client but, like Tweetbot, I don't regret spending the money.

    Toot! is not perfect on an iPad because it insists on wasting a bit too much space on its sidebar; you can see this in its iPad screen shots. It has a public issue tracker, so perhaps I should raise this grump there.

  • Mast is written by an enthusiastic and energetic programmer with many ideas, which very much shows in the end result. Some people like it a great deal and consider it the best designed iOS client. I think it's a good iPhone client but not particularly great on an iPad, where it wastes too much space all of the time and has UI elements that don't seem to work very well. It's a paid client too.

    (Mast has had several iterations of its UI on the iPad. As I write this, the current UI squeezes the actual toots into a narrow column in order to display at least one other column that I care much less about.)

    I find that Mast is a somewhat alarming client to use, because it has so many features that touching and moving my finger almost anywhere can start to do something. So far I haven't accidentally re-tooted something or the like, but it feels like it's only a matter of time. I really wish there was a way to get Mast to basically calm down.

I think that Mast and Toot! are very close to each other on the iPhone; there are some days where I prefer one and other days when I like the other better. On my iPad it is no contest; the only client I use there is Toot!, because I decided that I wasn't willing to put up with what Tootdon was doing (partly because I wasn't willing to be responsible for sending other people's toots off to some server somewhere under unclear policies).

Both Toot! and Mast have a black on white colour scheme, among others. Mast has many, many customizations and options; Toot! has a moderate amount that cover the important things.

(I have both Mast and Toot! because I bought Mast first based on some people's enthusiastic praise for it, then wound up feeling sufficiently dissatisfied with it on my iPad that I was willing to buy another client.)

PS: I have no opinion on Linux clients; so far I just use the website. This works well at the moment because my Mastodon timeline is low traffic and there's no point in checking it very often.

(The problem with visiting Twitter links in a phone browser is that Twitter keeps popping up interstitial dialogs that try to get me to log in and run a client. Roughly every other time I follow a Twitter link something gets shoved in the way and I have to dismiss it. Needless to say, I hate playing popup roulette when I follow links.)

MastodonIOSClients written at 21:41:10


SMART drive self-tests seem potentially useful, but not too much

I've historically ignored all aspects of hard drive SMART apart, perhaps, from how smartd would occasionally email us to complain about things, and sometimes those things would even be useful. There is good reason to be a SMART sceptic: many of the SMART attributes are underdocumented, SMART itself is peculiar and obscure, hard drive vendors have periodically had their drives outright lie about SMART things, and SMART attributes are not necessarily good predictors of drive failures (plenty of drives die abruptly with no SMART warnings, which can be unnerving). Certain sorts of SMART warnings are usually indicators of problems (but not always), yet the absence of SMART warnings is no guarantee of safety (see eg, and also Backblaze from 2016). Also, the smartctl manpage is very long.

But, in the wake of our flaky SMART errors and some other events with Crucial SSDs here, I wound up digging deeper into the smartctl manpage and experimenting with SMART self-tests, where the hard drive tries to test itself, and SMART logs, where the hard drive may record various useful things like read errors or other problems, and may even include the sector number involved (which can be useful for various things). Like much of the rest of SMART, what SMART self-tests do is not precisely specified or documented by drive vendors, but generally it seems that the 'long' self-test will read or scan much of the drive.

By itself, this probably isn't much different than what you could do with dd or a software RAID scan. From my perspective, what's convenient about SMART self-tests is that you can kick them off in the background regardless of what the drive is being used for (if anything), they probably won't get too much in the way of your regular IO, and after they're done they automatically leave a record in the SMART log, which will probably persist for a fair while (depending on how frequently you run self-tests and so on).
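That self-test log is plain text from 'smartctl -l selftest', so it's easy to pull results out of mechanically. Here is a sketch against the log format smartctl prints for ATA drives; the embedded sample is representative but made up, and NVMe and SCSI devices format this log differently:

```python
import re

# Representative (made-up) output from 'smartctl -l selftest /dev/sdX'
# for an ATA drive: one completed clean test, one with a read failure.
SAMPLE = """\
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     21042         -
# 2  Extended offline    Completed: read failure       90%     20876         123456789
"""

def parse_selftest_log(text):
    """Extract test description, status, and first-error LBA (if any)
    from smartctl's ATA self-test log output."""
    results = []
    for line in text.splitlines():
        # Columns are separated by runs of two or more spaces.
        m = re.match(r"#\s*(\d+)\s+(.+?)\s{2,}(.+?)\s{2,}(\d+)%\s+(\d+)\s+(\S+)", line)
        if m:
            num, desc, status, remaining, hours, lba = m.groups()
            results.append({
                "test": desc.strip(),
                "status": status.strip(),
                "lifetime_hours": int(hours),
                "lba_of_first_error": None if lba == "-" else int(lba),
            })
    return results
```

Something like this could turn "the drive leaves a record in its log" into an alert when a logged entry has a non-empty LBA_of_first_error column.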

On the flipside, SMART self-tests have the disadvantage that you don't really know what they're doing. If they report a problem, it's real, but if they don't report a problem you may or may not have one. A SMART self-test is better than nothing for things like testing your spare disks, but it's not the same as actually using them for real.

On the whole, my experimentation with SMART self-tests leaves me feeling that they're useful enough that I should run them more often. If I'm wondering about a disk and it's not being used in a way where all of it gets scanned routinely, I might as well throw a self-test at it to see what happens.

(They probably aren't useful and trustworthy enough to be worth scripting something so that we routinely run self-tests on drives that aren't already in software RAID arrays.)

PS: Much but not all of my experimentation so far has been on hard drives, not SSDs. I don't know if the 'long' SMART self-test on a SSD tests more thoroughly and reaches more bits of the drive internals than you can with just an external read test like dd, or conversely if it's less thorough than a full read scan.

SMARTSelfTestsMaybe written at 21:07:18
