Wandering Thoughts

2021-01-22

SMART Threshold numbers turn out to not be useful for us in practice

I was recently reading Rachel Wenzel's Predicting Hard Drive Failure with Machine Learning, and one of the things it taught me is that drive vendors, as part of their drive SMART data, provide a magical 'threshold' number that is supposed to indicate when a SMART attribute has reached a bad value. This is not compared to the raw SMART value, but instead to a normalized 'value' that is between 253 (best) and 1 (worst). We collect SMART data for all of our drives, but so far we only use the raw value, which is much more directly usable in almost all cases.

(For example, the raw SMART value for the drive's temperature is almost always its actual temperature in Celsius. The normalized value is not, and the mapping between actual temperature and the normalized value varies by vendor and perhaps drive model. The raw value that smartctl displays can also include more information; the drive temperature for one of my SSDs shows '36 (Min/Max 15/57)', for example. As the smartctl manpage documents, the conversion between the raw value and the normalized version is done by the drive, not by your software.)

The obvious thing to do was to extend our SMART data collection to include both the vendor-provided threshold value and the normalized value (and then also perhaps the 'worst' value seen, which is also reported in the SMART data). So today I spent a bit of time working out how to do that in our data collection script, and then before I enabled it (and quadrupled our SMART metrics count in Prometheus), I decided to see if it was useful and could provide us with good information. Unfortunately, the answer is no. There are a number of problems with the normalized SMART data in practice.
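
As an illustration (this is just a sketch, not our actual collection script), pulling those fields out of 'smartctl -A' output takes only a little parsing. The device list here is made up, and the code assumes smartctl's usual ATA attribute table layout:

    #!/usr/bin/env python3
    # Sketch: flatten 'smartctl -A' ATA attribute tables into one line per
    # attribute, carrying the normalized value, worst value, threshold, and
    # raw value. Assumes the usual column layout:
    #   ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    import subprocess

    def smart_attributes(device):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True).stdout
        in_table = False
        for line in out.splitlines():
            if line.startswith("ID#"):
                in_table = True
                continue
            if in_table:
                fields = line.split()
                if len(fields) < 10:
                    break           # a blank line ends the table
                yield (fields[0], fields[1], fields[3], fields[4], fields[5],
                       " ".join(fields[9:]))

    for dev in ["/dev/sda", "/dev/sdb"]:    # made-up device list
        for rec in smart_attributes(dev):
            print(dev, *rec)

One line per device and attribute, with whitespace separated fields, is exactly the sort of output that's pleasant to post-process with awk and friends.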

  • Sometimes there simply is no threshold value provided by the drive; smartctl will display these as '---' values in its output.

  • Not uncommonly, all of the normalized numbers are 0 (for the current value, the worst value, and the threshold). Since the minimum for the current value is supposed to be 1, this is a signal that none of them are useful. This happens both for attributes that don't really have a range, such as the drive's power on time, and for ones where I'd expect there to be a broad 'good to bad' range, like the 'lifetime writes' on a SSD.

    A closely related version of this is where the current value and the threshold are both zero, but the 'worst' value is some non-zero number (a common one on our drives is 100, apparently often the starting value). This is what a small number of our drives do for 'power on hours'.

  • Not infrequently the threshold is 0, which normally should be the same as 'there is no threshold', since in theory the current value can never reach 0. Even if we ignore that, it's difficult to distinguish a drive with a 0 threshold and a current value dropping to it from a drive where everything is 0 to start with.

  • Sometimes there's a non-zero current value along with a 'worst' value that's larger than it. This 'worst' value can be 253 or another default value like 100 or 200. It seems pretty clear that this is the drive deciding it's not going to store any sort of 'worst value seen', but it has to put something in that data field.

    We also have some drives where the current and worst normalized values for the drive's temperature appear to just be the real temperature in Celsius, which naturally puts the worst above the current.

Once I ran through all of our machines with a data collection script, I found no drives in our entire current collection where the current value was above 0 but below or at the threshold value. We currently have one drive that we very strongly believe is actively failing based on its SMART data, so I would have hoped that it would turn up in this check.
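
(The check itself is trivial once the data is flattened like the output of the earlier sketch; something like the following, treating '---' and other non-numeric fields as 'no data'.)

    # Sketch: flag attributes where the normalized value has dropped to or
    # below a non-zero threshold, ie 0 < VALUE <= THRESH. Fields of '---'
    # or anything else non-numeric count as 'no data'.
    def at_or_below_threshold(value, thresh):
        try:
            v, t = int(value), int(thresh)
        except ValueError:
            return False
        return 0 < v <= t

    # eg, looping over the flattened records from the earlier sketch:
    #   if at_or_below_threshold(value, thresh):
    #       print("at/below threshold:", dev, attr_id, name)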

Given the lack of any visible signal combined with all of the oddities and uncertainties I've already discovered, I decided that it wasn't worth collecting this information. We have a low drive failure rate to start with (and a lot of people's experiences suggest that a decent percentage of drives fail with no SMART warnings at all), so the odds that this would ever yield a useful signal that wasn't a false alarm seemed low.

(Sadly this seems to be the fate of most efforts to use SMART attributes. Including Wenzel's; it's worth reading the conclusions section in that article.)

On the positive side, now I have a script to print out all of the SMART attributes for all of the drives on a system in a format I can process with standard Unix tools. Being able to just use awk on SMART attributes is already a nice step forward.

SMARTThresholdNotUseful written at 23:20:22

2021-01-17

One reason to not trust SMART attribute data for consumer drives

In theory, disk drive SMART attributes should give us valuable information on how our disk drives are doing, how many problems they've already experienced, and how likely they are to fail (or how close they are to failure). In practice, there has always been a significant view among sysadmins (and other people) that consumer drives understate SMART attributes or flat out lie, and that their failure data is often not trustworthy (although sometimes it's actually useful).

(More neutral and informational SMART attributes like the drive temperature and the number of power on hours and power failures are more commonly seen as trustworthy.)

On the one hand, this seems wrong and perhaps crazy, since the whole purpose of SMART attributes is to provide information on the health of the drive. On the other hand, it's not too hard to see pressures pushing consumer drive vendors in this direction. The reality of life is that a certain number of people who buy their drives will look at the SMART data, see something alarming, and decide to try to return the drive as 'failing' or 'failed'. The more honest that drives are about failure data, the more such people there will be. Even if drive vendors don't accept the returns, merely dealing with them consumes people's time and thus runs up your customer support expenses. It also causes your customers to be unhappy with you, since you're refusing to replace drives that the customer thinks are 'bad'.

(It's no good to say that only sophisticated buyers will look at SMART data, because the reality of life is that any number of helpful people are going to make 'analyze your drive's health through its SMART attributes' applications. These people will have varied views of what is alarming in SMART attributes, which guarantees that some of the programs will be unhappy about drives that the drive vendor considers at least 'not something we'll replace'.)

My perception is that this is more likely to happen with consumer drives, which are bought by a wide variety of people and usually in small quantities by each one, rather than 'enterprise' drives, which tend to at least cost more and are often bought in larger quantities by the purchasers. On the other hand, large organizations with a lot of (enterprise) drives are perhaps more likely to keep a close eye on their drives, develop predictive models, and replace them before they fail outright, including wanting to return them to the drive maker for free replacements.

This sort of pressure is not really present for SMART attributes that are more neutral, such as drive temperature or power on hours, or that are explicitly excluded from warranty replacement, like the amount of data written to the drive. You may want to replace your SSD when the SMART attribute for that gets high, but the drive vendor will not give you a replacement for free, unlike if the drive is 'bad' (and still within warranty).

SMARTConsumerDriveProblem written at 01:07:49

2021-01-15

SMART attributes can predict SSD failures under the right circumstances

In theory, disk drive SMART attributes should give us valuable information on how our disk drives are doing and how likely they are to fail (or how close they are to total failure). Whether or not this happens in practice is a somewhat open question, although in 2016 Backblaze described the SMART attributes they found useful for hard drives. Whether SMART means anything much for consumer SSDs is even less clear; we have seen both SMART errors that meant effectively nothing and SSDs that failed abruptly with no notice. However, recently we did find a correlation that appears to be real for some of the SSDs in our fileservers and that one can tell a plausible story about.

We started out with Crucial/Micron 2 TB SSDs in our fileservers in a mixture of models (recent replacements have been WD SSDs). We've now had a (slow) series of failures of Crucial MX500s where all of the drives had a steadily rising count for SMART attribute 172, which for these drives is 'Erase Fail Count'. The count for this attribute starts out at zero, ticks up steadily for a while in twos and fours, and then starts escalating rapidly in the week or even day before the drive fails completely. Normal drives all have a zero in this attribute, unlike some other SMART attributes where healthy drives can have non-zero values. The exact number for the erase fail count hasn't been strongly predictive, which is to say that drives have failed with various numbers in it, but the accelerating rate of increase is.

(On drives with this attribute above zero, it's correlated with the values of attribute 5, 'Reallocated NAND Blk Count', and attribute 196, 'Reallocated Event Count'. The correlation doesn't go the other way; we have MX500s with non-zero count in either or both that still have zero for the erase fail count and don't seem to be failing.)
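
If you wanted to mechanize spotting this, the thing to look for is the acceleration rather than any absolute number. A minimal sketch over collected (time, raw count) samples might look like the following; the one week window and the 4x ratio are invented for illustration, not tested thresholds:

    # Sketch: flag an accelerating rise in a SMART attribute's raw value
    # (here attribute 172, 'Erase Fail Count') from (unix_time, raw_count)
    # samples in oldest-first order. The window and ratio are made up.
    def accelerating(samples, window=7 * 86400, ratio=4.0):
        if len(samples) < 4:
            return False
        cutoff = samples[-1][0] - window
        older = [s for s in samples if s[0] < cutoff]
        recent = [s for s in samples if s[0] >= cutoff]
        if len(older) < 2 or len(recent) < 2:
            return False

        def rate(points):
            (t0, c0), (t1, c1) = points[0], points[-1]
            return (c1 - c0) / max(t1 - t0, 1)

        r_old, r_new = rate(older), rate(recent)
        if r_old <= 0:
            return r_new > 0
        return r_new / r_old >= ratio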

If this SMART attribute's name and value are both honest, there's an obvious story about why this would be relatively predictive. If a SSD is trying to erase a NAND block and this operation fails to work correctly, the NAND block is now useless. Lose too many NAND blocks and your SSD has problems, and if there is an escalating rate of NAND block erase failures this is probably a bad sign.

Regardless of what this attribute really means and how it works in Crucial's MX500s, it's reassuring to at least have turned up some SMART attribute that predicts failures (at least on those of our SSDs that report anything for it). This is how SMART is supposed to work but so often doesn't.

(Like most places, our ability to find useful SMART attributes or clusters of them that predict drive failure is limited partly because we don't have many drive failures. This limits the data we have available and how much value there is in spending a lot of time analyzing it.)

SMARTCanPredictForSSDs written at 23:13:04

2021-01-10

Thinking through why you shouldn't use plaintext passwords in authentication, even inside TLS

I recently read an article (this one, via) that advocated for just using plaintext passwords inside TLS for things like IMAP and (authenticated) SMTP. My gut reaction was that this was a terrible idea in general, but I couldn't immediately come up with a solid reason why and why other alternatives are better for authentication. So here's an attempt.

There are (at least) two problems with passwords in general. The first problem is that people reuse passwords from one place to another, so knowing someone's password on site A often gives an attacker a big lead on breaking into their accounts elsewhere (or for other services on the site, if it has multiple ones). The second problem is that if you can obtain someone's password from a site through read-only access, you can usually leverage this to be able to log in as them and thus change anything they have access to (or even see things you couldn't before).

The consequence of this is that sending a password in plaintext over an encrypted connection has about the worst risk profile among the various plausible means of authentication. This is because both ends will see the password and the server side has to directly know it in some form (salted and hashed, hopefully). Our history is full of incidents where the client, the server, or both wind up logging passwords by accident (for example, as part of logging the full conversation for debugging) or exposing them temporarily in some way, and generally the authentication information the server has to store can be directly brute forced to recover those passwords, which can turn a small information disclosure into a password breach.

So what are your options? In descending order of how ideal they are, I can think of three:

  • Have someone else do the user authentication for you, and only validate their answers through solidly secure means like public key cryptography. If you can get this right, you outsource all of the hassles of dealing with authentication in the real world to someone else, which is often a major win for everyone.

    (On the other hand, this gives third parties some control over your users, so you may want to have a backup plan.)

  • Use keypairs as SSH does. This requires the user (or their software) to hold their key and hopefully encrypt it locally, but the great advantage is that the server doesn't hold anything that can be used to recover the 'password' and a reusable challenge never goes across the network, so getting a copy of the authentication conversation does an attacker no good (there's a small sketch of this pattern after this list).

    (If an attacker can use a public key to recover the secret key, everyone has some bad problems.)

  • Use some sort of challenge-response system that doesn't expose the password in plaintext or provide a reusable challenge, and that allows the server side to store the password in a form that can't be readily attacked with things like off the shelf rainbow tables. You're still vulnerable to a dedicated attacker who reads out the server's stored authentication information and then builds a custom setup to brute force it, but at least you don't fall easily.

    (OPAQUE may be what you'd want for this, assuming you were willing to implement an IETF draft. But I haven't looked at it in detail.)
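
To make the keypair option concrete, here's a minimal sketch of the pattern using Ed25519 through the Python 'cryptography' package (my choice of library here is an assumption; real protocols such as SSH do rather more than this):

    # Sketch of SSH-style keypair authentication: the server stores only a
    # public key, sends a fresh random challenge, and the client proves it
    # holds the private key by signing that challenge.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Client, once: generate a keypair and give the server the public half.
    client_key = Ed25519PrivateKey.generate()
    server_stored_pubkey = client_key.public_key()

    # Server: issue a random, never-reused challenge.
    challenge = os.urandom(32)

    # Client: sign the challenge; no password or reusable secret crosses the wire.
    signature = client_key.sign(challenge)

    # Server: verify; InvalidSignature is raised if the client lacks the key.
    try:
        server_stored_pubkey.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")

The server never holds anything that can be turned back into the client's secret, and a captured transcript is useless to an attacker, which is exactly the property we want.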

As far as the practical answers for IMAP, authenticated SMTP, and so on go, I have no idea. For various reasons I haven't looked at the alternative authentication methods that IMAP supports, and as far as websites go, to do anything other than plaintext passwords that get passed to you over HTTPS, you'd have to implement some security-sensitive custom stuff (which has been done by people who had a big enough problem).

PlaintextPasswordDanger written at 23:31:23

2021-01-07

I got to experience the march of storage technology today

Today, I noticed something when I was in the office and mentioned it on Twitter:

I was going to ask why I have a Micropolis 4743NS hard drive sitting around on my desk at work, but then I did a Google search on the model number and uh I think I may have answered my own question: IndyDown

(It was actually on a side table, but you know how it goes with tweets.)

The Micropolis 4743NS is a 4.3 GB 'narrow' 50-pin SCSI 3.5" hard drive from the mid to late 1990s (with a spindle speed of somewhere around 5400 RPM or so based on Internet searches (definitely not 7200 RPM)). The particular one from my office likely came from my SGI Indy that ran from 1996 through 2006, although I don't think the Indy started out with the Micropolis.

Another perfectly ordinary storage related thing I did today (which I didn't say anything about on Twitter) was that I grabbed and used a 16 GB USB 3.0 flash drive for the OpenBSD 6.8 installer image, because I needed one to install a couple of new machines and I didn't feel like burning a DVD (and we're running short on blank DVDs). The USB flash drive is a remarkably small thing, so small that I actually had some trouble fitting a label on it. Using a 16 GB flash drive for this purpose is massive overkill; the 6.8 USB install image is only 493 MB. But we didn't have any smaller flash drives sitting around that I trusted.

(We label all USB flash drives with what's on them for obvious reasons. Label tape and label makers are great things and every workplace should have a decent one.)

The USB flash drive is roughly four times the capacity of the Micropolis HD and much faster (both in the underlying flash storage and in USB 3.0). It also costs peanuts, so much so that the minimum size in this model has now moved up to 32 GB. I don't know what the Micropolis HD cost around 1997 or so when we likely bought it, but it definitely wouldn't have been small.

On the other hand, the USB flash drive would probably not last the roughly decade of 24/7 usage with no failures or bad sectors that the Micropolis managed (although that wasn't at a high activity level; it was a workstation's HD). An actual SSD would probably do better, but I don't know if it would manage nearly a decade (although I can certainly hope).

(One reason I grabbed a 16 GB USB flash drive is that it's one of our more recent ones, and I have a reflex of not really trusting USB flash drives once they're clearly non-current. I could have tried a much older 4 GB USB 2.0 flash drive, but I didn't feel like taking the chance.)

Although narrow SCSI had a good run as a physical interface, it was already relatively obsolete by 2006. I might be able to find some way to connect the Micropolis up to my desktop and spin it up to see if it's readable, but it likely wouldn't be easy (if I have a PC SCSI card saved somewhere, it's certainly not PCIe). The USB flash drive is USB-A, which has had a long run in some form; although USB-C is sort of challenging it now, there are so many USB-A things out there that I suspect I'll be able to plug it in for two or three decades to come (if it's still working).

(PC internal drive physical interfaces have not been so long lived; IDE gave way to SATA, which may now be giving way to NVMe for many desktop machines. And my first PCs used SCSI drives, but that's another story.)

StorageMarchesOnLived written at 23:29:44

2021-01-05

TLS Certificate Authority root certificate expiry dates are not how trust in them is managed

One of the reasons I've seen people put forward for respecting the nominal expiry dates of CA root certificates (and not having ones that are too long) is that this is a way to deal with organizational change in Certificate Authorities, and more generally with potential trust issues that come up over time. This idea feels natural, because limiting certificate lifetimes is certainly how we deal with a host of trust issues around end TLS certificates (and why very limited certificate lifetimes are a good idea). The only practical way to deal with a mis-issued or compromised end TLS certificate is having it expire, so we need that to happen before too long.

(Even if certificate revocation worked, which it doesn't, you would be relying on someone to detect that the certificate was bad and then notify people, even if it was embarrassing, a lot of work, expensive, or just a hassle. Heartbleed and some other famous TLS issues have convincingly demonstrated that a lot of people won't.)

However, continued trust in Certificate Authorities is not managed through certificate expiry in this way, and it can't be. The lifetimes of CA root certificates are (and have to be) far too long for that, and a compromised CA or root certificate is capable of doing far too much damage in even a short amount of time. Instead, browser and operating system trust in CAs is managed through ongoing audits, supervision, and these days also Certificate Transparency. The specific root certificates in root trust stores are in one sense simply a mechanical manifestation of that trust. You do not trust a CA because you have its root certificate; you have its root certificates because you trust the CA. If trust starts to be lost in a CA, its certificates go away from root trust stores or get limited.

(This hands-on, high-work approach to trust is only viable for CAs because there aren't very many of them.)

As a more or less direct corollary of this, I think it's fairly unlikely that Mozilla, Chrome, Apple, or Microsoft would refuse to add a new root certificate for a Certificate Authority that had one or more expiring but valid and accepted current root certificates. If they don't trust the CA any more, they would either remove it entirely or stop trusting certificates issued after (or expiring after) a certain date, as was done in 2016 with WoSign. Leaving in the current root certificate with full trust until it expires while refusing to add a new root certificate would be the worst of both worlds, especially since not all software even pays attention to the expiry date of root certificates.

With that said, concerns about the viable lifetime of public key algorithms and cryptographic hashes are reasonably good reasons to not let root certificates live forever. But in practice, if RSA or SHA-2 or whatever starts looking weak, people would probably again not rely on the expiry times of root certificates to deal with it; they would start making more fine-grained and explicit policies about what certificate chains would be trusted under what circumstances. It's very likely that these would effectively sunset current root certificates much faster than their actual expiry times (even for ones that expire earlier than the 2040s).

(There should be no worries about key compromise for root certificates because all root certificates should have keys held only in hardware security modules. This is probably even mandated somewhere in the audit requirements for CAs.)

PS: These days the requirement for regular, contiguous audits means that a CA has to be actively alive and willing to spend money in order to remain trusted. Inactive or effectively defunct CAs will be automatically weeded out within a year or so, even if no one otherwise notices that they've stopped being active. I also suspect that people monitor Certificate Transparency logs to see what CAs aren't issuing certificates and flag them for greater attention.

TLSRootCertificatesAndDatesII written at 00:48:30

2021-01-03

TLS Certificate Authority root certificates and their dates

One response to the idea that Certificate Authority root certificates in your system's trust store should have their dates ignored (which is done in practice by some versions of Android and which you might put forward as a philosophical argument) is to ask why CA root certificates even have dates in that case. Or to put it another way, if CA root certificates have dates, shouldn't we respect them?

I think there are two levels of answers about why CA root certificates have dates. The first level is that 'not before' and 'not after' times are required in TLS certificates, and there is no special exemption for CA root certificates (this has been required since at least RFC 3280). As a practical matter, using well-formed and not too badly out of range times is probably required because doing otherwise risks having certificate parsing libraries reject your root certificate.

The second level is that more or less from the start of SSL I believe there was at least a social expectation that Certificate Authority root certificates would have 'reasonable' expiry dates. People could have generated root certificates that expired in 2099 (or 2199), but they didn't; instead they picked much closer expiry times. Some of this was probably driven by general cryptographic principles of limited lifetimes being good. Back in the early days of SSL (and it was SSL then), this didn't seem as dangerous as it might to us today because it was often a lot easier to get new root certificates into browser and operating system trust stores.

(Some of it may have been driven by fears of running into the Unix year 2038 problem if you had certificate expiry times that were past that point. Some modern CA root certificates carefully come very close to but not past this time, such as this Comodo root certificate, which has an end time that is slightly over three hours before the critical Unix timestamp. On the other hand, Amazon has CA 2, CA 3, and CA 4 that expire in 2040. And HongKong Post Root CA 3 expires in 2042.)
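
If you want to look at these end times yourself, they're easy to read out of any certificate. A quick sketch with the Python 'cryptography' package (the file name is just whatever root certificate you have on hand):

    # Sketch: print the (required) notBefore/notAfter dates of a certificate.
    from cryptography import x509

    with open("some-root-certificate.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print("subject:   ", cert.subject.rfc4514_string())
    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)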

Given that root certificates have to have dates and even early ones likely faced various pressure to not make their end date too far into the future, the end dates of root certificates don't necessarily reflect the point at which one should end trust in a root certificate. Since there isn't general agreement about this, there probably wouldn't be enough support to introduce a special far off date to signal 'trust forever' and then explicitly treat all other dates as real cutoff dates.

(Mozilla may have a policy on the maximum validity of root certificates they'll include, but if so I can't find it. Their current report of CA root certificates in Firefox (see also) seems to have nothing after 2046.)

TLSRootCertificatesAndDates written at 23:20:15

2020-12-25

The expiry time of Certificate Authority root certificates can be nominal (or not)

The recent news from Let's Encrypt is Extending Android Device Compatibility for Let's Encrypt Certificates, which covers how Let's Encrypt is going to keep their certificates working on old Android devices for a few more years. The core problem facing Let's Encrypt was that old Android devices don't have Let's Encrypt's own root certificate, so to trust LE issued certificates they rely on a cross-signed intermediate certificate that chains to IdenTrust's 'DST Root CA X3' certificate (cf). The problem is that both this cross-signed certificate and DST Root X3 itself expire at the end of September 2021.

DST Root CA X3 expires in 2021 mostly because it was generated at the end of September 2000, and people likely thought that 20 years ought to be long enough for any root certificate (the CA world was a different place in 2000). The LE cross-signed intermediate certificate expires at the same time because you don't issue TLS certificates that expire after the certificate they're signed by. Well, normally you don't. The workaround Let's Encrypt came up with is to generate and have cross-signed a new version of their intermediate certificate that is valid for three years, which is past the expiry time of DST Root CA X3 itself.

(Multiple versions of a single certificate can exist because a certificate is only really identified by its keypair and X.509 Subject Name.)
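
A sketch of what 'the same certificate' means here: two versions match if they have the same X.509 Subject and the same public key, even though their signatures and validity dates differ. This uses the Python 'cryptography' package and hypothetical file names:

    # Sketch: do two certificate files count as versions of the same
    # certificate, ie same Subject and same public key?
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def load(path):
        with open(path, "rb") as f:
            return x509.load_pem_x509_certificate(f.read())

    def spki(cert):
        return cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

    a = load("intermediate-v1.pem")
    b = load("intermediate-v2-cross-signed.pem")
    print("same subject:", a.subject == b.subject)
    print("same key:    ", spki(a) == spki(b))

That sameness is what lets the new, longer-lived cross-signed intermediate slot into existing chains in place of the old one, even though it expires after the root that signed it.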

You might wonder how this works. The answer is that Android in particular and software in general often treat root certificates rather specially. In particular, the validity dates for root certificates are sometimes essentially advisory, with it being enough for the certificate to be in the root 'trust store'. This treatment of root certificates isn't necessarily universal (and it's certainly not standardized), so it's possible for some software in some environments to care about the expiry time of a root certificate, and other environments to not care.

(For instance, as far as I can tell the standard Go TLS certificate verification does care about the validity times of root certificates.)

There is a philosophical argument that once you've made the decision to put a CA root certificate in the trust store, you shouldn't declare it invalid just because a date has passed. In this view, validity ranges are for certificates that can be replaced by the websites supplying them, which root certificates can't be. There's another argument that you should limit CA root certificate lifetimes for the same reason that you limit the lifetimes of regular certificates; things change over time and what was safe at one point is no longer so. Perhaps in another decade there will be general agreement over how software should behave here (and all software will have been updated).

(In practice, I believe that people making long-lived pieces of hardware and software that have to use TLS should demand and turn on an option to not enforce root CA lifetimes. People always stop making software updates after a while, and that includes updates to the list of trusted CA root certificates. But how to deal with TLS and general cryptography on systems that have to live without updates for 20 years or longer is something we haven't figured out yet.)

CertificateAuthorityRootExpiryMaybe written at 23:44:50

2020-12-09

A probable benefit to enabling screen blanking on LCD displays

A little while ago, I wrote about remotely turning on console blanking on a Linux machine, and in it I wondered if blanking LCD displays really mattered. The traditional reasons to blank your screen were a combination of CRT burn-in worries and the major power savings to be had from powering down a CRT display; LCDs mostly don't have burn-in, and they have low power usage. But today I realized that there is still a benefit to blanking out your idle LCD displays, or more exactly to getting them into power saving mode.

Standard LCD panels are backlit from a light source; originally this was one or more CCFLs (cold cathode fluorescent lamps), but these days most displays use white LEDs as the backlight. This backlight is always on if the panel is powered up and displaying any lit pixels at all, and I believe that it's still left on even if the panel is displaying all black (so that when some bit of the display switches to non-black, you don't have to wait for the backlight to come up). When a LCD panel switches to power savings mode, one of the things it does is turn off the backlight.

Turning off the backlight saves some power, which is nice, but it also lengthens the backlight's lifetime. Both CCFL and LED backlights eventually dim or fail entirely, and how long this takes depends partly on how long they're on for. This means that powering down the backlight can lengthen its lifetime, especially for panels that won't be in use for a significant amount of time.

(It's probably better for a backlight to stay on than to be toggled off and then back on frequently, but these days many office LCD panels are probably spending weeks or months in power saving mode. Mine certainly are.)

With that said, I don't know how long the lifetime is for typical LCD backlights, and thus how much this matters. Apparently it's common for LCD panels to be rated for 50,000 hours of operation before they fall from full brightness to half brightness, but this can be extended if you don't run your panel at full brightness to start with. If your (work) LCD display already spends half its time off (between evenings, nights, and weekends), those 50,000 hours are already over ten years.

(And these days many people's home LCD displays are seeing many more hours of usage per week than they were before, while work displays are often getting a nice long break.)
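
The arithmetic behind 'over ten years' is simple enough to check; a quick back of the envelope calculation, where the usage patterns are just examples:

    # Back of the envelope: years to use up a 50,000 hour backlight rating
    # at various amounts of 'backlight on' time per day.
    RATING_HOURS = 50_000
    for label, hours_per_day in [("on 24/7", 24.0),
                                 ("on half the time", 12.0),
                                 ("40-hour work week", 40.0 / 7.0)]:
        years = RATING_HOURS / (hours_per_day * 365.25)
        print(f"{label:18s} -> about {years:.1f} years")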

LCDBlankingBacklightWin written at 23:46:17

2020-12-04

Some thoughts about low power loads and power supply efficiency

I was recently reading another guide to power supply units (PSUs), which repeated the common advice that you shouldn't over-size your PC's power supply because its efficiency drops significantly at low loads. The general guideline I've read is that a PSU needs to operate at around half load for optimal efficiency, and it's common for a power supply's efficiency to plummet once the load falls below 15% or 10% of its rated capacity. This time around, reading all of that made me suddenly twitch.

Both my home machine and my office machine have 550 watt PSUs, and they often draw only 40 to 60 watts. My home machine has to work hard to get about 150 watts of power draw, which is under 30% load. This means that I am definitely on the comparatively not great portion of the PSU loading curve.

(How bad it is is an interesting question. Both machines have the same PSU, certified as an '80 Plus Gold' efficient unit. Via Wikipedia, I discovered that certification reports are online here. This contains a test down to an input of 66 watts (giving 56 watts of output), at 84.6% efficiency. This seems not all that terrible to me.)

Does this matter? On the large scale of things, I think probably not. Almost all of the reported power draw from these machines gets turned into heat in the end (some of it turns into noise and air motion from fans). A more or less efficient PSU only changes where the heat is generated, and it may be better to generate the heat in the PSU if the PSU is better ventilated.

On the small scale of things, I would rather generate less heat in total while getting the same amount of work done as fast as before (at least during the summer). But at the same time, low power PSUs are apparently not a very popular market segment, so there aren't many options that are highly efficient (especially if you don't want to spend an arm and a leg). It's possible that my current PSU, oversized as it is, is still my most efficient option for machines that idle around 50 watts of power draw.

The corollary to this low load issue is that this means my power consumption numbers are not quite measuring what it looks like. When I measure an idle power draw of 66 watts for my work machine, that's the PSU's input power; it's actually outputting 56 watts to the motherboard, the CPU, and other components. If I changed the PSU to a higher efficiency one, the input power draw would drop although the power consumption of the components hasn't changed (and similarly for a lower efficiency PSU).
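
To put numbers on this, a quick back of the envelope sketch: the same roughly 56 watt component (DC) load drawn through PSUs of different efficiencies produces different figures at the wall. The efficiency points other than 84.6% are just illustrative:

    # Back of the envelope: wall ('input') power for a fixed 56 W component
    # load at various PSU efficiencies.
    DC_LOAD_W = 56
    for eff in (0.80, 0.846, 0.90, 0.95):
        print(f"{eff:.1%} efficient PSU -> {DC_LOAD_W / eff:.1f} W at the wall")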

The measurement is fair and accurate in the sense that it's measuring the power usage of the entire system, including the PSU. But it means that I mostly can't make confident declarations about the relative power usage of various components of my machines, because the measured power draws are affected by both differences between PSUs and differences in PSU efficiency at various load levels.

(Since my home and office machines have the same PSU, one of these factors is eliminated in that comparison. But if I compare these machines to my previous set of numbers, the PSU is definitely a factor. My 2011 machines used the PSU supplied with the case, an Antec Neo Eco 620C, which was likely only 80 Plus Bronze and 82.7% efficient at a 10% load draw of 77 watts or so.)

PSULoadEfficiencyEffects written at 00:28:51
