== Hardware Security Modules are just boxes running opaque and probably flawed software

The news of the time interval is that [[researchers discovered a remote unauthenticated attack giving full, persistent control over an HSM https://cryptosense.com/blog/how-ledger-hacked-an-hsm/]] ([[via https://inks.tedunangst.com/]]). When I read the details, I was not at all surprised to find that one critical issue was that the internal HSM code implementing the PKCS#11 commands had exploitable buffer overflows, because sooner or later everyone seems to have code problems with PKCS#11 (I believe it's been a source of issues for Unix TLS/SSL libraries, especially OpenSSL).

(The flaws have apparently since been fixed in a firmware update by the HSM vendor, which sounds good until you remember that some people deliberately destroy the ability to apply firmware updates to their HSMs, precisely so that they cannot be compelled to apply an update that introduces a back door.)

There is perhaps a tendency to think that HSMs and hardware security keys are magic: invariably secure and flawless. As this example and [[the Infineon RSA key generation issue ../sysadmin/KeyGenerationAndHSMs]] demonstrate quite vividly, ~~HSMs are just things running opaque proprietary software that is almost certainly not as good or as well probed as open source code~~. Proprietary software development is not magic, any more than open source development is, but open source code has the advantage that it's much easier to inspect, fuzz, and so on, and if a project is popular, there are probably a number of people doing exactly that. The number of people who will ever apply that level of scrutiny to your average HSM is much lower, just as it is for most proprietary software.

This doesn't mean that HSMs are useless, especially as hardware security tokens for authenticating people (where, under most circumstances, they serve as proof of something that you have). But I have come to put much less trust in them and to look much more critically at their use. For server-side situations under many threat models, I increasingly think that you might be better off building a carefully secured and sealed Unix machine of your own, using well-checked open source components.

(Real HSMs are hopefully better secured against hardware tampering than any build-it-yourself option, but how much you care about this depends on your threat model. An entirely encrypted system that is not on the network and must have a boot password supplied when it powers on goes a long way. Talk to it over a serial port using a limited protocol, and write all of the software in a memory-safe language using popular and reasonably audited cryptography libraries, or audited tools that work at as high a level as you can get away with.)
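As a concrete illustration of what 'a limited protocol over a serial port' could look like, here is a minimal sketch in Go using only the standard library's Ed25519 support. The device path, the one-message-per-line protocol, and the key handling are all assumptions made for the example rather than a design I'm endorsing, and the serial line is assumed to have been configured ahead of time (with stty or similar) so the program can treat it as an ordinary file.

  // A minimal "serial signer" sketch: it reads one base64-encoded message
  // per line from an already-configured serial device and writes back one
  // base64-encoded Ed25519 signature per line.
  package main

  import (
      "bufio"
      "crypto/ed25519"
      "crypto/rand"
      "encoding/base64"
      "fmt"
      "log"
      "os"
  )

  func main() {
      // A real build would load a long-lived key from the machine's
      // encrypted disk; generating one at startup keeps the sketch
      // self-contained.
      _, priv, err := ed25519.GenerateKey(rand.Reader)
      if err != nil {
          log.Fatal(err)
      }

      // Assumed device path; the serial line is presumed to be set up
      // beforehand, so we can treat it as a plain file.
      port, err := os.OpenFile("/dev/ttyS0", os.O_RDWR, 0)
      if err != nil {
          log.Fatal(err)
      }
      defer port.Close()

      scanner := bufio.NewScanner(port)
      for scanner.Scan() {
          // The protocol accepts exactly one thing: a base64 message to
          // sign. Anything else gets a one-line error. Keeping the parser
          // this small is the point of a limited protocol.
          msg, err := base64.StdEncoding.DecodeString(scanner.Text())
          if err != nil {
              fmt.Fprintln(port, "ERR bad request")
              continue
          }
          sig := ed25519.Sign(priv, msg)
          fmt.Fprintln(port, base64.StdEncoding.EncodeToString(sig))
      }
      if err := scanner.Err(); err != nil {
          log.Fatal(err)
      }
  }

The appeal of something this small is that the entire reachable attack surface is a base64 decoder and one signing call, which is a lot easier to audit than a full PKCS#11 implementation.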
PS: The one flaw in the build-your-own approach in a commercial setting is that often security is not really what you care most about. Instead, what you may well care most about is that [[it's not your fault if something goes wrong WhyPeopleGoCommercial]]. If you buy a well-regarded HSM and then a year later some researchers go to a lot of work and find a security flaw in it, that is not your fault. If you build your own and it gets hacked, that is your fault. Buying the HSM is much safer from a blame perspective than rolling your own, even if the actual security may be worse.

(This is a potential motivation even in non-commercial settings, although the dynamics are a bit different. Sometimes what you really care most about is being able to clearly demonstrate due diligence.)