The secure boot problem
In theory, UEFI secure booting has the straightforward goal of stopping boot time malware, malware that compromises your machine before Windows boots and thus before any of its protections can kick in (such malware already exists, although it's not very common). In practice, secure boot requires that all privileged code your machine ever runs be signed. Your bootloader must be signed, your operating system (Windows or otherwise) must be signed, your hardware drivers must be signed. Wait, what? How did 'prevent boot time malware' turn into 'only run signed code'?
The core problem with all secure boot schemes, and with this general goal of blocking boot time malware, is that the OS has no way to be sure that it was booted securely. There is no way for Windows (or any other OS) to reliably detect that it was booted in a compromised environment or by something other than the official boot system, and then throw up a big warning to that effect when it starts. If an attacker has control of the machine, they can construct a fake boot environment that lies to the target OS and says 'honest, you were booted securely, everything is fine, I am not boot time malware'. At that point it is game over.
(I'm not convinced that you can get around this in practice even with hardware support.)
This means that secure boot can never allow the attacker to gain control of the machine through any path, even a long one. A bootloader allows control of the machine, so you can only run approved, presumed secure bootloaders. An operating system allows control of the machine, so bootloaders can only run approved operating systems. Kernel level drivers allow control of the machine (if you abuse them), so operating systems can only allow approved drivers. Direct hardware access to some hardware allows you to take control of the machine (for example by programming DMA to overwrite bits of the OS), so operating systems can only allow that access to approved programs. And so on. Thus we wind up with secure boot requiring that all privileged code be signed, all the way down the line from the bootloader to graphics drivers.
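The chain described above can be sketched in code. This is purely illustrative: `verify_signature`, `APPROVED_KEYS`, and the stage records are invented names, not real UEFI interfaces (real firmware does this in C against signature databases stored in UEFI variables). The point is the structure: every stage must be approved before it runs, because every stage can take control of whatever runs after it.

```python
# Hypothetical sketch of the secure boot chain of trust; all names invented.

APPROVED_KEYS = {"platform-key"}  # keys the firmware trusts (assumption)

def verify_signature(stage, keys):
    # Stand-in for real cryptographic signature verification.
    return stage.get("signed_by") in keys

def boot(stages):
    # Each link in the chain is checked before anything runs, because any
    # one unapproved stage could take over the machine for all later ones.
    for stage in stages:
        if not verify_signature(stage, APPROVED_KEYS):
            raise RuntimeError("refusing to run unsigned " + stage["name"])
    return [s["name"] for s in stages]

chain = [
    {"name": "bootloader", "signed_by": "platform-key"},
    {"name": "kernel", "signed_by": "platform-key"},
    {"name": "graphics driver", "signed_by": "platform-key"},
]
print(boot(chain))
```

Note that the check is all-or-nothing: replacing any one stage with something signed by an unapproved key aborts the whole boot, which is exactly the 'no opening anywhere in the chain' property.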
Any opening in this chain of trust allows an attacker to slip in, intercept the process, take over the machine, and boot the target OS in a malware infected environment. If they can slip in early enough, you're unlikely to notice that your machine takes a few seconds longer to boot than before, because it is actually booting a carefully configured minimal install of some OS that is willing to run an unsigned driver, with the 'driver' then taking over the machine to start your real OS in a compromised environment.
Of course, the corollary of this is that signing things is not really good enough to keep attackers out. It would only be good enough if the signed things had no vulnerabilities that attackers could exploit, but of course they are going to have vulnerabilities and they're going to get compromised. In theory, signing things allows them to be de-approved after the fact when they're found to be vulnerable; in practice, well, there are all sorts of potentially explosive issues.
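The de-approval mechanism can be sketched too: verification consults a revocation list as well as the approved keys, which is roughly analogous to UEFI's dbx database of forbidden signatures. The names here (`is_allowed`, `REVOKED_HASHES`) are invented for illustration.

```python
# Hypothetical sketch of after-the-fact de-approval via a revocation list.
import hashlib

REVOKED_HASHES = set()  # stand-in for something like UEFI's dbx database

def image_hash(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def is_allowed(image: bytes, signed: bool) -> bool:
    # Being signed is necessary but not sufficient: a signed image that
    # turns out to be vulnerable can be revoked by hash later.
    return signed and image_hash(image) not in REVOKED_HASHES

vulnerable_bootloader = b"old bootloader with an exploitable bug"
assert is_allowed(vulnerable_bootloader, signed=True)

# The vulnerability is discovered, so the image is de-approved:
REVOKED_HASHES.add(image_hash(vulnerable_bootloader))
assert not is_allowed(vulnerable_bootloader, signed=True)
```

The 'potentially explosive' part is what this sketch leaves out: in the real world, pushing a revocation can stop machines that legitimately depend on the now-revoked image from booting at all.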
PS: how this interacts with virtualization makes my brain hurt. In theory I think that all virtualization systems (whether or not they require special hardware privileges) are part of the trust chain and so have to be signed. I have no idea how you enforce that.
Sidebar: the theoretical way around this with hardware support
What you need is a piece of hardware that cannot be faked, can be irreversibly disabled by the system, and is essential to boot your OS. The obvious implementation is to have a crypto processor with preloaded keys that is used to decrypt some portion of your OS. At the point where the boot system transitions out of secure booting, it tells the crypto processor to flush the preloaded keys; if your OS is booted after that point, it will be unable to decrypt portions of itself and won't run. Malware is presumed to not have the keys, so it cannot reload them into the crypto processor.
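This scheme can be modelled with a toy class. Everything here is invented for illustration (`CryptoProcessor`, `flush_keys`), and XOR stands in for real cryptography; the point is only the one-way transition from 'can decrypt the OS' to 'cannot, irreversibly'.

```python
# Toy model of the flushable-key crypto processor; names and the XOR
# "cipher" are placeholders, not a real design.

class CryptoProcessor:
    def __init__(self, key: bytes):
        # Preloaded at manufacture; malware is presumed not to know it.
        self._key = key

    def decrypt(self, ciphertext: bytes) -> bytes:
        if self._key is None:
            raise RuntimeError("keys flushed: cannot decrypt the OS")
        return bytes(c ^ k for c, k in zip(ciphertext, self._key))

    def flush_keys(self):
        # Irreversible: once the boot system transitions out of secure
        # booting, nothing (including malware) can reload the keys.
        self._key = None

key = b"0123456789abcdef"
plaintext = b"OS core section!"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

cpu = CryptoProcessor(key)
assert cpu.decrypt(ciphertext) == plaintext  # secure boot path works

cpu.flush_keys()  # boot system leaves secure booting
# Any OS booted after this point can no longer decrypt itself.
```

A call to `decrypt` after `flush_keys` raises, which is the model's version of 'the OS will be unable to decrypt portions of itself and won't run'.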
(I'm engaging in a certain amount of handwaving here about how the keys would work.)
One pragmatic difficulty with this is the question of what prevents the malware from simply providing the necessary decrypted material directly. Almost all of your OS's code and data is not system dependent and cannot be tied to a particular machine, so the malware can simply carry around a generic copy of the decrypted, live version of anything that normally comes encrypted. My instinct is that it's hard to have system dependent material that is genuinely crucial and cannot be quietly substituted or patched.