> One company holding the keys hurts personal choice for everyone.
I understand why you like to hate on Microsoft (they have a long track record of playing dirty), but the actual keys preloaded into hardware that ships with UEFI are ultimately the choice and responsibility of individual OEMs (Lenovo, HP, Dell, etc.), and some of them are directly accountable for major screw-ups in this area - while others ship systems preloaded with a free OS and go the extra mile to verify that you have the means to install your own. Microsoft could have given zero fucks about cooperating, but rather than leaving this an intractable problem between every individual OEM and every individual distro/OS, they chose to sign a shim, so that everyone can play with everyone. I do not dismiss this as a possible threat vector, but please consider the wider picture.
What I don't understand is why you're hyperfixating on hating Microsoft (which, 13 years in, still hasn't made an aggressive move in this area), while Intel[0] puts an entire dedicated core - with its own completely opaque and unauditable OS, its own network interface, and a long track record of security holes - into every CPU they've shipped in the last 15+ years, with no user choice or control over it whatsoever.
[0]: https://en.wikipedia.org/wiki/Intel_Management_Engine
> [...] because I need a different level of trust than you.
Do you trust your CPU vendor - Intel? AMD? Apple? Qualcomm? Broadcom? Any other piece of silicon (hint: PCIe) that has unrestricted R/W access to your entire RAM? (Or did you even check whether your system has an IOMMU, let alone who made it or how it's configured?)
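If you've genuinely never checked, here's a rough sketch for Linux (the sysfs path is the standard one, but an empty result only tells you the kernel isn't *using* an IOMMU - not whether the silicon has one):

```shell
# Sketch: check whether the running Linux kernel has an active IOMMU.
# Assumes sysfs is mounted at /sys. No entries here means DMA from any
# capable device (PCIe, Thunderbolt, ...) can touch all of your RAM.
if compgen -G "/sys/class/iommu/*" > /dev/null 2>&1; then
    echo "IOMMU active: $(ls /sys/class/iommu/)"
else
    echo "no active IOMMU: DMA is unrestricted"
fi
```

(How it's configured - passthrough vs. strict invalidation, per-device groups - is a whole separate rabbit hole under `/sys/kernel/iommu_groups/`.)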
I'm not dismissing the issue you're hyperfixated on, but the points you're raising are irrelevant in light of much more direct threats. You can't trust the software if you can't trust the hardware.
"Reflections on trusting trust" by Ken Thompson[1] is a 40yro classic, we are a looong way from that even if you dismiss hardware entirely and only consider trivial software-only supply chain attacks[2], and yet all you can see is the source code.
[1]: http://genius.cat-v.org/ken-thompson/texts/trusting-trust/
[2]: https://research.swtch.com/nih
My own need for trustability includes the need to keep trusting my laptop after I've left it unattended for one minute. Secure Boot & co. is currently the most practical way to even detect boot-chain tampering. The Evil Maid attack[3] was described 15 years ago - centuries in the black-hat world - and free software developers (yes, you and me) are among the most valuable targets, because of our work's potential far-reaching impact on the community.
[3]: https://en.wikipedia.org/wiki/Evil_maid_attack
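And checking whether your machine is actually enforcing any of this is trivial - a sketch that reads the `SecureBoot` EFI variable straight out of efivarfs (the GUID is the standard EFI global-variable GUID; assumes efivarfs is mounted where Linux normally puts it):

```shell
# Sketch: read Secure Boot state from efivarfs on Linux.
# Variable layout: 4 bytes of attributes, then 1 data byte (1 = enabled).
sb=/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c
if [ -r "$sb" ]; then
    val=$(od -An -t u1 -j 4 -N 1 "$sb" | tr -d ' ')
    if [ "$val" = "1" ]; then
        echo "Secure Boot: enabled"
    else
        echo "Secure Boot: disabled"
    fi
else
    echo "no SecureBoot variable (legacy BIOS boot, or efivarfs not mounted)"
fi
```

(`mokutil --sb-state` reports the same thing if you have shim's tooling installed.)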
If you develop software, and dismiss this class of problems, you become a liability to your users and/or employer - they can no longer trust you.
> And again you conflate software freedom with personal freedom.
I do not conflate them; I recognise software freedom as one aspect of personal freedom - but ultimately it is your own personal choice which freedoms you value most. The vast majority of people using FOSS are anything but interested in compiling their own bootloaders/kernels, because we don't do boot-chain development work; we want this part of the OS to be stupid, simple, reliable, and secure, so that we can be free to focus on our actual work.
The "stupid, simple, reliable, and secure" part is the very thing that's missing from the entire Linux ecosystem and why I'm usually a vocal opponent of everything-Poettering, choosing to run OpenBSD where I can - their FDE[4] is orders of magnitude simpler/easier to audit than the bloody mess that is UEFI-shim+GRUB+Linux+initrd+cryptsetup. Again, if you actually cared, you would be advocating for software that is easier to audit. Source code that you can't read/comprehend is no better than a binary blob.
[4]: https://www.openbsd.org/faq/faq14.html#softraidFDE; the entire disk decryption code fits directly into the bootloader, thus even the kernel is encrypted.
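For a sense of how small that surface is: the entire FDE setup, paraphrased from the FAQ (device names here are examples; run as root from the installer shell, and this is a config fragment, not something to paste blindly):

```shell
# Sketch (OpenBSD installer, as root): create a softraid(4) CRYPTO volume.
# sd0 is an example disk whose 'a' partition has been given type RAID.
bioctl -c C -l sd0a softraid0   # prompts for a passphrase, attaches a
                                # new pseudo-device (e.g. sd1) to install onto
```

That's it - the bootloader asks for the passphrase at boot and decrypts everything past it, kernel included. Compare that to auditing the interactions of shim, GRUB's crypto modules, the initrd generator, and cryptsetup.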