Also, I did this around 3-4 years ago. It works, but once you have it set up it's basically the same as having two computers on your desk with a KVM switch in software. It also has a tendency to be unstable as all sin, and some IOMMU-isolated hardware may misbehave when assigned to a virtual machine.
It's much simpler to just have a second PC/laptop, or to dual-boot (though that's less secure).
A possibly viable option is to hot-swap your drives and use a machine whose firmware you can sign yourself and verify on boot.
The guest also gets near-native access to the CPU (that part is hardware virtualization, VT-x/AMD-V, rather than the IOMMU), although the CPU does have to be shared between the host and guests.
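If you want the host and guest to stop fighting over the same cores, libvirt can pin vCPUs to specific host cores. A minimal sketch, assuming a libvirt domain named `win10-gaming` (hypothetical name, substitute your own) on an 8-core host:

```shell
# Hypothetical domain name; substitute your own VM.
# Pin guest vCPUs 0-3 to physical cores 4-7, leaving cores 0-3 for the host.
virsh vcpupin win10-gaming 0 4
virsh vcpupin win10-gaming 1 5
virsh vcpupin win10-gaming 2 6
virsh vcpupin win10-gaming 3 7
```

This is a setup recipe, not something to script blindly; the same pinning can be made permanent in the domain XML under `<cputune>`.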
There shouldn't be much risk in that if your main OS is encrypted and the keys are sealed by a TPM.
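On a systemd-based distro, sealing the LUKS key to the TPM can be done with `systemd-cryptenroll`. A sketch, assuming the encrypted partition is `/dev/nvme0n1p2` (an assumption, use your own device):

```shell
# Enroll a TPM2-sealed key for the encrypted partition (device path is a placeholder).
# --tpm2-pcrs=7 binds the key to the Secure Boot state, so the disk only
# unlocks automatically if the boot chain hasn't been tampered with.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Then let crypttab try the TPM at boot (format: name  device  keyfile  options):
# /etc/crypttab:
#   root  /dev/nvme0n1p2  -  tpm2-device=auto
```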
It's a hassle, mostly because you need to detach the GPU from the Linux host before passing it through, which means you need a second GPU to drive the Linux host (an integrated GPU is fine).
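Detaching the GPU usually means binding it to `vfio-pci` at boot so the host driver never claims it. A sketch of the usual steps; the PCI IDs below are hypothetical placeholders, use whatever `lspci` prints for your card:

```shell
# Find the vendor:device IDs of the GPU and its HDMI audio function.
lspci -nn | grep -iE 'vga|audio'

# Claim them with vfio-pci at boot (IDs are placeholders, not real values):
# /etc/modprobe.d/vfio.conf:
#   options vfio-pci ids=10de:1b81,10de:10f0

# Or via the kernel command line in /etc/default/grub:
#   GRUB_CMDLINE_LINUX="... vfio-pci.ids=10de:1b81,10de:10f0"
# then rebuild the initramfs and grub config for your distro
# (e.g. update-initramfs -u && update-grub on Debian/Ubuntu).
```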
Then there's a bunch of config around IOMMU groups and other shit to make sure the VM picks the card up cleanly. When it finally does, you get 90-95% of native performance on average FPS but only 60-70% on min-fps (the latency spikes are way worse).
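The usual first sanity check is dumping the IOMMU groups, since the GPU and anything sharing its group have to be passed through together. A small sysfs-only sketch; empty output means the IOMMU isn't enabled (BIOS setting plus `intel_iommu=on` or `amd_iommu=on` on the kernel command line):

```shell
# Print every IOMMU group and the PCI devices it contains.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue          # glob matched nothing: IOMMU is off
    group=${dev#/sys/kernel/iommu_groups/}
    group=${group%%/*}
    echo "IOMMU group $group: ${dev##*/}"
done
```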
Also, it helps to use a recent AMD card and the in-tree amdgpu driver instead of the out-of-tree nvidia driver.
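To confirm which driver actually claimed the card (amdgpu vs nvidia vs vfio-pci), you can read it straight out of sysfs. A sketch that lists every PCI device with its bound driver:

```shell
# List each PCI device and the kernel driver currently bound to it.
for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev" ] || continue
    if [ -L "$dev/driver" ]; then
        drv=$(readlink "$dev/driver")
        echo "${dev##*/} -> ${drv##*/}"
    else
        echo "${dev##*/} -> (no driver)"
    fi
done
```

`lspci -nnk` shows the same "Kernel driver in use" information if pciutils is installed.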
Overall, you trade software problems for hardware problems (a UEFI firmware update can break the setup), but once you get it working, it works great.