GPUs (I’m told) have far fewer instructions to emulate than a CPU, so I’d think that low level emulation of the Flipper shaders would be no trouble. Can’t translate or transpile them to PC GPUs though because those instruction sets are somewhat secret, I think.
I know nothing about this stuff but I am a developer so perhaps I know enough to ask the most stupid questions possible.
It’s gotta be a performance thing, why they didn’t emulate Flipper at a low enough level to use the precompiled shaders directly.
The GPU ISAs are known (e.g. the PTX compiler for Nvidia is open source and has a backend in LLVM). The main problem is that the GPU ISA changes with every GPU hardware generation and manufacturer, so if you want to support Nvidia 3xxx + 4xxx + AMD VLIW + AMD GCN + ... you have to use the common denominator GLSL/HLSL/SPIR-V/whatever.
> why they didn’t emulate Flipper at a low enough level to use the precompiled shaders directly.
They did. Originally the GPU emulator was done in the CPU, and in 2017, the GPU emulator itself was moved into a shader ("ubershader").
The console game itself does not include shaders in text format like many PC games do.
PTX is only an IR afaik, kinda like SPIR-V. It also goes through another compiler in the driver, so it doesn't really help here.
(Why "shaders" in quotes? Because they weren't shaders as we know them today but really more like lists of hardware flags for how to flow data through a fixed function pipeline)
I mean technically you can, but it generally requires a bunch of inefficient jump tables, or alternatively a way to fall back to an interpreter or JIT for self modifying code.
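A toy version of that dispatch scheme, in Python (all names are made up for illustration): translated blocks are looked up by guest address, and anything not translated, or invalidated by a self-modifying write, falls back to a slow interpreter.

```python
# Sketch of the jump-table idea with an interpreter fallback.

translated = {}   # guest_pc -> compiled handler
dirty = set()     # addresses invalidated by self-modifying writes

def interpret(pc, state):
    # slow path: decode and execute one guest instruction
    state["acc"] += 1
    return pc + 1

def execute(pc, state):
    handler = translated.get(pc)
    if handler is None or pc in dirty:
        # fall back to the interpreter (a real emulator might
        # re-translate the block here instead)
        translated.pop(pc, None)
        dirty.discard(pc)
        return interpret(pc, state)
    return handler(state)

def compiled_block(state):
    # stands in for a "translated" block at guest address 0x100
    state["acc"] += 10
    return 0x101

translated[0x100] = compiled_block

state = {"acc": 0}
pc = execute(0x100, state)   # fast path through the table
pc = execute(pc, state)      # 0x101 not translated -> interpreter
print(state["acc"], hex(pc))
```

The inefficiency is in that indirect lookup on every block boundary, which is exactly what you don't want on a GPU, where divergent indirect control flow is expensive.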