It's because our bandwidth and monkey brains are so slow that we're forced to operate at the level of semantics. We can't draw inferences directly from near-infinite amounts of data, the same way we can't play chess like Stockfish or do arithmetic like a calculator. The dualism lies precisely in the opposite view: that computation is somehow "substrate independent". Searle argues we can build AI that understands the way we do, it's just going to look more like an organic brain as a result.
The important insight from LLMs is that they're not like us at all, but that doesn't make them any less effective or intelligent. We do have plenty of understanding; we need it because we rely on a particular kind of reasoning. Artificial systems don't need to converge on that.