>If Asimov's laws don't work, that doesn't mean we can ignore the idea of them and... Just do nothing.
I don't think anyone is suggesting we do nothing because Asimov's laws won't work, so much as suggesting that people consider why they wouldn't work, and what that means for the problem of AI alignment in the real world.
It may simply be inevitable that an AI (if we're defining AI as something like an LLM) can always be talked into or out of anything, given the right prompts. In that case, constraints can only go so far, and we need to consider what happens when they inevitably fail.