For the ATARI, the "real world" is the present frame plus a small, fixed action set (joystick directions and buttons). This world is, of course, the game pre-programmed into the ALE ROM.
You can take any action and get the next frame, but you can't "undo" an action, and you can't restart a game from a chosen state (see the Go-Explore controversy). Nor can you branch: you can't try several different actions from the same interesting frame.
So if you learn a network that predicts the next frame, you can enter the world of model-based learning: do a simulated move-tree rollout (i.e., without calling the ATARI emulator), try a gazillion moves, and only then select an action and fetch the next real sample.
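The loop above can be sketched in a few lines. This is a toy illustration, not anyone's actual implementation: `toy_model` stands in for a learned next-frame predictor, `toy_value` for a learned value function, and the integer "state" and the action names are invented for the example.

```python
# Minimal sketch of model-based action selection: roll out move sequences
# inside a (toy) learned model, never touching the real emulator, then
# commit to only the best FIRST action. All names here are illustrative.
from itertools import product

ACTIONS = ["NOOP", "LEFT", "RIGHT", "FIRE"]  # assumption: a tiny action set

def toy_model(state, action):
    # Stand-in for a learned next-state predictor; state is just an int here.
    return state + {"NOOP": 0, "LEFT": -1, "RIGHT": +1, "FIRE": +2}[action]

def toy_value(state):
    # Stand-in for a learned value function: pretend the "goal" is state 10.
    return -abs(state - 10)

def plan(state, depth=3):
    """Try every action sequence of length `depth` in the model (the
    'gazillion moves' step) and return the best first action."""
    best_action, best_value = None, float("-inf")
    for seq in product(ACTIONS, repeat=depth):
        s = state
        for a in seq:              # simulated moves: model calls only
            s = toy_model(s, a)
        if toy_value(s) > best_value:
            best_value, best_action = toy_value(s), seq[0]
    return best_action

print(plan(0))  # only AFTER planning would we send this one action to ALE
```

A real system (e.g. MuZero-style planning) replaces the exhaustive sweep with MCTS, but the shape is the same: many cheap model calls, one expensive environment call.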
In a formally defined synthetic domain such as chess or logic programming, it is not clear whether this helps. We are simply trading one kind of CPU time (calling the environment) for another (running our own learned, imprecise model of the environment).
Of course, DM has a chess function that encodes the rules of the next move. It can return a LOSS if you try an illegal move. But this function is NOT called during the tree rollout.
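That design choice can be made concrete with a small sketch. The rule set, move names, and function names below are all hypothetical; the point is only the separation of roles: the environment knows the real rules (and punishes an illegal move with LOSS), while the planner only ever queries the learned model, which has no notion of legality.

```python
# Illustrative sketch: the rules function exists, but planning never calls it.
LEGAL = {"e4", "d4", "Nf3"}  # pretend rule set for the current position

def env_step(move):
    # The "real" environment: encodes the rules, returns LOSS on illegal input.
    return "LOSS" if move not in LEGAL else "OK"

def learned_model_step(move):
    # The learned model: happily predicts a next state for ANY move.
    return f"predicted-position-after-{move}"

# Tree rollout: only the model is consulted, never env_step,
# even for moves that the rules would reject.
rollout = [learned_model_step(m) for m in ["e4", "Qxf7"]]
print(rollout)

# The rules only bite when the chosen move is actually played:
print(env_step("Qxf7"))
```

So an illegal move can survive deep into the simulated tree; the model is trusted to have learned (approximately) not to propose such moves in the first place.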