Specifically, MuZero uses MCTS, and MCTS needs at the very least a move generator in order to produce actions that can then be evaluated for their results. The trained MuZero model learns the transition function and the evaluation function, but I don't see in the paper where it learns which actions are legal in the domain. And I don't understand how any architecture could model the possible moves of a game without observing examples of external play (i.e. not self-play).
MuZero reuses the AlphaZero architecture, so most likely the legal moves of the pieces for Chess, Shogi, and Go are hard-coded into the environment, as they are in AlphaZero. There is probably some similar hard-coding of the Atari action set, which I'm probably missing in the paper.
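One concrete way this kind of hard-coding can interact with the learned policy is to mask illegal actions only where the real environment is available (the root of the search), and renormalize the network's policy over the remaining legal actions. Below is a minimal sketch of such root-level masking; the function name and the assumption that the environment exposes a legal-action list are mine, not from the paper:

```python
import numpy as np

def masked_root_policy(policy_logits, legal_actions, num_actions):
    """Renormalize a policy over only the legal actions at the root.

    policy_logits: raw network logits over the full action space.
    legal_actions: indices of actions the (hard-coded) rules allow here.
    Illegal actions receive probability exactly 0. (Hypothetical helper,
    not code from the MuZero paper.)
    """
    mask = np.full(num_actions, -np.inf)
    mask[legal_actions] = 0.0          # keep logits of legal actions
    logits = policy_logits + mask      # illegal logits become -inf
    # numerically stable softmax; exp(-inf) = 0 zeroes out illegal moves
    exp = np.exp(logits - logits[legal_actions].max())
    return exp / exp.sum()

# Example: 4 actions, only actions 0 and 2 are legal
probs = masked_root_policy(np.zeros(4), [0, 2], 4)
```

Deeper in the search tree no such mask is available, since the learned dynamics model can be queried with any action; the network simply has to learn to assign low probability to moves that never occur in the training data.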