No language model
plays a song either, in the narrow sense; it just sends a representation of the song to some other program (or human) that might play it.
Mere knowledge seems irrelevant only because we don't (yet) have a mechanism to pry open someone's brain and inspect how songs are copied between its different parts. Otherwise, mechanistically, aside from one running on silicon and the other on wetware, they're doing pretty much the same thing.