Are they? The article only mentions "black box" a couple of times, and seems to use it in the sense of "we don't need to be concerned about what's inside".
In any case, while we know there's a transformer in the box, the operational behavior of a trained transformer is still somewhat opaque. We know the data flow, of course, and how to compute the next state from the current state, but understanding what is going on semantically - the subject of mechanistic interpretability - is still a work in progress.
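To make that distinction concrete, here's a toy sketch of one self-attention update in NumPy - random weights and made-up sizes, nothing from a real model. The point is that this mechanical step is completely specified, even though nothing in the code tells you what the resulting numbers mean.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # model width (toy size, not from any real model)
T = 4   # sequence length

x = rng.normal(size=(T, d))  # current hidden states, one row per token
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # random, untrained weights

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# One self-attention step: the "next state given current state" part
# that is fully understood at the mechanical level.
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)
scores = np.where(np.tril(np.ones((T, T))) == 1, scores, -np.inf)  # causal mask
attn = softmax(scores)          # each row: how much each token attends to earlier ones
x_next = x + attn @ v           # residual update: the next hidden state

print(x_next.shape)             # same shape as x; what the values *mean* is the hard part
```

Every operation here is transparent; the opacity only shows up when you ask what role a given weight or activation plays in the trained network's behavior.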