An LLM takes in a slice of data from the world and, by its nature, has to organize that slice in some way. How it does so depends on how it is trained, and the method of organizing the data is hard-coded into the model. Every model will therefore develop some sort of style, no matter what, because somebody, or a team of people, had to decide how to portion out a selection of data, and that selection problem is intractable: there is no neutral way to make the choice.
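To make the "portioning out" concrete, here is a minimal sketch of how a training-data mixture is typically specified: a hand-picked set of sampling weights over corpora. The corpus names and weights below are hypothetical, but the key point holds regardless: the weights are a human choice, and that choice is one of the ways a model's eventual style gets baked in.

```python
import random

# Hypothetical corpus mixture. The weights are a design decision made
# by people, not a property of the world; a different team would pick
# different weights, and the resulting model would read differently.
MIXTURE = {
    "web_crawl": 0.60,
    "books": 0.25,
    "code": 0.10,
    "forums": 0.05,
}

def sample_sources(n, seed=0):
    """Draw n training-document sources according to the mixture weights."""
    rng = random.Random(seed)
    sources = list(MIXTURE)
    weights = [MIXTURE[s] for s in sources]
    return rng.choices(sources, weights=weights, k=n)

# Tally 10,000 draws: the counts track the hand-chosen weights,
# so the mixture decision directly shapes what the model sees.
counts = {}
for src in sample_sources(10_000):
    counts[src] = counts.get(src, 0) + 1
```

Whatever the weights, some slice of the world is over-represented and some slice is under-represented; the style question is settled before training begins.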