This is the most common issue I find, even with the latest models. For ordinary logic it's not too bad; the real risk is when they start duplicating classes or other abstractions, because those tend to proliferate and cause a mess.
I don't know if it's the training, the RL, or something intrinsic to the attention mechanism, but these models "prefer" generating new code over searching for and integrating existing, reusable code, unless the functionality is significant or they are explicitly prompted otherwise.
I think this is why AGENTS.md files are getting so critical -- by becoming standing instructions, they help override the natural tendencies of the model.
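To make this concrete, here's the kind of standing instruction I mean -- a minimal, illustrative AGENTS.md sketch (the wording is mine, not from any standard):

```markdown
# AGENTS.md

## Before writing new code
- Search the repo for existing implementations (grep for related
  class and function names) before adding a new class, helper, or
  abstraction.
- If similar functionality already exists, extend or reuse it;
  only introduce a new abstraction when nothing fits.
- If duplication is truly unavoidable, leave a comment pointing at
  the existing code it overlaps with.
```

Because the file is read on every run, the instruction applies even when the user's prompt says nothing about reuse.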