We've experienced this scenario a few times, where someone submitted AI-generated code that had significant bugs or mistakes.
- Telling them about the mistake is not as helpful as you think. It's the same as taking an exam, getting something wrong, and looking at the answer key immediately. You feel like you learned something, but it's not as strong, and you're more likely to make the same mistake in the future. Doing things by hand, although painful, is still a strong check on your understanding of the problem.
- It's very frustrating for the person reviewing the code. Reading someone else's code is not easy, especially when they can't articulate what they were aiming for, or there's a big discrepancy between what they think they wrote and what's actually written. In many cases, the reviewer is thinking "Why did I even bother with this? I should've just vibe-coded this myself. At least I know what should've been done. This isn't even worth it for mentoring purposes, because they didn't actually learn anything."
- Using threat- or punishment-based incentives creates a lot of bad vibes and a bad culture. People become less likely to talk about mistakes and spend more time thinking about how to hide them.
- Eventually, the consensus converges on the position that the best way to learn is to do things manually, and that it's better when junior folks don't rely on these tools from the beginning. It's important to explain to them why, or else it can seem hypocritical that more experienced people are allowed to use LLMs more freely.