> Propose to share a proportional number of correctly predicted examples for every incorrect example they come up with?
This is what I have been asked to do the whole time. And yes, like you said, it's not very efficient. It would just make them think of the incorrect results as something I need to "improve".
Other team members shrugged it off and simply added those incorrect examples as labels for the next iteration of the model, which did the job and received high praise. But in my opinion, that just seems like poor AI work ethics.
> The whole thing seems like a communication/political problem, not a technical one, and it's hard to give advice when we don't know the specifics.
Thanks. After thinking about it for a whole day, I guess the correct action is to improve my communication skills and present the pros and cons of my approach better. I've just found that it is very hard to explain machine learning terms without proper visualization.