Hi, I'm having trouble explaining the accuracy and inherent limitations of my ML models to my team members and my employer.
People in software companies tend to assume that because machines perform logical and mathematical operations perfectly, anything that runs on them inherits that property, including AI.
Currently, no matter how hard I try to improve my model (generating more data, applying different augmentations, changing the model architecture, etc.), they keep hunting for individual inputs that "prove" the model isn't generalized enough.
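One thing that has helped me in these conversations is framing model quality statistically: reporting an estimated generalization error from cross-validation, rather than debating individual failure cases. Here is a minimal sketch of that framing using scikit-learn on synthetic data (the dataset and classifier are placeholders, not my actual model):

```python
# Minimal sketch: estimate generalization error statistically instead of
# arguing over individual failing inputs. Dataset and model are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

model = RandomForestClassifier(random_state=42)

# 5-fold cross-validation: each fold is scored on data the model never saw
# during fitting, giving an estimate of out-of-sample accuracy.
scores = cross_val_score(model, X, y, cv=5)
print(f"Estimated generalization accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# The point for stakeholders: a single misclassified input within the
# expected error band is normal statistical behavior, not proof the model
# is broken.
```

Framed this way, a hand-picked counterexample is just one sample from the error rate I already reported, not a refutation of the model.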