I'm curious what QA checks you use, particularly on projects tackled by a team, to avoid overstating the applicability of findings. Do you use the equivalent of a stage/gate process (or some other approach) to ensure the team stays on mission?
On a related topic: how do you keep track of compounding error in a layered analysis that involves many assumptions and several rounds of sample subsetting?
I'd love your thoughts on this for an article I'm writing. The topic is how organizations and individuals can apply objectivity and critical thinking THROUGHOUT a project to ensure their conclusions and recommendations are reliable.
Looking forward to your suggestions!