The marketers at a company that buys advertising do not get their bonuses because of effective, accurate ad measurement. They get their bonuses because their advertising "worked". How do they know it worked? By hiring a third party who uses statistics to estimate an almost entirely unobservable thing: how many widgets were bought that would not have been bought in the absence of advertising.
Well, guess what: there are many vendors offering these "measurement" services, and when you debut a provably more accurate measurement methodology than your prior one but its results come in lower than the less accurate one's? You lose business to companies that give better (i.e., higher) results.
This is easily solvable: ad measurement should be accountable to the CFO and not the CMO. The CFO cares if advertising wastes money; the CMO gets their bonus if the advertising "worked".
A few years ago, Freakonomics Radio [1, 2] had a very good 2-part overview of recent ad research done by the University of Chicago [3] and other academics. Their research, which is divorced from the misaligned incentives I cited above, found that ad effectiveness is vastly overstated. At that year's advertising research industry conferences, where some of the academics were invited to speak, they were pilloried by the research heads at some of the biggest players in the ad research industry.
It's all super fucked.
[1] https://freakonomics.com/podcast/does-advertising-actually-w...
[2] https://freakonomics.com/podcast/does-advertising-actually-w...
[3] https://www.chicagobooth.edu/review/why-power-tv-advertising...
I'll add a little more: the vast majority of statistical tools used in ad measurement are very basic things that have zero hope of extracting causal relationships between ad exposure and behavior. Even many of the more advanced tools struggle with the fact that advertising is usually targeted in one way or another, thereby creating a bias in your test group.
Add to that the fact that advertisers commonly time their ad spending to coincide with peak seasonality or big sales, thereby creating all kinds of endogeneity problems for a would-be statistical researcher. Third, consider the walled gardens: so much advertising happens inside them that it is nearly impossible to get a full view of whether your control group is truly unexposed to advertising. Fourth, there are vast difficulties in linking online behavior to brick-and-mortar purchase outcomes. Finally, consider that the advertisements and the advertised products themselves change quite frequently: the results of one ad study are not necessarily applicable to future behavior.
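To make the targeting-bias point concrete, here's a toy simulation (my own sketch, not from any of the research above; all numbers are made-up assumptions). Consumers vary in their baseline propensity to buy, the "ad platform" preferentially shows ads to likely buyers, and a naive exposed-vs-unexposed comparison then reports a lift several times larger than the true causal effect:

```python
import random

random.seed(0)

TRUE_LIFT = 0.02   # assumed true causal effect of seeing an ad
N = 100_000

exposed_buys = exposed_n = 0
control_buys = control_n = 0

for _ in range(N):
    # Each consumer has a baseline propensity to buy anyway.
    propensity = random.uniform(0.0, 0.2)
    # Targeting: exposure probability rises with propensity,
    # so the exposed group is full of likely buyers to begin with.
    sees_ad = random.random() < propensity * 4
    buy_prob = propensity + (TRUE_LIFT if sees_ad else 0.0)
    bought = random.random() < buy_prob
    if sees_ad:
        exposed_n += 1
        exposed_buys += bought
    else:
        control_n += 1
        control_buys += bought

# Naive "measurement": compare purchase rates across the two groups.
naive_lift = exposed_buys / exposed_n - control_buys / control_n
print(f"true lift:  {TRUE_LIFT:.3f}")
print(f"naive lift: {naive_lift:.3f}")  # several times the true lift
```

The naive estimate conflates "ads cause purchases" with "ads are shown to people who were going to purchase anyway" — exactly the confound that randomized holdouts exist to break, and that the seasonality and walled-garden problems above make so hard to break in practice.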
As I said: beyond very basic notions, we really have no idea what truly works and what doesn't. Some advertising very likely works; some very likely doesn't.