I completely agree with this, but it's exactly this dynamic that I think does more harm than good, at least in the current academic environment. Effectively it normalizes publishing a result that isn't strong enough to swamp the prior, but where the authors have some detailed situational argument for why a different prior should be used here. We already get every social science paper arguing that they should be allowed to use a one-tailed t-test rather than a two-tailed one because surely there's no possibility their intervention could do more harm than good, and you have to dig into the details of the paper to see why that's nonsense; letting authors pick their own prior multiplies that kind of thing many times over.
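To make the one-tailed point concrete: for the same test statistic, the one-tailed p-value is exactly half the two-tailed one, which can move a result across the conventional 0.05 threshold. A minimal sketch (using a hypothetical z-statistic and a normal approximation, purely for illustration):

```python
from statistics import NormalDist

# Hypothetical test statistic from some intervention study.
z = 1.8

# Two-tailed: probability of a statistic this extreme in EITHER direction.
two_tailed = 2 * (1 - NormalDist().cdf(z))
# One-tailed: only the favorable direction counts.
one_tailed = 1 - NormalDist().cdf(z)

print(round(two_tailed, 4))  # 0.0719 -- not significant at 0.05
print(round(one_tailed, 4))  # 0.0359 -- "significant" once the tail is chosen
```

Same data, same statistic; the only thing that changed is the researcher's assertion that harm is impossible, and that assertion alone buys a factor of two on the p-value.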