I don't feel like I'm contradicting myself there. Yes, the scenario of pranks would be in scope for the overall system or framework, sure. But pointing it out as a leak / flaw of the Apple and Google proposal strikes me as counterproductive, since a) it can easily be tackled in those other parts of the framework, and b) we don't even have a specific, single framework to talk about on that particular matter, so it makes little sense to spread FUD about it.
> But it very much sounds like this is also a very important part of the protocol then.
Honestly, that might be arguing semantics. The protocol as published suggests restrictions that are beneficial to the end user's privacy, sure. It otherwise does not dictate any particular government, country, or region where the keys are supposed to go in case of a positive test result, or how they should be verified / handled. That, in my mind, again falls into the category of the overall framework that we do not have. What we have is a manual system that is ineffective and hard to scale. What this adds is a privacy-aware method that tackles a tiny part of a digital supplement to that manual system.
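To make the "privacy-aware method" part concrete, here's a deliberately simplified sketch of the rotating-identifier idea from the proposal. It is NOT the published spec (which uses HKDF and AES to derive Rolling Proximity Identifiers from Temporary Exposure Keys); the key sizes, interval count, and HMAC construction here are illustrative assumptions. The point it shows: devices only broadcast short-lived derived identifiers, and on a positive test only the daily keys get published, letting everyone else check for matches locally.

```python
import hashlib
import hmac
import os

# Simplified illustration (not the actual EN spec): a device keeps a secret
# per-day key and broadcasts ephemeral identifiers derived from it. On a
# positive test, only the daily keys are shared; other devices re-derive
# the identifiers locally and compare against what they overheard.

def daily_key() -> bytes:
    """Random per-day tracing key, kept on the device."""
    return os.urandom(16)

def rolling_identifier(key: bytes, interval: int) -> bytes:
    """Ephemeral identifier broadcast during one time interval."""
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

def match(published_keys, observed_ids, intervals=range(144)):
    """Re-derive identifiers from published keys; intersect with local observations."""
    derived = {rolling_identifier(k, i) for k in published_keys for i in intervals}
    return derived & set(observed_ids)

# Device A tests positive and publishes its daily key.
key_a = daily_key()
seen_by_b = {rolling_identifier(key_a, 42)}   # B overheard A at interval 42
assert match([key_a], seen_by_b)              # B learns of the contact
assert not match([daily_key()], seen_by_b)    # unrelated keys don't match
```

Note how the question of *where* the published keys go and *who* validates the positive test sits entirely outside this sketch, which is exactly the scope point above.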
That's why I'm so insistent on the in scope / out of scope distinction; sorry if that comes across as harsh, but I don't find it particularly productive to construct hypothetical overall threat models based on this very limited technical proposal. Scenarios such as malicious distribution of tests are much better examined in the context of a full framework proposal. I can come up with dozens of threat models that include unrelated things; that doesn't mean it's particularly responsible to share them, imho. We're the technical audience that can grasp this, and pointing out potential shortcomings is fine, but they should be grounded in reality.