My take is that proving authenticity might not be something we can do with any real accuracy in a general sense. So if that is infeasible, then we need _some_ kind of mitigation. Something like CAI allows us to make an assessment about how much trust to give an information source, probably taking into account multiple factors (known exploits in the source device, reputation of the originator, and what claims are attached in the metadata). This might allow me to accept that a given video originated from a local TV station rather than a TikToker's edit, but I still need to assess whether that station used genAI or has been compromised or whatever else. That seems a much narrower reputational problem, though, and one that will also be contextual.
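To make the "multiple factors" idea concrete, here's a minimal sketch of what combining those signals into a rough trust score could look like. Everything here is made up for illustration (the function name, the weights, the factors chosen), not an actual CAI mechanism:

```python
# Hypothetical sketch: folding a few provenance signals into one rough
# trust score in [0, 1]. Names and weights are illustrative only.

def trust_score(device_has_known_exploits: bool,
                originator_reputation: float,   # 0.0 (unknown) .. 1.0 (well established)
                metadata_claims_verified: bool) -> float:
    """Combine a few provenance factors into a score between 0 and 1."""
    score = originator_reputation
    if metadata_claims_verified:
        # Verified CAI-style claims raise confidence, but don't prove truth:
        # the station could still have used genAI on a "verified" clip.
        score = min(1.0, score + 0.3)
    if device_has_known_exploits:
        # A compromised capture device undermines every downstream claim.
        score *= 0.5
    return score

# A reputable station with verified claims vs. an anonymous re-upload:
station = trust_score(False, 0.8, True)
reupload = trust_score(False, 0.1, False)
print(station, reupload)
```

The point of the sketch is that the output is a graded, contextual judgment, not a yes/no authenticity verdict, which matches the "narrower reputational problem" framing.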