They can't take an unknown image and classify it as real or a deepfake.
The technology being discussed here just takes a hash of the image at the moment of capture and uses a third-party service and standard cryptography to authenticate it.
You can be sure the image was taken with a company's app, because the hash was signed with their key at capture time. And you can be sure it hasn't changed since then, because the service has a record that a photo with that exact signature was once taken with the app.
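A minimal sketch of that capture-then-verify flow. The key, function names, and sample bytes are all made up, and a real deployment would use an asymmetric signature (so only the company can sign but anyone can verify); stdlib HMAC stands in here just to keep the example self-contained:

```python
import hashlib
import hmac

# Assumed stand-in for the company's signing key; in reality this
# would be the private half of an asymmetric key pair (e.g. Ed25519).
COMPANY_KEY = b"demo-secret-held-by-the-company"

def sign_at_capture(image_bytes: bytes) -> str:
    """Hash the image the moment it's taken, then sign the hash."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(COMPANY_KEY, digest, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, signature: str) -> bool:
    """Recompute the hash and check the signature still matches."""
    expected = sign_at_capture(image_bytes)
    return hmac.compare_digest(expected, signature)

photo = b"raw sensor data from the camera"  # placeholder image bytes
sig = sign_at_capture(photo)

assert verify(photo, sig)              # the untouched image verifies
assert not verify(photo + b"x", sig)   # any edit breaks verification
```

The point is that verification is binary and cryptographic: either the bytes match what was signed at capture, or they don't. No classifier, nothing to game.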
Now, let's be honest, that's almost certainly the most sensible way to counter fake imagery: if you want someone to believe a photo is real, prove who took it and when, using the same cryptography that secures your bank account.
However, the implication in all of these overly-hyped articles about the products is that it's some kind of "war of the robots" in which they're training software to spot edits and Hollywood-esque "inconsistencies".
That's really unhelpful for two reasons:

- It makes anyone even vaguely technical deeply distrust anything they say, because that process could never yield a reliable product: any ML classifier like this can be gamed, and the whole point here is absolute reliability.
- It implies the process can be applied to any photo, so you could identify a fake that just appeared somewhere on the internet. That's not the case here.
Here are the two companies, for reference: https://truepic.com https://www.serelay.com
edit: gus_masa made the same point rather more concisely