Agreed, Friston's bona fides are impressive. (Aside: his fame in neuroscience comes from his having written important fMRI software that everybody cites.)
That's also why I worked with his team and read a lot of his papers for a while.
His principal idea was originally that neurons perform free energy minimisation. This idea makes a lot of sense once you understand what free energy means. But, to the best of my knowledge, it has not at all been empirically verified for neurons (I'd be delighted to be proven wrong on this). So he went the route of generalising the free energy principle: "the free energy principle asserts that any 'thing' that attains a nonequilibrium steady state can be construed as performing an elemental sort of Bayesian inference".
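For readers wondering what "free energy" means here: a minimal toy sketch (all numbers invented, two hidden states, one observation; this is just the standard variational free energy, a.k.a. negative ELBO, not Friston's neuronal formulation). The point it illustrates is why "free energy minimisation" gets equated with Bayesian inference: the q that minimises F is exactly the Bayesian posterior, and the minimum of F is the negative log evidence.

```python
import math

# Toy generative model (all numbers invented for illustration):
# hidden state s in {0, 1}, observation o in {0, 1}
p_s = [0.7, 0.3]                      # prior p(s)
p_o_given_s = [[0.9, 0.1],            # p(o | s=0)
               [0.2, 0.8]]            # p(o | s=1)

def free_energy(q1, o):
    """Variational free energy F = E_q[log q(s) - log p(o, s)]
    for a Bernoulli posterior with q(s=1) = q1."""
    F = 0.0
    for s, q in enumerate([1 - q1, q1]):
        if q > 0:  # 0 * log 0 = 0 by convention
            joint = p_o_given_s[s][o] * p_s[s]
            F += q * (math.log(q) - math.log(joint))
    return F

# Minimise F over q1 by brute-force grid search for observation o = 1
o = 1
best_q1 = min((i / 1000 for i in range(1001)),
              key=lambda q1: free_energy(q1, o))

# The minimiser recovers the exact Bayesian posterior p(s=1 | o=1),
# and min F equals -log p(o) (the negative log evidence)
evidence = sum(p_o_given_s[s][o] * p_s[s] for s in range(2))
posterior = p_o_given_s[1][o] * p_s[1] / evidence
print(best_q1, posterior)  # both ≈ 0.774
```

So in this narrow, well-defined setting the slogan is exactly true. The dispute is over how much is left of it once "thing", "inference", etc. are generalised to stones and birds.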
The terms "can be construed" and "elemental sort of Bayesian inference" do a lot of work here. Updating and generalising one's research hypothesis is legitimate (albeit one could be more explicit about it), but it weakens the claim being made. Anyway, under a charitable interpretation of those terms, I agree that this is true, but at the same time it doesn't say much. Indeed, under the charitable interpretation it basically equates doing free energy minimisation with existence. Friston has lately said that the FEP is not falsifiable.
Take it from the horse's mouth (i.e. a Verses employee): "the free energy principle just applies to stones, it applies to birds, it applies to any kinds of animals" on Machine Learning Street Talk [1].
Here is my current position: one cannot derive scalable ML algorithms from a mathematical principle this general!
> he seems to have joined only in 2022, they were 4 years old at the time.
The company founders have a cryptocurrency and (later) metaverse background.
[1] https://www.youtube.com/watch?v=qxEfcrmTWO4