>they aren’t focused on appealing to those biases, but driven by them, in that the perception of language modeling...
So yes, in effect that is their point, except they find the scientists are compelled by what markets well rather than intentionally going after it... which is frankly even less flattering. As if the researchers who enabled this simply didn't know better than to be seduced by some underlying human bias into a local maximum.
We all have biases in how we judge intelligence, capability, and accuracy. Our biases color our trust and our ability to retain information. There's a wealth of research on this. We're all susceptible to these biases; being a researcher doesn't exempt you from being human.
Our biases influence how we measure things, which in turn influences how things behave. I don't see why you're so upset by that pretty obvious observation.
> Arguably, it is the other way around: they aren’t focused on appealing to those biases, but driven by them, in that the perception of language modeling as a road to real general reasoning is a manifestation of the same bias which makes language capacity seem magical
There's no charitable reading of this that doesn't give the researchers far too little credit, given the results of the direction they've chosen.
This has nothing to do with biases and emotion, and I'm not sure why some people need it to: modalities have progressed in the order of how easy their data is to wrangle: text => image => audio => video.
We've seen that training on more tokens improves performance, and we've seen that training on new modalities improves performance on the prior ones.
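To make the "more tokens" claim concrete (my gloss, citing the literature rather than the parent comment): the empirical fit from Hoffmann et al. (2022), the Chinchilla paper, models loss as

L(N, D) = E + A / N^α + B / D^β

where N is parameter count and D is training tokens; the fitted exponents (α ≈ 0.34, β ≈ 0.28) mean loss falls smoothly and predictably as either one scales up.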
It's needlessly dismissive to act like you have some mystical insight into a grave error these people are making, as if they're just trying to replicate human language out of folly, while ignoring the table stakes of their underlying work to begin with.