GPT-4 may not be as smart as people, but it performs at a human level on many kinds of tests: AP exams, medical certification tests, bar exams, the SAT, the GRE, etc. You can see this in the technical report and the "Sparks of AGI" paper. I guess the author either thinks those tests are bad, didn't read about those results, thinks those papers lied about them, or is being willfully misleading; I don't know which.
I don't know the author (maybe he's famous, maybe not), but it looks like he may be promoting a competitor to GPTs: "I’m proposing a new machine learning meta-architecture for learning forward models. The architecture is called Predictive Vision Model (PVM)."
Any exam you can study for is a bad test of a model's ability to reason through novel situations, so AP tests aren't a great measure for this purpose.
Oh come on, how can a vision model be a competitor to a language model? That's a very tenuous leap of logic. I'll stop short of calling it motivated reasoning, but it's certainly biased reasoning bordering on rationalization.
1. It appears that scaling these models will give us accuracies high enough to solve the problem.
That's just my impression from seeing ChatGPT and Meta's Segment Anything.