> I'm not talking about any imaginary research breakthroughs. I'm talking about today, right now.
So was I, and I explicitly said so. I said that today we don’t have the large-scale societal changes that people have conventionally associated with the term AGI. I also explicitly said that I don’t believe o3 will change this, and your comments suggest you don’t either (you seem to prefer to emphasize that it isn’t literally impossible that o3 will make these transformative changes).
> If however you're "this model passed this benchmark I thought would indicate AGI but I don't think it's going to be able to do this or that so it's not AGI" then I'm sorry but that's just nonsense.
The entire point of the original chess example was to show that repudiating an incorrect belief in a naive litmus test of AGI-ness is in fact the correct reaction. If we did what you are arguing, should we accept that AGI arrived once chess was beaten, because a lot of people believed that was the litmus test? Or should we praise people who stuck to their original beliefs after they were proven wrong, instead of correcting them? That’s why I said it was silly at the outset.
> My thoughts or bets are irrelevant here
No, they show that you don’t actually believe we have society-transforming AGI today (or will when o3 is released), yet you get upset when someone points that out.
> I'm just not interested in engaging "I think It won't" arguments when I can just wait and see.
A lot of life is about making decisions based on predictions about the future, including consequential decisions about societal investment, personal career choices, etc. For many things there isn’t a “wait and see” option: you are making implicit or explicit decisions even by maintaining the status quo. People who make bad or unsubstantiated arguments create a toxic environment in which those decisions are made, leading to personal and public harm. The most important example is the decision to dramatically increase energy usage to accommodate AI models, despite an impending climate catastrophe, on the blind faith that AI will somehow fix it all (which, by the way, is far from the “wait and see” approach you are supposedly advocating; it is an active decision).
> My bet ? There's no way i would make a bet like that without playing with the model first. Why would I ? Why Would you ?
You can hold beliefs based on limited information; people do this all the time. And if you actually revealed that belief, it would demonstrate that you don’t currently believe o3 is likely to be world-transforming.