FWIW, Gary Marcus wrote an entire book[1] that, from what I can tell, purports to do that. I have not read the book (yet) so I can't personally attest to how well it meets that goal, but at least be aware that it exists.
[1]: https://www.amazon.com/Rebooting-AI-Building-Artificial-Inte...
In any case, it was clear that he doesn't care at all about the "Open" part of OpenAI, and that he sees a future where everybody uses OpenAI services rather than having their own powerful private models. I'd say our interests are very much in opposition.
But the worst, to me, by far, is making OpenAI a for-profit organisation that does not release its models. More than that, he ordered the killing of official support for other projects such as OpenAI Gym. The impact of this change does not matter: it is dishonest and possibly even illegal. He is surfing the hype just as the crypto guys did.
I do not think Gary Marcus has anything interesting to say about current AI, and by that, I mean anything that's not a cheap gotcha, restatement of an obvious fact, or something entirely disingenuous.
This is a problem I have with a lot of people in the AI safety and criticism space. There is a lot to criticize LLMs and AI for: real, concerning things that can cause real-world harm with the systems we have __now__. But attacking him, or any of the AI doomer/X-risk stuff, just muddies the waters and makes those real conversations nearly impossible to have. It primes people the wrong way. I'm not a conspiracy theorist, but I'm not surprised that some people think these two groups are on the same side. It just seems like a lot of attention seeking, and we can fight the hype without creating a different kind of hype...
He and his orbiters have done real and irreparable harm to the Internet while profiting enormously from unlicensed copyrighted works (aka piracy).
"Open" "AI" is pure hypocrisy, an embodiment of the two parallel sets of rules: one for the commons and one for the elite.
Sama deserves more than a little pushback on and scrutiny of his claims.
He and his orbiters have done real and irreparable harm to the ecosystem by unthinkingly pushing such compute-intensive systems.
Accepting deals with people who have no shame about pulling such tricks is to accept that they will do it again, on a much bigger scale.
AI-as-meme has been around long enough now that it's being force-interpreted into two camps: "it's all a scam; at best it only knows how to reproduce exact training data" and "its glory shall have us working 0 hours by 2030". This article won't shed light on the reality in between.
E.g., let's walk through the opening:
1) "How do you convince the world your ideas and business might ultimately be worth $7 trillion dollars" -- he's referring to an unsourced rumor, which never made any sense, that Mr. Altman approached the Saudis for $7 trillion to build GPUs. Even the nonsensical gossip-rag rumor explicitly has nothing to do with OpenAI; part of the "intrigue" was the perfidy of Sam doing it separately from OpenAI.
2) "Sam Altman is on a tour to raise money and raise valuations...at some of top universities in the world", figuratively, maybe, but not actually - they just raised a round in December and this isn't a company that needs to do PR at universities to catch investor attention.
3) "A few days ago at Stanford, Sam promised that AGI will be worth it, no matter how much it costs" -- actual quote: "I don't care if we burn $50 billion a year, we're building AGI and it's going to be worth it" -- that's not a "promise", nor "no matter how much it costs". Yes, $50B is functionally 'no matter how much', so I'd extend Gary charity of interpretation there too - except, as long as we're doing that, why are we over-reading Sam?
I'm flagging this, not because I give a hoot about sama, but because this kind of crap is posted only to spawn endless discussion.
Get to work. Focus. Whatever the hell Sam is can wait for another day.
We have ignored the impact of what we do for far, far too long. Forums like this facilitate this false dichotomy. Society IS tech. To believe otherwise is stupid.
But I have found a mantra: if our conversation is in any way getting into whether somebody is good or bad? I am no longer a positive influence on society.
This failed my test. It has nothing to do with Sam Altman. If you want to talk about whether person X is good or bad, go somewhere the hell else. I have actual work to do.
Sam A's job is to sell OpenAI. It's not interesting that he talks about how great it is!
There's nothing substantive in this article. The author expresses a seething hatred for OpenAI's CEO by repeatedly lampooning ("where's the evidence!?") brief quotes, without context, of Altman's opinions/predictions. I hope that the author escapes whatever rut made him so overwhelmingly bitter and resentful. I've been there, and it truly stinks. :/
- Precipitated a new class of computationally intense and expensive systems at a time when we desperately need to be focused on sustainability and reducing power demands/increasing efficiency.
- Devalued human labor without being high enough quality to truly replace it.
- Grossly violated an unspoken social contract of the internet by abusing the commons, leading to many people locking down their content.
- Flooded the internet with unverifiable noise that looks credible, making it harder to find high quality information and enabling scams and laziness.
At best this productivity tool is of neutral value to society. If you think it's a net good, I question your judgment.
- If something is not replacing labor, it will not devalue it. In this case, GPT-4 is replacing some human labor, which does devalue those specific kinds of labor. But it is also unlocking new potential, which in the history of technology has always been a net positive in the end.
- I wasn't aware of this unspoken social contract before OpenAI existed, but maybe I'm just ignorant.
- This is true, but this is also a trend that has been headed downward for a long time thanks to Google/SEO. The signal-to-noise ratio has indeed gone down due to GPT-powered blog spam, but honestly we needed to get our act together before GenAI anyway. This is actually lighting a fire under people's butts to find ways to avoid the AdSense/affiliate marketing fueled drivel.
> Precipitated a new class of computationally intense and expensive systems at a time when we desperately need to be focused on sustainability and reducing power demands/increasing efficiency.
Microsoft is building over 10 GW of firmed renewables, which will partly power its AI datacenters. It is not all greenwashing and cynicism. As far as emissions go, GPT should not be put in the same basket as truly wasteful sectors like beef or crypto mining. Everything takes energy, including productivity tools like GPT. The focus should be on sustainable growth and scaling firmed renewables, which is the only politically realistic way out of the climate crisis. Degrowth can't work, either at a political level or at a company level, given the competitive capitalist system companies exist in. I find this[3] a good discussion of that topic.

> Flooded the internet with unverifiable noise that looks credible

I agree with you on this. It's not all rosy, but let's not let social media off the hook. LLMs have not made long-form journalism worse, for example. The problem is the interaction of LLMs with the incentives created by SEO algorithms and social media.[1]

[1] https://globalnews.ca/news/10463535/ontario-family-doctor-ar...
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5478797/
[3] https://podcasts.apple.com/ca/podcast/is-green-growth-possib...
I'm not saying it's not cool tech, or that LLMs at their current level have no impact, but "the world is so much better" is quite a stretch (unless you are an Nvidia shareholder).
Not to mention the adverse effect of job losses in industries such as customer service, which almost exclusively hurt people from poorer countries and lower socioeconomic levels.
I guess it’s good at generating plausible blog spam and helping children with homework. I’ve used it to bootstrap my own writing. It’s not entirely useless but hardly world changing.
I think the biggest commercial use right now is Klarna using it for basic level-1 support? I don't know the details, but it sounds like a good result from RAG over a fairly constrained corpus. So, again, nice, but completely unaligned with the massive valuations in that space right now.
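For readers unfamiliar with the term: RAG just means retrieving the most relevant passage from a fixed corpus and stuffing it into the prompt. A minimal sketch below, with an entirely hypothetical support corpus and a toy bag-of-words retriever standing in for the embedding model and LLM call a real deployment would use:

```python
# Minimal RAG sketch over a small, constrained support corpus.
# The corpus, names, and bag-of-words retrieval are illustrative
# assumptions, not anyone's actual system.
from collections import Counter
import math

CORPUS = {
    "refunds": "Refunds are issued within 14 days of a return request.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "payments": "We accept card payments and invoicing at checkout.",
}

def _vec(text):
    # Toy stand-in for a text embedding: raw word counts.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, corpus=CORPUS):
    """Return the key of the document most similar to the question."""
    q = _vec(question)
    return max(corpus, key=lambda k: _cosine(q, _vec(corpus[k])))

def build_prompt(question):
    # Prepending the retrieved passage constrains the model to answer
    # from the corpus rather than from its parametric memory.
    doc = CORPUS[retrieve(question)]
    return f"Answer using only this context:\n{doc}\n\nQuestion: {question}"
```

The point of the comment above is exactly this: when the corpus is small and on-topic, retrieval is the easy part, which is why a good level-1 support bot is achievable without any AGI-grade capability.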
Introducing: GPT-ns4w
I am not sure that is a reason to bet on them making any breakthroughs needed for AGI.
I agree that OpenAI has provided no evidence they are anywhere close to AGI (and I don't think LLMs are sufficient), but I also think OpenAI should get a ton of credit for being the first party to make LLMs actually useful.
The more interesting question, I think, is: is OpenAI actually a good business? Can they generate the resources they need to meet their goals and keep control, without selling to big tech companies that will derail their plans? Do they have enough of a moat, and can they benefit from network effects to make their products more valuable over time without getting copied? They realised they need a lot more capital than they initially thought. Time will tell if Microsoft is able to take over; see what recently happened with Inflection AI: "After raising $1.3B, Inflection is eaten alive by its biggest investor, Microsoft" https://techcrunch.com/2024/03/19/after-raising-1-3b-inflect...
This tells you everything you need to know about the author. Anyone that has solved difficult problems knows that evidence of being close to a solution is not a thing. In fact, by the time you're close, proving you're close is _harder_ than finishing the solution.
Easy to criticize, much harder to offer effective solutions.
Because both sides don't have conclusive arguments?
Garbage article for clicks to pay for his lifestyle, now that he's grifted his way into being an "AI Expert" paid to pontificate with no skin in the game.
Maybe so, but is that ALL humans do? If not, then what about how to do the rest?
Also GPT-4 is still leaps and bounds more capable than the competition. Anyone pushing large-language models to their limits today can easily attest to that. Claude 3 Opus comes close, but is significantly more expensive and much harder to do function calling with.