But it’s not scary. It’s… marvelous, cringey, uncomfortable, awe-inspiring. What’s scary is not what AI can currently do, but what we expect from it. Can it do math yet? Can it play chess? Can it write entire apps from scratch? Can it just do my entire job for me?
We’re moving toward a world where every job will be modeled, and you’ll either be an AI owner, a model architect, an agent/hardware engineer, a technician, or just... training data.
After an OpenAI launch, I think it's important to take one's feelings about the future impact of the technology with a HUGE grain of salt. OpenAI are masters of hype. They have been generating hype for years now, yet the real-world impacts remain modest so far.
Do you remember when they teased GPT-2 as "too dangerous" for public access? I do. Yet we now have Llama 3 in the wild, which even at the smaller 8B size is about as powerful as the [edit: 6/13/23] GPT-4 release.
As someone pointed out elsewhere in the comments, a logistic curve looks exponential in the beginning, before it approaches saturation. Yet, logistic curves are more common, especially in ML. I think it's interesting that GPT-4o doesn't show much of an improvement in "reasoning" strength.
It's glib to dismiss safety concerns because we haven't all turned into paperclips yet. LLMs and image gen models are having real effects now.
We're already at a point where AI can generate text and images that will fool a lot of people a lot of the time. For every college-educated young person smugly pointing out that they aren't fooled by an image with six-fingered hands, there are far more people who had marginal media literacy to begin with and are now almost defenceless against a tidal wave of hyper-scaleable deception.
We're already at a point where we're counselling elders to ignore late-night messages from people claiming to be a relative in need of an urgent wire transfer. What defences do we have when an LLM will be able to have a completely fluent, natural-sounding conversation in someone else's voice? I'm not confident that I'd be able to distinguish GPT-4o from a human speaker in the best of circumstances and I'm almost certain that I could be fooled if I'm hurried, distracted, sleep deprived or otherwise impaired.
Regardless of any future impacts on the labour market or any hypothesised X-risks, I think we should be very worried about the immediate risks to trust and social cohesion. An awful lot of people are turning into paranoid weirdos at the moment and I don't particularly blame them, but I can see things getting seriously ugly if we can't abate that trend.
Set a memorable verification phrase with your friends and loved ones. That way if you call them out of the blue or from some strange number (and they actually pick up for some reason) and you tell them you need $300 to get you out of trouble they can ask you to say the phrase and they'll know it's you if you respond appropriately.
I've already done that and I'm far less worried about AI fooling me or my family in a scam than I am about corporations and governments using it without caring about the impact of the inevitable mistakes and hallucinations. AI is already being used by judges to decide how long people should go to jail. Parole boards are using it to decide who to keep locked up. Governments are using it to decide which people/buildings to bomb. Insurance companies are using it to deny critical health coverage to people. Police are using it to decide who to target and even to write their reports for them.
More and more people are going to get badly screwed over, lose their freedom, or lose their lives because of AI. It'll save time/money for people with more money and power than you or I will ever have though, so there's no fighting it.
We went from living in villages where everyone knew each other to living in big cities where almost everyone is a stranger.
We went from photos being relatively reliable evidence to digital photography where anyone can fake almost anything and even the line between faking and improving is blurred.
We went from mass distribution of media being a massive capital expenditure that only big publishers could afford to something that is free and anonymous for everyone.
We went from a tiny number of people in close proximity being able to initiate a conversation with us to being reachable for everyone who could dial a phone number or send an email message.
Each of these transitions caused big problems. None of these problems have ever been completely solved. But each time we found mitigations that limit the impact of any misuse.
I see the current AI wave as yet another step away from trusting superficial appearances to a world that requires more formal authentication protocols.
Passports were introduced long ago but never properly transitioned into the digital world. Using some unsigned PDF allegedly representing a utility bill as proof of address seems questionable as well. And the way in which social security numbers are used for authentication in the US is nothing short of bizarre.
So I think there are some very low hanging fruits in terms of authentication and digital signatures. We have all the tools to deal with the trust issues caused by generative AI. We just have to use them.
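As a concrete illustration of "we have all the tools": a shared-secret message authentication code lets a recipient check that a message really came from someone holding the key. This is a minimal sketch using only Python's standard library; the function names and the secret are illustrative, not any real protocol.

```python
import hmac
import hashlib

def sign(secret: bytes, message: bytes) -> str:
    # Tag the message with HMAC-SHA256 under the shared secret.
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(secret: bytes, message: bytes, tag: str) -> bool:
    # compare_digest does a constant-time comparison to avoid
    # timing side channels.
    return hmac.compare_digest(sign(secret, message), tag)

secret = b"family-shared-secret"
tag = sign(secret, b"it's really me, send $300")
print(verify(secret, b"it's really me, send $300", tag))  # True
print(verify(secret, b"tampered message", tag))           # False
```

A shared secret like this is essentially the "verification phrase" idea upthread, made machine-checkable; public-key signatures extend the same idea to parties who never exchanged a secret.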
The nature of this tech itself is probably what is getting most people - it looks, sounds and feels _human_ - it's very relatable and easy for a non-tech person to understand it and thus get creeped out. I'd argue there are _far_ more dangerous technologies out there, but no one notices and / or cares because they don't understand the tech in the first place!
It's still early, and I don't see much in corporate communications, for instance, but it will be quite the change.
I guess we'll need an AI secretary to screen all phone calls from now on (the spam folder will become a lot more interesting with celebrity phone calls, your dead relative phoning you, etc.)
has been for years mon ami. i remember when they started talking about GPT-2 here, and then seeing a sea-change in places like reddit and quora
quite visible on HN, esp. in certain threads like those involving brands that market heavily, or discussions of particular countries and politics.
Discovering an asteroid full of gold, with, to put a modest number on it, as much gold as half the Earth, would have a huge impact on the labour market. Mining jobs for anything conductive, like copper or silver, would all go away. Housing would be obsolete too, as we would all live in golden houses. A huge impact on the housing market, yet it doesn't seem such a bad thing to me.
>We're already at a point where we're counselling elders to ignore late-night messages from people claiming to be a relative in need of an urgent wire transfer.
Anyone can prove their identity, or identities, over the wire, wire-fully or wire-lessly, anything you like. When I went to university, I was the only one attending the cryptography class; no one else showed up for a boring class like that. I wrote a story about the Electrona Corp on my blog.
What I've been saying to people for at least two years now is: "Remember when governments were not just some cryptographic algorithms?" Yeah, that's gonna change. Cryptography is here to stay; it is not as dead as people think, and it's gonna make a huge blast.
Probably why it's not released yet. It's unsafe for phishing.
- It helps them sleep at night if their creation doesn't put millions of people out of work.
- Fear of regulation
The world learnt to deal with Nigerian Prince emails, and nobody falls for those anymore. Nothing was changed: no new laws or regulations were needed.
Phishing calls have been going on without an AI for decades.
You can be skeptical and call back. If you know your friends or family, you should always be able to find an alternative way to get in touch without too much effort in the modern connected world.
Just recently, a gang in Spain was arrested for the "son in trouble" scam. No AI was used. Most parents are not fooled by this.
https://www.bbc.com/news/world-europe-68931214
The AI might have some marginal impact, but it does not matter in the big picture of scams. While it is worrisome, it is not a true safety concern.
I second that. I remember when Google search first came out. Within a few days it completely changed my workflow, how I used the Internet, my reading habits. It easily 5-10x'd the value of the Internet for me over a couple of weeks.
LLMs are doing nothing of the sort for me.
ChatGPT does this again for me. I am routinely getting zero useful results on the first page or two of Google searches, but AI is answering or giving me guidance quickly.
Maybe this would not seem like such an improvement if Google's results were what they were 10 years ago and not barely usable blogspam.
That's not to say it won't have more significant impact in the future; I wouldn't know. But so far, I've yet to see the hype get realised.
Don't use it for things you're already an expert in, it can't compare to you yet.
Use it for learning new things, or for things you aren't very good at and don't want to bother with. For these it's incredible.
Perhaps.
> Do you remember when they teased GPT-2 as "too dangerous" for public access? I do. Yet we now have Llama 3 in the wild, which even at the smaller 8B size is about as powerful as the [edit: 6/13/23] GPT-4 release.
The statement was rather more prosaic and less surprising; are you sure it's OpenAI (rather than say all the AI fans and the press) who are hyping?
"""This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.
…
We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems."""
I spent part of yesterday evening sorting my freshly dried t-shirts into 4 distinct piles. I used OpenAI Vision (through BeMyEyes) from my phone. I got a clear description of each and every piece of clothing, including print, colours and brand. I am blind BTW. But I guess you are right, no impact at all.
> Yet we now have Llama 3 in the wild
Yes, great, THANKS Meta, now the scammers have something to work with. That's a wonderful achievement which should be praised! </sarcasm>
That is a really great application of this tech. And definitely qualifies as real-world impact. Thanks for sharing that!
People read too many sci-fi books and then project their fantasies on to real-world technologies. This stuff is incredibly powerful and will have social effects, but it’s not going to replace every single job by next year.
Have you tried asking it to generate a regex to transform your list into a CSV?
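For what it's worth, that transformation barely needs an LLM; a sketch in Python, assuming the list is newline-separated (the function name is illustrative):

```python
import re

def list_to_csv(text: str) -> str:
    # Split on newlines plus any surrounding whitespace,
    # drop blank entries, and join with commas.
    items = [item for item in re.split(r"\s*\n\s*", text.strip()) if item]
    return ",".join(items)

print(list_to_csv("apples\nbananas\ncherries"))  # apples,bananas,cherries
```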
I can't help but notice the huge amount of hindsight and bad faith being demonstrated here. Yes, now we know that the internet did not drown in a flood of bullshit (well, not noticeably more) when GPT-2 was released.
But was it obvious? I certainly thought there was a chance that the amount of blog spam that could be generated effortlessly might just make internet search unusable. You are declaring "hype" when you could also say "very uncertain and conscientious". Is this not something we want the people in charge to be careful with?
Even in this thread people talk about "Oh I use ChatGPT rather than Google search because Google is just stuffed with shit". And on HN there are plenty of discussions about huge portion of reddit threads being regurgitated older comments.
Job seekers currently in college have no idea what is about to hit them in 3-5 years.
Philosophy really seems to be one of the least important subjects right now. Hardly anyone learns about it in school.
If it were so important to success in the wild, then it would stand to reason that we would all work hard at improving our reasoning skills, but very few do.
Even now, they're shipping the text/image 4o but not the new voice mode, while leaving the old voice mode up and confusing/disappointing a whole lot of people. This is a pretty big marketing blunder.
But OpenAI is having a hard time retaining/increasing ChatGPT users. Also, Alphabet's stock is about as valuable as it's ever been. So I don't think we have evidence that this is really challenging Google's search dominance.
Maybe that is GPT-5.
And this release really is just incremental improvements in speed, and tying together a few different existing features.
Go ask any teacher or graphic designer.
Maybe not GPT-2, but in general LLMs and other generative AI types aren't without their downsides.
From companies looking to downsize their staff to replace them with software, to the work of artists/writers being devalued somewhat, to even easier scams and something like the rise of AI girlfriends, which has also gotten some critique, some of those can probably be a net negative.
Even when it's not pearl clutching over the advancements in technology and the social changes that arise, I do wonder how much my own development work will be devalued due to the somewhat lowered entry barrier into the industry and people looking for quick cash, same as with boot camps leading to more saturation. Probably not my position individually (not exactly entry level), but the market as a whole.
It's kind of at a point where I use LLMs for dev work not to fall behind, cause the productivity gains for simple problems and boilerplate are hard to argue with.
I feel like everyone who makes this claim doesn't actually have any data to back it up.
~8 years ago, when self-driving technology was all the rage and every major company was getting on board with ever more impressive technological demos, it seemed entirely reasonable to expect that we'd all be in a world of complete self-driving imminently. I remember mocking somebody online around that time who was pursuing a class C/commercial trucking license. Yet now, years later, there are more truckers than ever, and the tech itself seems further away than ever before. And that's because most have now accepted that progress has basically stalled out in spite of absolutely monumental efforts at moving forward.
So long as LLMs regularly hallucinate, they're not going to be useful for much other than tasks that can accept relatively high rates of failure. And many of those generally creative domains are the ones LLMs are paradoxically the weakest in, like writing. Reading a book written by an LLM would be cruel and unusual punishment given the current state of the art. One domain I do see them completely taking over is search. They work excellently as natural language search engines, and "failure" in such is very poorly defined.
I think what maybe seems not obvious amidst the hype is that there is a hell of a lot of engineering left to do. The fact that you can squash the weights of a neural net down to 3 bits per param and it still works -- is evidence that we have quite a way to go with maturing this technology. Multimodality, improvements to the UX of it, the human-computer interface part of it. Those are fundamental tech things, but they are foremost engineering problems. Getting latency down. Getting efficiency up. Designing the experience, then building it out.
25 years ago, early tech demos on the internet were promising that everyone would do their shopping, entertainment, socializing, etc... online. Breathless hype. 5 years after that, the whole thing crashed, but it never went away. People just needed time to figure out how to use it and what it was useful for, and discover its limitations. 10 years after that, engineering efforts were systematized and applied against the difficult problems that still remained. And now: look at where we are. It just took time.
Let me know when you can get a Waymo to drive you from New York to Montreal in winter.
Meanwhile I've been using ChatGPT at work for _more than a year_ and it's been tremendously helpful to me.
This is not hype, this is not about how AI will change our lives in the future. It's there right here, right now.
The person I originally responded to stated, "We’re moving toward a world where every job will be modeled, and you’ll either be an AI owner, a model architect, an agent/hardware engineer, a technician, or just.. training data." And that's far less likely than us achieving L5 self-driving (if only because driving is quite simple relative to many of the jobs he envisions AI taking over), yet L5 self-driving seems as distant as ever as well.
Yep. So basically they're useful for a vast, immense range of tasks today.
Some things they're not suited for. For example, I've been working on a system to extract certain financial "facts" across SEC filings. ChatGPT has not been helpful at all either with designing or implementing (except to give some broad, obvious hints about things like regular expressions), nor would it be useful if it was used for the actual automation.
But for many, many other tasks -- like design, architecture, brainstorming, marketing, sales, summarisation, step by step thinking through all sorts of processes, it's extremely valuable today. My list of ChatGPT sessions is so long already and I can't imagine life without it now. Going back to Google and random Quora/StackOverflow answers laced with adtech everywhere...
The other day, I saw a demo from a startup (don't remember their name) that uses generative AI to perform financial analysis. The demo showed their AI-powered app basically performing a Google search for some companies, loosely interpreting those Google Stock Market Widgets that are presented in such searches, and then fetching recent news and summarizing them with AI, trying to extract some macro trends.
People were all hyped up about it, saying it will replace financial analysts in no time. From my point of view, that demo is orders of magnitude below the capacity of a single intern who receives the same task.
In short, I have the same perception as you. People are throwing generative AI into everything they can with high expectations, without doing any kind of basic homework to understand its strengths and weaknesses.
But is this not what humans do, universally? We are certainly good at hiding it – and we are all good at coping with it – but my general sense when interacting with society is that there is a large amount of nonsense generated by humans that our systems must and do already have enormous flexibility for.
My sense is that's not an aspect of LLMs we should have any trouble with incorporating smoothly, just by adhering to the safety nets that we built in response to our own deficiencies.
Mapping the genome was that way. On a 20-year schedule, barely any progress for 15, and then poof, done ahead of schedule.
I have a much less "utopian" view of the future. I remember during the renaissance of neural networks (ca. 2010-15) it was said that "more data leads to better models", at a time when researchers frowned upon the term Artificial Intelligence and would rather use Machine Learning. Fast forward a decade and LLMs are very good synthetic data generators that try to mimic human-generated input, and I can't help thinking that this was the sole initial intent of LLMs. And that's it for me. There's not much to hype and no intelligence at all.
What happens now is that human generated input becomes more valuable and every online platform (including minor ones) will have now some form of gatekeeping in place, rather sooner than later. Besides that a lot of work still can't be done in front of a computer in isolation and probably never will, and even if so, automation is not a means to an end. We still don't know how to measure a lot of things and much less how to capture everything as data vectors.
Currently the bottleneck is Agents. If you want a large language model to actually do anything you need an Agent. Agents so far need a human in the loop to keep them sane. Until that problem is solved most human jobs are still safe.
I fully expect GPT 5 (or at the latest 6) to similarly have native inclusion of agentic capabilities either this year or next year, assuming it doesn't already, but is just kept from the public.
Will be like, the end of millions of careers overnight.
It will probably strongly favour places like China and Russia though, where the economy is already strongly reliant on central control.
not quite sure that sanity is a business requirement
I understand that you might be afraid. I believe that a world ruled only by LLM companies is not practically achievable except in some dystopian universe. The likelihood of a world where the only jobs are model architect, engineer, or technician is very, very small.
Instead, let's consider the positive possibilities that LLMs can bring. They can lead to new and exciting opportunities across various fields. For instance, they can serve as a tool to inspire new ideas for writers, artists, and musicians.
I think we are going towards a more collaborative era where computers and humans interact much more. Everything will be a remix :)
Oh, especially since it will be a priority to automate their jobs, or somehow optimize them with an algorithm because that's a self-reinforcing improvement scheme that would give you a huge edge.
GPT-4? Not that well. AI? Definitely.
https://deepmind.google/discover/blog/alphageometry-an-olymp...
So outside of use-cases where the user can quickly verify the result (like picking a decent generated image, etc.), I can't see it being used much.
And guess what: RAG doesn't prevent hallucination. It can reduce it, and there are most certainly areas where it is incredibly useful (I should know, because that's what earns my paycheck), but it's useful despite hallucinations still being a thing, not because we solved that problem.
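For readers unfamiliar with the term: RAG (retrieval-augmented generation) grounds the model by retrieving relevant documents and placing them in the prompt. A toy sketch of the retrieval-and-prompt step, with a naive keyword scorer standing in for a real vector search; every name here is illustrative, not any production system:

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Naive relevance score: count query words appearing in the doc.
    # Real systems use embeddings and vector similarity instead.
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved context is prepended so the model can ground its answer,
    # but nothing *forces* the model to stay within it -- which is why
    # hallucination is reduced, not eliminated.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The warranty covers parts for two years.",
    "Shipping takes five business days.",
]
print(build_prompt("warranty length", docs))
```

The key point for the hallucination discussion is in the second comment: retrieval only changes what the model sees, not how it generates.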
All AIs up to now lack autonomy, so I'd say that until we crack this problem, it is not going to be able to do your job. Autonomy depends on a kind of data that is iterative, multi-turn, and learned from environments, not from static datasets. We have the exact opposite: lots of non-iterative, off-policy (human-made, AI-consumed) text.
But everyone is expecting them to release gpt5 later this year, and it is a bit scary to think what it will be able to do.
1) It's natively multi-modal in a way I don't think gpt4 was.
2) It's at least twice as efficient in terms of compute. Maybe 3 times more efficient, considering the increase in performance.
Combined, those point towards some major breakthroughs having gone into the model. If the quality of the output hasn't gone up THAT much, it's probably because the technological innovations mostly were leveraged (for this version) to reduce costs rather than capabilities.
My guess is that we should expect them to leverage the 2x-3x boost in efficiency in a model that is at least as large as GPT-4 relatively soon, probably this year, unless OpenAI has safety concerns or something and keeps it internal-only.
The evidence for that is the change in the tokenizer. The only way to implement that is to retrain the entire base model from scratch. This implies that GPT-4o is not a fine-tuning of GPT-4: it's a new model, with a new tokenizer, new input and output token types, etc.
They could have called it GPT-5 and everyone would have believed them.
The expectations for gpt5 are sky high. I think we will see a similar jump as 3.5 -> 4.
I assume GPT-5 has to be a heavier, more expensive and slower model initially.
GPT-4o is like an optimisation of GPT-4.
Everything always starts as a toy.
That includes, beyond literal killers, all kinds of manufacturing, construction, and service work.
I would expect a LOT of funds to go into researching all sorts of actuators, artificial muscles, and any other technology that will be useful in building better robots.
Companies that can get and maintain a lead in such technologies may reach a position similar to what US Steel had in the 19th century.
That could be the next Nvidia.
I would not be at all surprised if we will have a robot in the house in 10 years that can clean and do the dishes, and that is built using basically the same parts as the robots that replace our soldiers and the police.
Who will ultimately control them, though?
If you had an ASI? I don’t think you’d need a lot of funds to go into this area anymore ? Presumably it would all be solved overnight.
This is no different to saying a person with a gun murdered someone rather than attributing the murder to the gun. An AI gun is just a really fancy gun.
There may come a time where we grow so accustomed to this, that the decision is so heavily influenced by AI, that we believe it more than human decisions.
And then it can very well kill a human through misdiagnosis.
I think it is important to not just put this thought aside, but to evaluate all risks.
A prompt is a _very_ different matter.
And “guns don’t kill people, people kill people”¹ is a bad argument created by the people who benefit from the proliferation of guns, so it’s very weird that you’re using that as if it were a valid argument. It isn’t. It’s baffling anyone still has to make this point: easy access and availability of guns makes them more likely to be used. A gun which does not exist is a gun which cannot be used by a person to murder another.
It’s also worth noting the exact words of the person you’re responding to (emphasis mine):
> It can also murder people, and it will continue being used for that.
Being used. As in, they’re not saying that AI kills on its own, but that it’s used for it. Presumably by people. Which doesn’t contradict your point.
¹ https://en.wikipedia.org/wiki/Guns_don%27t_kill_people,_peop...
What's scary and cringey are your delusions.
My guess is the future belongs to those who don't stop—who, in fact, embrace the opposite of stopping.
I would even suggest that the present belongs to those who didn't stop. It may be too late for normal people to ever catch up by the time we realize the trick that was played on us.
Varying degrees of greedy / restless / hungry / thirsty / lustful are what we've got, because how is contentedness ever going to compete with that over millennia?