Every "AI"-related business idea I've seen crop up recently is people just hooking a textbox up to ChatGPT's API and pretending they're doing something novel or impressive, presumably to cash in on VC money ASAP. The Notion AI is an absolute fucking joke of epic proportions in its uselessness, yet they keep pushing it in every newsletter.
And a funny personal anecdote: a colleague of mine (who works support) tried to use GPT-4 when answering a customer question. The customer instantly knew it was AI-generated and was quite pissed about it, so the support team now has an unofficial rule not to do that any more.
Comparisons between AI and crypto are horribly misguided IMO.
Is AI overhyped? Sure. However -
AI/ML is creating utility everywhere in our lives - speech to text, language translation, recommendation engines, relevancy ranking in search, computer vision, etc. and seems to be getting embedded in more and more processes by the day.
Crypto never amounted to anything beyond a currency for black-market transactions, a vehicle for speculation, and a platform for creating financial scams.
Well, apart from the fact that ChatGPT is really incapable of developing a thought, and apart from the fact that half the people using it will fail to delete sentences like "I'm a language model, so I can't..." (insert gist of question here), it's painfully obvious when something is LLM-generated.
The moment a sentence like "it's crucial to remember" pops up, I know what this is. Then, there's also the element that it always sounds like it's speaking to a child, and it avoids actually saying things unequivocally without some sort of disclaimer, as the legal department's CYA filter will ensure.
I remain thoroughly unimpressed by the entire venture. If this is Skynet 1.0, we're all safe for centuries to come.
They're gonna ruin even this technology with their hype marketing bullshit.
The issue I have with all the hype is not the technology itself (it will stick around in one form or another as a better interface for general-purpose instructions), but rather the scams and frauds that come with it.
Empty marketing promises that anybody advanced enough realizes cannot be true. And as a concept, GPT is just throwing more averaged neurons at the problem instead of training more specialized expert transformers for multiple knowledge categories. Anybody remember IBM Watson?
The reason I always say I don't do AI work is that people tend to think sensor processing (and neural nets that reduce a min/max problem space) already is AI. And I think it isn't. AI is where a Bayesian approach is the bare minimum for dealing with strategic decision-making processes.
(Which probably makes this comment go to hell with downvotes but who cares :D)
I've already used the OpenAI API to automate several genuinely difficult things for myself, mostly acting as a translator from natural language to structured output.
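Roughly what that looks like, as a minimal sketch (this assumes the pre-1.0 `openai` Python package with an API key in the environment; the model, prompt, and field names are just illustrative):

```python
import json
import openai  # assumes the pre-1.0 openai package and OPENAI_API_KEY in the environment

def extract_order(text):
    # Ask the model to translate free-form text into a fixed JSON shape.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Reply with JSON only, using the keys item, quantity, and deliver_by."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    # The model can still return malformed JSON; a real version would validate this.
    return json.loads(resp.choices[0].message.content)

print(extract_order("Send me three boxes of the blue widgets by next Friday."))
```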
I do agree it is massively overhyped and there will be an inevitable sentiment correction.
My team raised a support issue with one of our suppliers due to some unexpected API behaviour and got an unusually flowery reply that completely contradicted the API documentation... fairly sure that was ChatGPT.
Honestly I'm not that bothered about LLMs here, as they could be helpful in customer support, particularly when agents aren't fluent English speakers (or just when you're trying to stay polite in adverse circumstances), but some basic proofreading would help. And don't let it hallucinate APIs.
Crypto was a great promise when it was invented, and one can argue that it could have had many real-world uses, but it failed to live up to that expectation. One of the reasons is that it was overhyped way too quickly and ultimately became a tool for get-rich-quick schemes, speculation, scams, dark-web payments, etc.
AI is already much better in terms of actual uses, BUT the hype is dangerous and we need to be careful. I see a lot of people starting "X-GPT.com" apps and touting 10K MRR in 2 months and whatnot. This is what worries me. Every Tom, Dick and Harry is starting yet another AI tool. It can't be because they are so excited; it is because they see it as the new crypto to get rich quick.
Overall, I unfortunately think AI is the new crypto, not because it has no real-world application (it does, and it's a lot better than crypto), but because of the hype and everyone trying to cash in on it.
This stuff can already do impressive things and it's only getting better.
Douglas Hofstadter and Geoffrey Hinton both think that we are on the path to humans eventually being surpassed.
I would urge everyone to hold back their instinctive reaction to the usual SV hype and go try GPT-4, Claude+, Midjourney, and RunwayML for a few weeks and come to their own conclusions.
And the answers below, reducing the industry to "drugs", are in the same vein. Stuff like Chainlink working with Swift is not common knowledge, and even if it is, it is considered just another nothingburger.
A 10% efficiency boost that some programmers are experiencing could translate into an extra 5 weeks off if you are smart about it, so it is quite life changing for some.
I don't even start talking to it without the first instruction being "answer all following questions in the shortest form possible"
This cuts out 90% of the useless output such models generate.
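If you're doing that through the API instead of the web UI, the same trick is just a system message up front; a minimal sketch assuming the pre-1.0 `openai` package (the model and question are placeholders):

```python
import openai  # assumes the pre-1.0 openai package and OPENAI_API_KEY in the environment

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The terseness instruction comes first and applies to everything after it.
        {"role": "system", "content": "Answer all following questions in the shortest form possible."},
        {"role": "user", "content": "What is the time complexity of binary search?"},
    ],
)
print(resp.choices[0].message.content)  # ideally something short like "O(log n)"
```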
I wonder if this is a regional thing, because traveling between the US Northeast and West coast I've found entirely the opposite.
I've had non-technical friends reach out to me in a panic, worried that AI will disrupt humanity in just a few years, and even my 90-year-old non-technical grandmother recently remarked about her fears about what AI would bring in the next 5 years.
And among technical people: ever since I posted on my LinkedIn about getting an AI-related role, I've been bombarded with old acquaintances trying to get ahead of the AI boom.
The funny thing is that I personally think that "AI" is useful but wildly over hyped right now. I do think it has some uses, but they aren't going to change the world in any fundamental way (but hey, if I'm wrong at least I'm in the right field).
Meanwhile, there are tons of applications you use every day (and have for YEARS) using “AI”/ML for document search, text suggestion, NLP/NLU, intent recognition, STT/TTS, image similarity/classification/search and a myriad of other tasks. LLMs have sucked all of the oxygen out of the room and there are tons of “AI” companies/“engineers” now who have never even heard of any of these and are doing all kinds of bizarre (wrong) things to wedge these tasks into LLMs.
I cringe when I see people all of a sudden jumping on the “AI” hype train thinking an LLM (or even the ML approaches I listed) is a universal solution to everything. They are interesting and have use cases but please stop.
Fair.
I'm definitely big on where it will be in the future (Iain M. Banks quote about Minds being the next thing to gods and on the other side), but there are a lot of grifters who are easy to spot with the following thought experiment:
If ChatGPT could actually, to use an example I've seen, "write a best selling novel", why is OpenAI selling you access to the API instead of writing all those books and selling them directly?
By the time someone has read a second paragraph, they have internalized what was at the beginning, and to be told that those two paragraphs were fiction is now to attack the reader instead of the text.
As though reading were so laborious that a sunk cost fallacy kicks in.
One thing I will say is that it is a decent editor. If you feed it a document, it will produce pretty good suggestions about improvements etc.
It's supposed to be the other way around.
We're still waiting ...
Doesn't the next coming of Christ involve Rapture and Armageddon, if you take the people who believe in him seriously, which I wouldn't recommend?
And for that matter, aren't the people who believe in Christ all living in a delusional bubble inside a hermetically sealed reality denying echo chamber of over-promised and under-delivered miracles, which has been going on for thousands of years?
Can you name any AI companies who are doing anything as outrageous as declaring crackers are flesh and wine is blood, and that eating them will save your eternal soul (while calling for refusing to save the souls of anyone who supports gay marriage or abortion), and who even invented a special word "Transubstantiation" that tries to explain why you shouldn't trust your own lying eyes, and instead unquestioningly believe their unsubstantiated unscientific easily disproven claptrap, dogma, and brutally violent fairy tales?
https://thehill.com/homenews/administration/3764086-same-sex...
>Conservative Catholic bishops had called for the church not to offer communion to Biden or other pro-abortion rights politicians, but, in November of last year, the USCCB signaled an end to the debate by issuing a document on communion without mentioning the president or other politicians.
https://en.wikipedia.org/wiki/Transubstantiation
>Transubstantiation (Latin: transubstantiatio; Greek: μετουσίωσις metousiosis) is, according to the teaching of the Catholic Church, "the change of the whole substance of bread into the substance of the Body of Christ and of the whole substance of wine into the substance of the Blood of Christ". This change is brought about in the eucharistic prayer through the efficacy of the word of Christ and by the action of the Holy Spirit. However, "the outward characteristics of bread and wine, that is the 'eucharistic species', remain unaltered". In this teaching, the notions of "substance" and "transubstantiation" are not linked with any particular theory of metaphysics.
At least the AI echo chamber isn't literally over-promising salvation and eternal life like religion has for millennia, and it hasn't been perpetuated by governments and wars and crusades and inquisitions for thousands of years, like religion inflicts on society.
AI has got a LONG LONG way to go and a shitload more people to torture and kill before it sinks to the level of religion and promises of the second coming of Christ, and it's already delivering a hell of a lot more useful tangible benefits than any religion ever did or ever will.
Setting aside the science behind it, it currently feels like the same hype as crypto a couple of years ago.
A week after ChatGPT came out, plenty of people were already using it for writing code, emails, and plenty of other tasks. It would be weird to argue that AI is not going to have a massive impact at many levels.
I keep seeing this take and it doesn't make any sense to me. Some tech has obvious utility and some doesn't.
For example, I knew the internet (web, email) was valuable when I discovered it, because accessing information and communicating with people were already things I did - the internet unequivocally made them faster and easier and often cheaper.
ChatGPT / Bard gave me a similar vibe - I use them to brainstorm and shape ideas, and as mentioned elsewhere, they do a great job of tasks like drafting a job ad. These are things people already do and this tech just makes them better. So people will use it.
In contrast, I "get" why people were excited by crypto but I don't personally know anyone whose payment/banking experience is improved by it. As an American for example there was nothing tangible Bitcoin made easier vs my bank account and visa. So it was always less "obvious" that it was going to be a valuable thing beyond hype.
My take is that, for most non-tech people, this is their first experience directing a computer to perform a precise task - i.e., programming. They're accustomed to using applications, not having a hand in making them. Maybe they've used Excel before and felt a bit of this power. But ChatGPT allows them to dream up a novel idea and get the computer to execute it - something that feels like magic to non-tech folks but is rather pedestrian for most of us in tech.
In other words, the tech is real and amazing, but it doesn't seem as immediately useful in the short term as some people expect.
Same, and they are using it. Tech tends to look at the exception cases or tries to use it for exact-answer types of things. But non-tech people are happily using it as they would Google: to come up with ideas for parties and events, write the mundane emails many people have to send, etc. I've been using it to bounce ideas off of and build things like marketing plans. Basically dynamic templates. The challenge right now is prompt engineering.
This feels like a common pattern; a few years back many of my non-tech friends believed self-driving cars were coming imminently, whereas ~no-one working in tech believed that.
It's just cargo culting and wishful thinking. People staring into the magic mirror, hoping it will clone their desires.
The phrase cargo culting, or some meaningful equivalent, should make a comeback. Even after the 'great youtubening' of technical knowledge, it is easy to find people stuck in habituated imitation of tech skills and tech talk.
I love that people are interested, but cargo culting is no good water to drink from.
My family who are mostly non-technical have heard of ChatGPT but couldn't tell you what it is or does.
In tech I see a wide spectrum of skepticism, with many people using it well and getting a lot of value out of it, and many remaining skeptical or having not found a good way to integrate it into their workflow yet.
To their credit, they said, "wtf would we want to do that?!"
Which part of the industry are you working in?
I work for Google and hear a lot of talk about LLMs from my serious colleagues.
I'm sure pretty much every accountant in the world got up and went to work that day exactly like they'd been doing all of their career. It probably didn't feel different for that many of them. Most of them had probably never used a computer then, and a lot of them probably didn't feel any particular need to, at least until they tried it. There were probably more than a few near the end of their careers who managed to keep doing their job the old way for another half decade or whole decade, because even when the future moves fast it's never evenly distributed.
There were probably also more than a few who saw VisiCalc and bought an Apple II to start doing their own books and ended up regretting it. I don't know where we are and how things will pan out, but I think there's a parallel.
ChatGPT made some waves at the end of last year. My in-laws were wanting to talk to (at) me about it at Christmas. There's plenty of awareness outside of tech circles, but most of the discussion (both in and out of the tech world) seems to miss what LLMs actually _are_.
The reason why ChatGPT was impressive to me wasn't the "realism" of the responses... It was how quickly it could classify and chain inputs/outputs. It's super impressive tech, but like... It's not AI. As accurate as it may ever seem, it's simply not actually aware of what it's saying. "Hallucinations" is a fun term, but it's not hallucinating information, it's just guessing at the next token to write because that's all it ever does.
If it was "intelligent" it would be able to recognise a limitation in its knowledge and _not_ hallucinate information. But it can't. Because it doesn't know anything. Correct answers are just as hallucinatory as incorrect answers because it's the exact same mechanism that produces them - there's just better probabilities.
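For what it's worth, the "it just guesses the next token" loop is easy to see with a small open model; a rough sketch with GPT-2 via Hugging Face `transformers`, greedy decoding only, purely to illustrate the mechanism:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits      # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()    # greedily pick the single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

# Right or wrong, the continuation comes out of exactly the same mechanism.
print(tok.decode(ids[0]))
```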
I don't claim or believe that any LLM is actually intelligent. It just seems that we (at least on an individual basis) can also meet the criteria outlined above. I know plenty of people who are confidently incorrect and appear unwilling to learn or accept their own limitations, myself included.
In my opinion, even if we did have AGI it would still exhibit a lot of our foibles given that we'd be the only ones teaching it.
Conflating intelligence and awareness seems to me the biggest confusion around this topic.
When non-technical people ask me about it, I ask them to consider three questions:
- Is it alive?
- Does it think?
- Can it speak (and understand)?
A plant, microbe, primitive animals... are alive, don't think, can't speak.
A dog, a monkey... are alive, think, can't speak.
A human is alive, thinks, can speak.
These things aren't alive, they think, and they can speak.
I know some of the above will be controversial, but it clicks for most people, who agree: if you have a dog, you know what I mean by "a dog thinks". Not with words, but they're capable of intricate reasoning and strategies.
Intelligence can be mechanical, the same as force. For someone from ancient times, the concept of an engine would have been weird. Only living beings were thought to move on their own. When a physical process manifested complex behaviour, they said a spirit was behind it.
Intelligence doesn't need awareness. You can have disembodied pieces of intelligence. That's what Google, Facebook, etc. have been doing for a long time. They're AI companies.
It doesn't help with the confusion that speaking is a harder condition than thinking and thinking seems to be harder than being alive: "these things aren't alive so they can't think" but they speak, so...
My sister is not technically minded. She has used ChatGPT to create a permit request to city hall for a backyard deck, a letter to her boss requesting attendance at a conference, and her performance reviews.
Another non techie and I use chatgpt to help us learn French and music theory.
My mother in law uses it to create funny rhyming stories for kids.
Meanwhile the techie friends of mine are... completely ignoring it. My two best friends are a VMware senior architect and a Java developer / tech manager, and I've been urging them for months to try it.
So I personally live in a situation completely opposite to the one you describe :-). Techies are skeptical of the toy, and endlessly discuss its limitations and impact. Non-techies are just using it as a tool.
1. Our jobs are obviously (soon) replaceable or at least massively impacted - just like all the jobs we ourselves have replaced previously. We don't like that and throw stones at it to make it go away.
2. We used to be the go-to source for all things technical; now (soon) people can just ask ChatGPT and get a better answer. We don't like that and throw stones at it to make it go away.
3. We are (most of us) schooled to think that a script, a program (made by us) produces a predictable result: every letter has to be carefully placed or the whole thing comes crashing down. This AI thing doesn't work like that at all, which we don't like and etc.
4. The answers given to us by the chat bots are sometimes just "hallucinations", because the tech isn't fully mature yet. We don't like that.
More?
It is basically like a lubricant for my thought process.
Also, we're all calling it "AI" for some dumbass reason. That infects our thoughts with unwarranted credit toward the technology.
After all, an artifice is: (the use of) a clever trick or something intended to deceive.
But the more surprising thing is that even after I explained what it could do, they weren’t even slightly impressed. “So it tells you wrong answers? Sounds utterly pointless”
I was flabbergasted. I think we do live in a bubble. I’m sure architects and surgeons have equally exciting advances in their fields that no one else cares about.
I also think the media's sensationalist phrases should be taken with a pinch of salt: "The tech everyone is talking about", "here's the news that got everyone buzzing", etc.
I do think LLMs are a significant event, but that will only be realised by building on top of them and "showing rather than telling".
Could you tell a bit more about why this objection surprised you so much? I often see this in the same tech circles that rejected the web3 craze; and I have to confess, it does sound reasonable to me. There is a recent PR opened in the MDN docs repository after MDN added the "AI explain" button, pointing out how utterly useless an AI guide is if it gives you incorrect answers and you do not have enough knowledge to catch it [0].
It's an impressive technical feat to mimic human writing and drawing with this level of fidelity, but until it can make the leap to knowing what it's talking about - and there's no clear basis for assurance that the current vector will get us there - real-world use cases are mostly limited to replacing some of the more boring email jobs.
Could be because they haven't actually tried it. There are a surprising number of applications where something that gives wrong answers part of the time can still be useful.
Within tech the biggest constraint I'm seeing is a failure of our imagination on how this tech could be used, so far we've limited our interactions to that which we've been shown... chat bots, and if that's all we can imagine then this is definitely a hype cycle.
But when I speak to those outside of tech, who are not constrained to imagine what they've seen, then I see and hear much different things. It's not the second coming, it's not going to make the whole world redundant, but it is a change and for the most part non-tech people seem more eager to get there. At least, I'm surrounded by positive people who seem to hope that the most mundane aspects of our lives will be replaced by AI and will lead to some qualitative improvement of life (ignoring the cost of living crunch presently hitting most of them).
* All the techies I know have heard about and used it, and most have a healthy dose of skepticism paired with some optimism that it can be used to help solve some previously hard to solve problems. This alone, I think, makes it clear that LLM has staying power in a way that blockchain did not: obvious use cases.
* The normies in my life run the whole spectrum from "never heard of it" to "use it at least sometimes." One interesting subset are the folks who have heard a LOT about it but haven't used it. My lawyer said he had already attended panel discussions about the ethical implications of AI usage in law. I asked if he had USED ChatGPT and he said no; I had to direct him to the URL and walk him through signing up so he could see it for himself. And he's pretty tech-savvy as non-techies go.
Burying the lede, now. Here is my unpopular opinion: there is an outsized "wow factor" when you specifically use ChatGPT because of the fact that it outputs the text seemingly in real-time. It makes it look like it's thinking/talking and viscerally our minds are blown. Bing and Bard generate the response in the background and output it all at once like a search result, doesn't hit the same way.
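That real-time effect is just token streaming surfaced in the UI. A minimal sketch of the same thing over the API, again assuming the pre-1.0 `openai` package:

```python
import openai  # assumes the pre-1.0 openai package and OPENAI_API_KEY in the environment

stream = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain what an LLM is in two sentences."}],
    stream=True,  # yield chunks as they are generated instead of one final response
)
for chunk in stream:
    # Each chunk carries at most a few tokens; printing them as they arrive
    # gives the "it's typing to me" effect.
    print(chunk.choices[0].delta.get("content", ""), end="", flush=True)
print()
```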
She said she's heard a lot about it, but it all sounds like marketing hype, tabloid sensationalism, or open alarmism from people who don't seem to know what they're talking about.
She said some of her students had used ChatGPT for creating revision materials for their exams, and that they'd found it useful for that, but she found the assertion that mass unemployment is 6-12 months away 'pure speculation'.
Who in their right mind has been asserting that?
I've found numerous places to use generative AI in my business. None of the use cases involve laying people off. A large portion of the use cases are adding something new to the business, to help us achieve something that wouldn't have been practical if we had to pay a person to do it. The next batch is preloading a human's plate so they can do a quick verification and some small edits, and then it's off to the customer, allowing the subject matter experts to handle a higher volume, which directly relates to revenue. The third set of use cases involves helping creative employees be more creative.
Not a single use case I've evaluated (and there's a bunch) will put anyone out of work. It might give us more room to hire more people, frankly. At least in my business, and I suspect my business is not unusual.
It's like the paperless office. Sure eventually AI might get good enough to replace people, but we'll probably just end up hiring more people in the middle since a person with AI can do so much more revenue generating work.
In my work I've yet to find a use for NNs, but maybe for the writing of a lot of templates in one go it could be useful.
I'm not surprised the spam industry is using ChatGPT; it does seem pretty well suited to writing useless text.
I think there is certainly more talk about it inside tech circles, but I feel like it's more of a generational divide than anything else.
It can write lectures, generate worksheets, come up with interesting quizzes, D&D stuff.
It's just a tool, some people will find it useful, some won't. A bit like a chainsaw.
They don’t always. This sucked the wind out of your whole argument. https://en.wikipedia.org/wiki/List_of_driver-less_train_syst...
AI looks to be great for ideation and development. But it's not there if you value a high quality output.
This is the real world. Manufacturing. Logistics. Managing people. Building tools for streamlining one tiny part of a workflow and getting people to use them effectively.
This latest client has opened up my eyes to the fact that I've been living in a bubble. I could not be more glad, as I was beginning to suspect that OpenAI is taking over the world.
Question: Do you know about ChatGPT? If yes, have you used it?
(n = 1056 in both March and June)
March 2023
Know about it and have used it: 4.8%
Know about it but haven’t used it: 25.5%
Don’t know about it: 69.7%
June 2023
Know about it and have used it: 15.2%
Know about it but haven’t used it: 55.8%
Don’t know about it: 29.1%
The survey also suggests that awareness is higher among younger people and that usage is higher among males.
Search results at Amazon Japan and Amazon USA show that many books are being published about ChatGPT in both Japanese and English [2, 3]. Quite a few Japanese magazines have had cover stories about it in recent months [4].
[1] https://markezine.jp/article/detail/42509
[2] https://www.amazon.co.jp/s?k=ChatGPT
[3] https://www.amazon.com/s?k=ChatGPT&ref=nav_bb_sb
[4] https://www.amazon.co.jp/s?k=ChatGPT&rh=n%3A13384021&__mk_ja...
[1] https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...
I don't think AI is the second coming of Christ like some pretend (it is over-hyped outside of tech and somewhat under-hyped inside of tech IMHO), but it's impressive and extremely useful. I've used it many times to help point me in the right direction. I never take what it says as fact, I always "check its work", but it often saves me considerable amounts of time by getting me on the right track, and then I can refine what it gave me. I don't use it for writing "real" code (aside from maybe a few small algorithms), but I do use it to help with certain debugging tasks if I think it will be useful. Also for things like "spit me out a shell script that takes a CSV and does X, Y, Z" it's incredibly useful. These are normally one-off tasks that I could do by hand or code if needed, but ChatGPT makes it way easier.
When it comes to writing, I'm often faced with "Blank Screen Syndrome" or a similar type of feeling and so getting something on the screen that I can then edit/revise/improve/fix is a huge boon to my productivity.
Note that every "AI summer" prior to this one has produced something useful, just never all that world-changing compared to people's expectations. Most people think that if it can BS (excuse me, generate convincing text), then it can do lots of other jobs. Well, in previous AI summers, people thought that if it can play chess, or answer Jeopardy questions, it could do many other things that it turned out, it could not do (or could not do well enough).
For that matter, the ability to do math, at one time, was thought of as a sign of great intelligence. But, it turned out that computers could do math, long before they could do anything else. Our intuition about, "if it can do this, soon it will be able to do that", is not very good.
I have heard ChatGPT described as a better autosuggest, which sounds about right. It's not that autosuggest isn't useful, it can be very useful, but it's not a thing that is going to change the world, and the jobs which it will automate are neither numerous, nor very well paid even now.
If you're trying to pump that VC hype machine for $$, though, cryptocurrency is not going to work anymore, so they need something.
But this causes people to literally fail to notice that there are many things in tech that aren't just hype. I mean, literally think about it: the internet wasn't just hype, the smartphone wasn't just hype. There are millions of things that weren't just hype.
>I have heard ChatGPT described as a better autosuggest, which sounds about right. It's not that autosuggest isn't useful, it can be very useful, but it's not a thing that is going to change the world, and the jobs which it will automate are neither numerous, nor very well paid even now.
This is a poor characterization. ChatGPT has the capability of answering extremely complex questions with novel answers that are completely indistinguishable from human answers. And remember, many of these answers are novel, meaning it isn't just copying an answer from somewhere.
There are of course huge problems with our ability to control chatGPT to give us the correct answers consistently but the fact that it can even do the above 50% of the time is a feat that moves the needle far beyond a mere "auto suggest". All you need to do is increase that 50% rate and suddenly it can auto suggest you out of an entire career. Cross your fingers and hope token prediction is just a technological dead end and that we can't really raise the correctness rate past 50%. In many instances of project development getting to 50% is often the hard part and getting to 100% could be easier.
[0] https://villekuosmanen.medium.com/i-played-chess-against-cha...
I mean, I'm in "tech", and I don't anticipate it happening soon. People want something they can trust, and current solutions are nowhere near that.
Not all tasks need to be 100% accurate, and, to be honest, people are not known for their trustworthiness either.
[1] https://news.ycombinator.com/item?id=36097900
[2] https://futurism.com/neoscope/microsoft-doctors-chatgpt-pati...
[3] https://github.com/features/copilot
[4] https://www.electropages.com/blog/2023/06/researchers-demons...
Now you can hand Lorem Ipsum in for homework!
To expand, and to tell a very recent story: for work I used ChatGPT to help me write a 500-line bash script to automate a bunch of stuff. It took me around 1 day versus the 5 it would have taken me if I had gone the Google/DDG/Stack Overflow route of slowly crawling through outdated content and SEO noise to find the signals.
It worked, completely!
Solely from that experience, I'm convinced that it's not just farts and kool-aid. To go one step further, even, I'd say that anybody who doesn't at least have a cursory awareness of AI is in fact the one in the "echo chamber", isolated from the possible.
It's not the do-all solution, of course, but in certain scenarios, it's quite obviously, trivially demonstrably, revolutionary.
My spouse works in early childhood education and they use ChatGPT routinely for low-value boilerplate stuff (social media posts that no one reads, etc).
A relative is in commercial real-estate management, and they also use ChatGPT routinely (in fact they started using it before I did).
So I don't think it's an echo chamber.
Everything from writing emails to suppliers, correcting grammar, responding to customers, brainstorming new product ideas, explaining contracts etc.
Whereas for me, I can't find a use for it. Even as a coding assistant, I find that I spend more time trying to understand/correct what it did than if I just wrote it myself.
But I still feel most of them are in the 'google but nicer' stage.
But AI tools are mainstream now. You can hear them mentioned on TV, written about online and in traditional media, see them discussed on social media in non-tech circles, etc.
So I'd say your friend is likely not well informed. AI tools are in the same arena as cryptocurrencies, perhaps slightly less widely known. Most informed people have heard about them, but not many outside of tech circles actually have any experience with them, and even fewer understand how they work.
This is the natural progression of technology[1]. We've seen it happen with computers, the internet, the web, cell phones, etc.
[1]: https://en.wikipedia.org/wiki/Technology_adoption_life_cycle
People are unavoidably ignorant of vast swathes of “probably relevant to them” things. No one has time, inclination, or actual ability to keep up with it all. That reinforces the above but I am again doubtful that it is an “echo chamber” per se.
Hacker News regulars, especially those engaged with comments, are operating within an echo chamber, sure.
Outside of a couple of 70+ folks and some bikers, everyone I know is at least conversationally aware of recent “AI” developments so it’s most likely a function of your particular uselessly small sample size. :)
By applying the label AI, they bring in the connotations of all the stories we have told ourselves about djinn, golems, robots, HAL-9000, Skynet, and (not frequently enough) the Sirius Cybernetics Corporation.
This will affect us all, in tech or outside tech.
In fact, I'd bet that the opposite will happen WRT AI-generated content: individual writers with personality will become even more in-demand. ChatGPT isn't going to fake a "personality" anytime soon.
>Overall, 18% of U.S. adults have heard a lot about ChatGPT, while 39% have heard a little and 42% have heard nothing at all.
>However, few U.S. adults have themselves used ChatGPT for any purpose. Just 14% of all U.S. adults say they have used it for entertainment, to learn something new, or for their work.
https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...
And this is considered "few adults" ?!
I wrote up my thoughts about it last month https://kyledrake.com/writings/ai
Christopher Nolan just finished a movie profile of J. Robert Oppenheimer, has presumably spent a lot of time thinking about nuclear weapons, and has similar, lesser concerns about AI https://www.wired.com/story/christopher-nolan-oppenheimer-ai...
Two anecdotes:
1. I saw a posting for a prompt engineer which had virtually no requirements beyond some passing familiarity with LLMs, whose job it was to think up clever prompts and archive them in a library. Salary - $350k+
2. I heard a real conversation between two highly trained technical folks around using an LLM to do a simple data transform from one wire format to another. Yes, let's use a cluster of GPUs and some faulty hacked together prompt to transform a well written structured format to another at speeds that approach molasses. Nobody had a clue as to how much the runtime costs of this would be. The solution to it being slow? Add more clusters. -- absolute idiocy.
We're burning money on this stuff like it's mid-80s Japan spending money on slightly different variations of pocket calculators and American real estate. Meanwhile we're exposing Kenyan workers to some of the worst filth humanity can produce, in an effort to keep one of these AIs from producing child gore porn, because it's illegal to pay first-world people $2/hr to do the same job -- and there's not a psychotherapist to be found anywhere in the chain.
And then it's being pushed at the regular consumer as if it's some kind of knowledge oracle to replace the "horrors" of the search box that:
a) won't only know what the state of the world was when it was trained 2 years ago
b) won't produce a worthless hallucinated answer that could send somebody off to take a poison for a cold
This shit is terrible and I just used it to give me advice on updating my resume a few weeks ago for a job in the field.
Fuck it.
I.. can totally hear it in my mind.. including ISO20022 requirements happy talk.
My initial take on the posed question is that it isn't just tech. Business orgs appear to be jumping in with both feet (thankfully, my little corner of the universe seems more conservative for now). Still, it does not disprove your statement that we are in a heavy hype train now.
edit: I do take issue with the wire format being well written (or structured) at this point. There is a reason SWIFT had to back down a little on the aggressive timeline, for example.
I can't stop shaking my head whenever I read any article on AI written by non-tech journalists (and even many by tech journalists). AI is vastly more dangerous and more urgent than climate change, than Russia and China, than literally any other hot topic today, and it's being treated like a combination of a science fiction story and an entertainment tool.
No it isn't, and I say that as a user and Integrator of ML tech.
Climate Change: Literally our planetary habitat becoming uninhabitable for our species.
Russia: A country with one of the largest nuclear arsenals literally threatening a large portion of the world with nuclear annihilation.
Excuse me when the prospect of changes in the economic landscape for white collar jobs doesn't look particularly frightening compared to such problems. Especially since we live in a day and age where 2 entire generations have lived most of their lives in almost constant economic upheavals anyway.
As for all the AI-doomerism that's flying around the net: as long as no one can even give me a precise, quantifiable definition of "general intelligence", i.e. one that doesn't amount to pointing at ourselves, and a method to measure how far AI is from it, I will work under the assumptions confirmed by what is measurable and observable: that what we have are still stochastic inference engines.
But enough unemployment caused by the next wave of automation is sure going to cause civil unrest.
Look at the Chartists, the Luddites, and the saboteurs. Some of them were weavers who up until that point had been in high society, running parts of countries through the guild system. Then, over a couple of years, the bottom fell out and they were cast into the mills like the unlanded labourers.
That, was not a smooth transition.
The people that claim "oh there will be new jobs" I mean sure, there probably will be, but they forget to mention the important qualifier: "eventually"
No, it's not. And most (not quite all, there are some genuine nutballs) of the people selling that idea are selling it to push a political agenda attached to their financial interest (mostly, in AI: either by pushing AI danger to advance competition-limiting regulation, or by pushing the kind of AI danger that is not actually imminent with hyperbolic language to distract from the real and present issues with AI, and sometimes both.)
AI -might- kill us if there is some quantum leap that makes it sentient.
Climate change -will- kill us if we do nothing.
For now, one is simply not like the other. It may change, but there is no guarantee we ever breach that barrier.
Also the high school student I am mentoring has told me that he knows people feeding chat gpt their essay writing styles and asking it to write about other topics with the same style.
This is in the Midwest.
This discipline about bullshitting people does itself consist mostly of bullshit being poorly taught by outdated teachers on outdated courses.
It's pretty common for students to bullshit their clueless dinosaur of a teacher with modern technologies that make bullshit look legit.
My friend is a librarian at a high school. She tells me teachers are worried about what ChatGPT means for take-home essays. (I'm guessing 2023 is the year these stop getting assigned)
Everyone in the world with internet now has access to a personal research assistant. This thing gives better medical advice than doctors!! Give it 3 years and let's see who hasn't heard of ChatGPT.
Not saying that it's still not overhyped, but maybe the AI skeptics among the tech community are in their own echo chamber too?
People who aren't in tech or aren't informed don't know about the latest development in tech, same reason you don't know about the latest development in healthcare or construction...
But, and I'll only speak for myself, GPT allowed me to add more features to my side project in 2 months than I did in 3 years... If you know how to use it then it becomes a great tool, the 16k context 3.5 or the 32k tokens context in GPT 4 (when necessary because expensive) are really good and can do a lot and infer context and assume things about your project etc.
You shouldn't be a contrarian just to be a contrarian. The same thing happened with the blockchain: HN is still shitting on it in 2023, but I just transferred some crypto to my sister in another country instantly, where if I had to do it using the usual financial system it would involve a lot more headache and usually take a couple of days... and all of this on the Ethereum proof-of-stake blockchain, aka not as power hungry as it was. This is objective value it adds.
Now the contrarians usually see posts written by 90 IQ "journalists" (bots) that say "AI will make humans irrelevant" or "blockchain will kill all banks" and they start responding against this but also miss the actual value these things add, of course chatGPT will not build you a house that's obvious and only another bot would argue for and against it, but it can help you become a lot more efficient.
But how active are they? Have they had 100M users briefly dally with it and ask a few banal questions to pick apart the answers, or have they got 100M dedicated repeat users who are heavily using it and keeping on it?
My friends: very, very much in tech, we have a channel in our discord where we just laugh at AI/ML/Musk/Crypto because it's so stupid. It can't even fucking add two numbers. It doesn't actually help with internal, brownfield projects with complex business logic and custom internal integrations. It can't summarize legal documents or answer technical questions without completely making crap up. It's just sparkling auto-complete.
My work: I've been put on an LLM project. The purpose? Dunno. The goal? Dunno. Our VPs are on the hype train and I'm along for the ride as long as my paycheck keeps clearing (and interviewing in the meantime). They're literally taking tech and trying to find a use for it. It's just the blockchain hype all over again.
I no longer search Google for programming questions. 8/10 times GPT-4 gives me a code snippet that I can copy-paste without any modification. It is like Stack Overflow on steroids. I can also discuss different system design tradeoffs with it.
It will take longer than some "evangelists" anticipate, but unlike a lot of tech hype cycles (like web3), this one is different for 3 reasons:
a) It's firmly in the public mindspace, including mass media
b) It has already proven its usefulness and applicability to real world problems
c) Politics are taking it very seriously
There are people who don't use it, for various reasons, but it is increasingly hard to never have heard of it. A large part of that drive is due to the simple accessibility: making ChatGPT a convenient and simple web app that appeals to non-techies was a brilliant move, and Microsoft driving integration into its product suite will further adoption as well.
I don't think this is unusual. Adoption of iPhones and GUI-based computers was that way too. People who had those things were by definition more active on them, but not necessarily getting more done.
There could be some self selection going on here. Someone who was really fluent at the old way might not see such a boon from the new way.
I confess to being among the ambivalent, but I see it being used around me, chatted with people about it etc., so I'm not ignorant about it.
I talked to a blue-collar worker (has his own business) the other day and he was in some Telegram group that "leveraged AI for marketing". You ever wondered who the target audience for these thin bullshit AI marketer wrappers on top of ChatGPT are? Or the courses and "mastermind" groups on how to write marketing prompts? Apparently, blue collar workers with a small business who want to save on actual marketing and don't have the expertise to realize the downsides.
This might not be a bad market; if you're not experienced with writing things yourself and just need some text and don't want to find/pay a writer, it will do an acceptable job. It's like upgrading from a badly handpainted sign to a nicely printed one.
Whether this is going to generate enough revenue for OpenAI is a different question.
For instance, hallucinations. They're a function of LLMs. But they're also something that everyday people have to deal with from each other in the miasma of the post-truth world.
What an opportunity, within the artificial intelligence field, to think of these tools not just as automation and fact-finding, but teaching how to avoid hallucinations in your own life?
- Personal finance
- Grammar, rhetoric, logic, reason
- Information literacy
- Media literacy
The list goes on...
I don't think so. Anecdotally, I have had numerous non-tech friends ask me about AI. The first thing they always seem to ask is whether it will harm people, to which I explain it's just a tool, and more specifically a big-data chat bot that can be manipulated just like the social media algorithms but gives more confident and realistic-sounding answers, using language models to simulate human mimicry. The risk would be based on which entities tune it and the fact it can't or won't show its work. A tool will do what a tool can do. The intentions of the tool operators and users determine what the tool will be doing.
Some of my non-tech friends are trying to find ways the tool can make them money, and I have no doubt they will come up with clever uses for it until such time that the walled garden around these tools becomes too cost-prohibitive for the average person to afford. After the masses have helped tune the tools, I suspect they will be exclusively for large corporations and government entities to rent. I predict that prior to the walls going up, the public interfaces may appear to have lower-quality results so that there isn't an uproar when the interface becomes cost-prohibitive. This is why I warn them not to base an entire business on this service but rather to augment something.
Sadly I think you've contradicted yourself a small bit here, while also stumbling across the exact reason AI mainstream adoption will go quicker than anticipated: it'll happen without mainstream awareness.
I work in a security role and we're currently trying to do risk assessments of AI adoption by tech workers, and integration of AI APIs/services into our products, and it's not even a case of "how bad will it be", it's much more "how bad is it". AI is being used a lot by individuals in the workplace with very little discourse or auditing of that usage.
Even outside of tech, someone somewhere in IT administration was already seeing enough daily AI usage to warrant the drafting of this official guidance, for quite a non-tech-industry audience: https://twitter.com/marcidale/status/1645972869393047552
The echo chamber in tech is people thinking they're in an echo chamber. I mean come on, 90% of people in tech have never built an LLM in their lives they just read some laymans article on it like everyone else and harp on how it's not an AI and it's just a "token predictor" and other generic arguments like that.
What I'm saying is that MOST people in tech don't do AI and therefore their perspective on LLMs is equivalent to someone completely outside of tech. Every typical software engineer thinks because they took a small course on ML or they read a little bit about chatGPT and that puts them on a pedestal above an average non-tech worker... well I hate to break it to you, the non-tech worker can look up those articles too.
They can also play with chatGPT extensively which doesn't require any tech skills and they can literally see for themselves what a game changer the technology is.
There are regular articles on it in mainstream news. Most of the people I know have tried it. Some of them can't see it will be useful, and some of them already find it useful. Some of them are techies and some aren't. Some techies are in the set that don't think it is useful.
In this case it seems kinda the opposite: the enterprises are first off the bat - existing products getting easier to use by integrating LLMs into the workflow.
ChatGPT to a large extent also does seem to be moving search terms away from google. I would love to see trends on google.com vs ChatGPT vs Bing over a period of time - Search definitely seems to be changing, where we want summaries and content rather than a bunch of sponsored ads and websites where the context has to be derived from search.
I am also interested if the same is happening with StackOverflow -> the search on SO -> ChatGPT as a code generator.
For mainstream adoption, the adoption will be driven by the everyday software and applications we use. We are still in the shovels and picks part of the world, yet to see native AI consumer applications emerge (barring ChatGPT).
Saw an old friend of mine recently. He's smart - majored in math - but spent many years running a landscaping company. He was showing me a spreadsheet he made to show musical scales and the position of notes on a guitar. You could select a picking style and it would change the layout. Very complicated cell formulas - no code. I asked how he ever did all that. He said ChatGPT saved him countless hours. He praised it highly for the conversational style vs Google, which just gives links to generic information, and for the ability to refine your request to get better answers. It can also give formulas in some instances, but he tests them. He's taken up using it for other things as well.
To be honest though, he could have been a deeply technical person, but he's only interested in such things to the extent they can solve real-world problems. Not sure how many people fall in that category.
Technically, the answer is yes and no, but with the context here, no, we aren't in an echo chamber. People who are not in tech are routinely using ChatGPT and AI functionality. I know this because people who I know aren't in tech have told me how they are using it, and how many other people in their industry are using it.
The key here is what industry are they in, and can they make use of what ChatGPT provides.
And it's not just ChatGPT, but AI stuff in general.
The difference between this and crypto and web3 is that crypto and web3 required a LOT of explaining on how it could be useful. AI didn't, and simply said here is the input, and here is the output. That's it. They didn't suggest value. The value was easily apparent. It was tangible. Something that was real.
Does this mean every industry is talking about AI? I'm sure the answer is no. But every industry has its own echo chamber. That's immaterial.
> maybe AI mainstream adoption will take longer than we anticipate.
Everything always takes longer than we anticipate. And it's not like AI today is the holy grail yet. It's still slow, expensive, with too many errors, and we are less than a year into the real hype. Legal systems are busy working out the corners and acceptance. We are still in the exploring phase, and this can go on for several years. There will be significant changes in the next years, but the big bang is still ahead of us, and until then it will be a locally fast but globally slow change.
So yes, your friend is unusual, especially as a millennial, where even more have heard of it and used it. When you're asking a statistics question, make sure you cite statistics instead of asking people to write comments from their toilet.
Also: "Just 14% of U.S. adults have tried ChatGPT". I think we need to take a step back and consider how much the bar has been raised. "Just" 14% of Americans? This is an incredibly high number.
[1] https://www.pewresearch.org/short-reads/2023/05/24/a-majorit...
My In-laws talk about chatGPT (lawyer and store owner) my parents (doctors) ask me about ChatGPT, my colleagues ask me about it re: its impact on film production, my siblings talk about it. Hell the WGA is talking about chatGPT as part of the strike.
Again, I obviously have my own little social bubble of somewhat like-minded people, but I feel like I’m talking about AI and ChatGPT every other day. I see articles on main stream publications at least weekly. It’s basically as well known as Bitcoin - if not more so - as far as I can tell. Very surprising to me this person still hasn’t heard about it!
I am not aware of that!
I am definitely inside the tech echo chamber but I do not use ChatGPT, will not do so, don't care about it, and frankly don't respect people who do. It is not a useful thing, and all the "AI" (it isn't) garbage that's popping up right now is equally useless.
I say this on HN often and people come back with "but it reached 1M users so fast!" I don't care. Millions of people smoke cigarettes, that doesn't prove they're useful.
I am going to sit this hype cycle out. Maybe it will eventually be useful for something. I am clearly not the one to try to imagine what that will be.
Interestingly, there's a big age component in this poll, and rather than being a smooth gradient like most tech, there's a very strong divide at 45: those younger than 45 are over 4x as likely to report they often use AI tools as those over 45.
https://docs.cdn.yougov.com/ifywkae5dt/econTabReport.pdf#pag...
We already had AI in our daily lives, such as translation, object recognition, voice assistant, spam filter, fraud detection systems and a lot more.
Most of these are old and boring and yet deeply embedded in our daily life.
The hype bubble is on the new LLM and generative art AI tech, which are backed by a lot of money, so a lot of hype is generated. Without hype, corporations can’t sell us on novelty but once their pockets are filled, they exit and these eventually cool down when something new comes along.
However, with each novelty we are usually left with some bits and pieces of useful residual tech that give us a little improvement in our daily life, so every hype cycle leaves something useful behind.
My own personal opinion is that depending on your particular area of expertise, you will have various levels of resolution of knowledge. We might be in an echo chamber on AI here, but many of us aren’t even real specialists on the subject. So, we fall prey to various cognitive biases [1]
[1] https://en.m.wikipedia.org/wiki/Curse_of_knowledge
Regarding your friend - a few technological revolutions occurred without the general population really knowing anything about how they work. New tools show up, people use them.
There's a lot of overhype, A LOT, and many startups are reinventing the wheel in a more expensive way just to ride the wave. But unlike crypto, LLMs are tools that can solve more "dev/user"-level problems. Crypto was supposed to solve a "global financial system" problem, and since that was too hard to grasp, people used it as an investment, because that was closer to their real lives and needs.
An echo chamber would mean everyone having the same opinions, and that's not the case here, at least speaking to tech people broadly.
I think no. I think most people will use the products without even hearing about GPT or LLMs. There will be products that are based on LLMs, but they will have a nice and easy-to-use UI which will be not much different from any other usual UI, except with more advanced capabilities. People will just use them without realizing what is behind those UIs.
I think, because there is no need for the user to know what is behind the UI, adoption of new LLM-based AI will be much faster than the other new technologies we've seen so far.
So this case where you are talking to people outside the bubble, and they don't realize what is going on, IS THE PROBLEM.
Outside the bubble, people will be seeing images, video, text, voice, that is all fake and manipulated, and THEY ARE NOT PAYING ATTENTION, they don't know it is fake. Inside the bubble we are noticing it, outside the bubble, large chunks of population don't know it is happening.
This tech is in its infancy, and the whole "prompt engineering" thing even though fun and all, will eventually be unnecessary, and _then_ it'll be widespread.
I've even spoken to friends IN the tech industry who are not using it, at least not to the extent a lot of people are using it.
And even I am not using it as much as I would like, but then I don't have enough free time.
All in all, just give it time. In my case, I hardly use Google these days.
Especially college students
I think we just witnessed something akin to the birth of personal computers which puts us around the ‘60-‘70s. It’ll take a while to (vastly) improve and eventually disseminate into society. I’d say a decade at least.
A lot of the surrounding “infrastructure” is still missing, like in the olden days. Techies are scarce and needed for other things. Energy and thus compute is expensive and society still has some important open questions left to answer about its own stability if it wants to survive this next wave of evolution.
I've been in the process of moving so I haven't read through many of the responses, but it seems as though there was plenty of interaction between others which is cool!
I'll definitely review everything I can over the next week and follow up, I'm genuinely interested in understanding how others perceive this situation as well.
The worst part is the current state though, where all the crypto hype bros have come out of the woodwork and are rebranding as AI hype bros now. My Linkedin feed is sadly now full of idiots calling themselves "AI Expert" and similar such terms with made up resumes trying to sell their 'expertise'.
Common reactions to ChatGPT and a lot of the fear are definitely overemphasized in the tech field, but that makes sense.
… so my take is that it takes a bit of time to leverage these new tools, which popped into existence only a few months ago.
I do think the integrations available in many of the tools I encounter are probably a better signal of potential and impact. We are in year one. I suspect LLMs will continue to evolve and will still have market legs in a decade.
I would say it will mostly depend not on the public hype around it but on the pace at which it will acquire capabilities. If it stays at GPT 4 level, it will be gradually used to automate various tasks over the years.
If on the other hand it keeps the progression GPT 2 -> 3 -> 4, then by the time we are at GPT 6, it will be able to easily replace a wide range of jobs.
Please share products that exist today that leverage ChatGPT and are useful. A product that could not exist without ChatGPT. A product with actual users, where ChatGPT is doing productive work.
ChatGPT is nothing more than a tech preview or fun experiment that will pave the way for the future. However, it's still confidently wrong, easily confused and incredibly filtered.
I'd love to see a product with tangible results.
It's going to be terrible in a few years time when you try to cancel a subscription or raise a complaint with a company and they throw an AI in front of you. Most call centres are designed to waste your time so you go away, we are going to see impressive new frontiers of absolute bullshit form.
AI, though, struck a nerve because people intuit a killer app. Or more precisely, AI could potentially be applied in many different places. The last time we had something that felt like this was probably the web browser; at least for me, it was.
I think it is because we are now really close to what the non-technical mainstream public thinks computing should be — “do what I mean, not what I say”.
Anyway, call it AI, ML, statistics, the truth is: all these algorithms are helping us get rid of the boring stuff in tech.
e.g.: I won't waste my time reading regex documentation to learn how to remove every white space after a combination of characters, because I'll completely forget it two days later. And the examples go on and on.
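To be concrete, here's the sort of throwaway snippet I mean; a minimal sketch where the "=>" marker and the sample string are made up purely for illustration:

    import re

    # Hypothetical example: drop any whitespace that immediately follows "=>"
    text = "key =>   value, other =>\tthing"
    cleaned = re.sub(r"(=>)\s+", r"\1", text)
    print(cleaned)  # key =>value, other =>thing

I'd never remember that pattern two days later either, which is exactly the point.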
Maybe, but like, ChatGPT has broken outside the echo chamber. NPR and Marketplace have had plenty of stories on it. My parents probably haven't heard of it, but they are retired, veg out on soap operas and game shows, and avoid anything that sounds like news.
My sister in law (an orthodontist) had chatGPT draft a job ad when hiring an assistant.
My wife just graduated with a non-tech master's degree. At her graduation, the head of the program made humorous references to ChatGPT and cited it on a few topics in her speech.
To be honest it's a no-brainer. It's helpful tech with a low barrier to entry, and free. People will use it.
So, are LLMs as popular outside of HN and similar communities - no. Are people outside of them interested - definitely yes.
I give it some restrictions and a few preferences and ask it to suggest 20 things for me to cook. It's very helpful.
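For what it's worth, the same thing works over the API too. A minimal sketch, assuming the openai Python package (the ChatCompletion interface) and an API key in the environment; the restrictions and preferences below are just placeholders for my real ones:

    import openai  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder constraints; in practice I list my actual restrictions and preferences.
    prompt = (
        "Suggest 20 dinner ideas. Constraints: vegetarian, no mushrooms, "
        "under 45 minutes. Preferences: spicy, works in one pot. "
        "Return a numbered list, one line per dish."
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp["choices"][0]["message"]["content"])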
Some family have heard of ChatGPT and have misunderstood what it can do, so I’ve had fun showing them the limitations of it.
If I have to anthropomorphise the gpt3.5 prompt, "it" is an average-intelligence intern who can only google things up to the 31st of March 2021 and has several insecurities and character flaws. It is unable to follow a plan, needs guidance at every step, repeats itself constantly, and sometimes makes things up as it goes.
If you can export a UI with defined behaviors from Figma to React / NextJS, and AI can figure out how to code it, then Front End Developers become a thing of the past.
If there's a similar modelling tool for Back End and AI can code the exported design, then BE devs become obsolete.
It is irrelevant if lay people have heard of it or not - we're still out of a job.
For my own experience I use it every day, but the only thing it really solves is saving me time. I've seen lots of neat demos, hacks, agents etc and still haven't figured out what business problem LLMs solve besides time-saving.
She uses AI more than me, who's hunting for deals on GPUs to train my own DRL model.
I was honestly shocked when I found out. I had figured that it was a tech echo chamber too.
Crypto was more mixed from my recollection, a bit less ivory tower since the tech isn't that complex, and very prominent people in the tech itself weren't so readily making huge claims.
I’m seeing tech leaders saying things that make me concerned, like engineers saying Bard was sentient.
Plenty of them.
And I showed it to a few and they were immediately impressed by it.
I also overheard people talking about it on the street.
So in my opinion: no.
And ChatGPT has over 100 million users. People can use it in Bing every day.
Adobe added it to their products too.
People already benefit from AI.
My company is also working on introducing more features; it's just a matter of the time required to do it.
It has the potential to be a Google killer as it’s much more effective at sussing out specific information. But, the ideological guard rails OpenAI insist upon are super annoying. You can never fully trust that you’re getting a non-politically biased output.
Practically every mainstream comedy show I hear in the UK mentions ChatGPT; it's been constantly in the news over the last 6 months. Every high school child knows how to use it. Hell, there was a South Park episode a couple of months ago about ChatGPT, written by ChatGPT.
This isn't niche stuff.
The actual developers range from enthusiastic to skeptical. Actual developers take a practical perspective IMO.
I talked to her about prompt engineering for a little, but not sure that I really got through what I was talking about.
Personally? I think we're sleep walking into social disaster and this tech is only going to make people's lives more difficult.
--"You know it's free, right?"
"Yes."
I found this inexplicable and infuriating. Perhaps someone can shed light on the psychology of such people?
My 86-year-old father is far, far from Silicon Valley and his main activities are playing piano and talking with other residents of his retirement home, but he uses GPT-4 every day.
So far the recipes have been pretty good!
It’s fascinating. Though it probably isn’t intentional, AI service providers are already hooking kids early to have customers later.
I am bored of ChatGPT, AI this, AI that. I scarcely talk about ChatGPT with my peers. While I am certainly interested by AI and even dabbled with it in my MSc years, I found it tiresome to only read about it.
I kept thinking that the entire conversation sounded like two vocal transcription bots reading through r/AI and HN with a sprinkling of Twitter.
Yesterday I was having a drink with some friends, one of whom is like me in tech. The others in shipping, finance and sales. They all had experimented with ChatGPT and/or Midjourney.
Some people get it, some people don’t.
And that’s OK by me.
I have an AI startup with thousands of active paying monthly users, and they are all marketing people, not highly technical.
It's making headlines left and right, and businesses are all trying to figure out what this stuff does, but if you're not watching much news and not on the tech side of business, you probably don't know or care?
…like what you’re doing right now.
The things it is good for are being obscured by promises of a waterfall of tasks performed entirely by AI, somehow ending up in something usable. If you tell me you have a neat programme that does exactly what you expect 99.4% of the time, yet you have no knowledge of why the 0.6% fail, you will have to demonstrate that it has some very desirable properties not offered by other solutions. Hammer, looking for nuts. And what is nuts is people taking £100 mill to wrap some langchain... the burn rate to get noticed is going to be half your spend, the other half is subsidising loss-making operations on the idea that there is some pie to win here.
It's a novel database. Fun, creative, unpredictable in interesting ways, like my cousin.
Generative AI is a huge topic of discussion amongst virtually all creatives (painters, graphic artists, musicians, authors, journalists), in business (at all levels, largely through business process management, finance, and business intelligence), amongst media (journalism and entertainment), amongst governments (regulation, electoral politics, international relations, military / strategic risks, intelligence, competitiveness, impacts on general employment and social stability), amongst the technological boomer and doomer communities (impacts on future technological development will likely be profound, though agreement on the sign bits differs), and more.
It's true that the general person on the street likely has little sense of the potential and risks, but that is virtually always the case with new technological developments. Potential impacts are always hard to see, and the discussion about these almost always tends toward various elites (technological, business, government, academic, religious). And that's the case now.
But the conversation and concern is not limited to the information technology elite, by any measure.
It’s now just another tool in life’s toolbox.
( No no no nooo noooOooo noooOooOoooo )
I also think adoption will depend on "industry." ChatGPT in education for example:
* BestColleges [1] (n=1000) found 43% of college students have used ChatGPT, and 22% have said they used it to help complete assignments or exams.
* Study.com [2] (n=1100) had some crazier numbers. "Over 89% of students have used ChatGPT to help with a homework assignment. ... 48% of students admitted to using ChatGPT for an at-home test or quiz, 53% had it write an essay, and 22% had it write an outline for a paper."
* Interestingly, in K-12, adoption appears to be higher by teachers than students [3]: "Within two months of its introduction, a 51% majority of teachers reported using ChatGPT, with 40% using it at least once a week, and 53% expecting to use it more this year. Just 22% of students said they use the technology on a weekly basis or more."
* In Japan, a recent survey [4] (n=4000) of undergraduate students conducted by Tohoku University showed 32.4% have used ChatGPT. This compares to about 7% of office workers in Japan using ChatGPT on the job [5] (n=13814) in a recent poll by MM Research Institute.
Of course, education isn't the only industry with an outsized impact (although it's interesting in the sense that it's a good temperature check for the upcoming generation, especially if it's something that is so prevalent at colleges/universities).
But there are other industries as well. a16z Games has been doing a game development survey on Generative AI use in games, and their preliminary results [6] are in-line with my personal experience/view into the game industry - that it has already been completely re-aligning/disrupting the production pipeline: "We heard from 243 game studios - large and small - the results were astonishing ... 87% of studios use an AI tool or model in their studio TODAY. 99% of studios PLAN to use the technology in the future."
I think it's worth noting that, while the sources are dodgy, even if the 100M user number is accurate for ChatGPT, that's only ~2% of global internet users and a little over 1% of the world population. You could probably confidently say that close to 99% of the world population has not directly used a generative "AI" product yet (obviously anyone who's used a mobile phone or the internet has been interacting with ML for years). I do think this is going to change very rapidly, but no matter how fast it goes, it won't be overnight.
[1] https://www.bestcolleges.com/research/college-students-ai-to...
[2] https://study.com/resources/perceptions-of-chatgpt-in-school...
[3] https://www.waltonfamilyfoundation.org/chatgpt-used-by-teach...
[4] https://www.asahi.com/ajw/articles/14927968
[5] https://asia.nikkei.com/Business/Technology/Use-ChatGPT-at-w...
[6] https://www.linkedin.com/posts/troykirwin_ai-x-game-developm...
Yes.
Here's how the adoption of this technology is going to go (this is the way all AI technology adoption has gone for 60 years):
1) Papers will come out showing how, by creating a more effective way to leverage compute + data to make a system self-improving, performance at some task looks way better than previous AI systems, almost human-like. (This already happened: "Attention Is All You Need")
2) The first generally available implementations of the technology, in a pretty raw form, will be released. People will be completely amazed at how this machine can do something that was thought to be a hallmark of humans! And by just doing $SIMPLE_THING (search, token prediction), which isn't "really" "thinking"! (This will amaze some people but also form the basis of a lot of negative commentary.) (Also already happened: ChatGPT, etc.)
3) There will be a huge influx of speculative investment capital into the space and a bunch of startups will appear to take advantage of this. At the same time, big old tech companies will start putting stickers on their existing products that say they're powered by LLMs. (Also already happened)
4) There will be a wave of press, first in academia, then in technology circles, then in the mainstream, about What This Means. "AGI" is just over the horizon, all human jobs are about to be gone, society totally transformed. (We are currently here at step 4.)
5) After a while, the limits of the technology will start to become clear. A lot of the startups will figure out that they don't really have a business, but a few will be massively successful and either build real ongoing businesses that use LLMs to solve problems for people, or get acquired. It will turn out that LLMs are massively, massively useful for work that was previously thought to be nearly impossible, or at least contingent on solving the general AI problem: something like intent extraction, grammarly-type writing assistants, Intellisense on steroids, building natural chat interfaces to APIs in products like Siri or Alexa that understand "turn on the light" and "turn on the lights" mean the same thing (see the sketch after this list). I have no idea what the things will actually be; if I was good at that sort of thing I'd be rich.
6) There will be a bunch of "LLMs are useless!" press. Because LLMs don't have Rosie-from-the-Jetsons levels of human-like intelligence, they will be considered "a failure" for the general AI problem, once people get accustomed to whatever actually amazing things LLMs end up being used for, things that seemed "impossible" in 2021. Startups will fail. Enrollments in AI courses in school will drop, VCs will pull back from the category, and AI in general (not just LLMs) will be considered a doomed investment category for a few years. This entire time, LLMs will be used every day by huge numbers of people to do super helpful things. But it will turn out that no one wants to see a movie where the screenplay is written by AI. The LLM won't be able to drive a car. All the media websites that are spending money to have LLMs write articles will find out that LLM-generated content is a completely terrible way to get people to come to your site, read some stuff and look at ads, with terrible economics, and these people will lose at least hundreds of millions of dollars, probably low billions, collectively.
7) At this trough point where LLMs have "failed" and AI as a sector is toxic to VCs, what LLMs do will somehow be thought of as 'not AI'. "It's just predicting the next token" or something will become the accepted common thinking that disqualifies it as 'Artificial Intelligence'. LLMs and LLM engineering will be considered useful and necessary, but they will be considered part of mainstream software engineering and not really 'AI' per se. People will generally forget that whatever workaday things LLMs turn into a trivial service call or library function used to be massively difficult problems that people thought would require human-like general intelligence to solve (for instance, making an Alexa-like voice assistant that can tell 'hey can you kill the lights', 'yo shutoff the overhead light please?', 'alright shut the lights', and 'close the light' all mean the same thing). This will happen really fast. https://xkcd.com/1425/
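To make the intent-extraction example from step 5 concrete, here is a minimal sketch; the prompt, the JSON shape, and the call_llm helper are all hypothetical stand-ins for whatever model and client you'd actually use:

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any chat-completion call (OpenAI, a local model, etc.)."""
        raise NotImplementedError

    def extract_intent(utterance: str) -> dict:
        # Ask the model to normalise free-form speech into a fixed JSON shape.
        prompt = (
            "Map the user's request to JSON with keys 'action' ('on' or 'off') "
            "and 'device'. Reply with JSON only.\n"
            f"Request: {utterance}"
        )
        return json.loads(call_llm(prompt))

    # "turn on the light", "turn on the lights", "yo shutoff the overhead light please?"
    # should all collapse to something like {"action": "on", "device": "light"}.

The point is that the messy natural-language variation gets absorbed by the model, and the rest of the stack only ever sees the structured output.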
Sometimes when you see an amazing magic show, if you later learn how the trick was done, it seems a lot less 'magical'. Most magic tricks exploit weird human perceptual phenomena and, most of all, the magicians willingness to master incredibly tedious technique and do incredibly tedious work. Even though we 'know' this at some level when we see magicians perform, it's still deflating to learn the details. For some reason, AI technology is subject to the same phenomenon.
Do you think this doesn't get 100% better in the next few years, with the billions of dollars that are pouring in? Because a GPT-4 that is 100% better at generating useful content is a game changer. Now what if, instead of a 2x improvement, we see a 3x, 4x, 5x, or even 10x improvement in the ability of the technology?
All the people comparing the hype around AI with crypto have good pattern matching without any judgement. I personally never got into crypto. I'm very into AI.
What do you think?
(meaning "yes")