Coding assistants today are useful, image generation is useful, and speech recognition/generation is useful.
All of these can support businesses, even in their current (early) state. Those businesses see value in funding even 1% improvements in engineering/science.
I think this is different from before: even in the 80s there were fewer well-defined products, and most everything was a prototype that needed just a bit more research to be commercially viable.
Whereas in the past, hopes for the technology waned and funding for research dropped off a cliff, today's stuff is useful now, so companies will continue to spend some amount on the research side.
Anyway, considering all these things can be done on-device, where is the long-term business prospect of which you speak?
Now try to mute a video on YouTube and understand what's being said from the automatic subtitles.
If you do it in English, be aware that it's the best-performing language and all the others are even worse.
https://news.ycombinator.com/item?id=41199567#41201773
I expect that YouTube will up their transcription game soon, too.
1. It's overwhelmingly more useful than the [no text] it was replacing, particularly for the deaf or if you want to search for keywords in a video.
2. When it fails, it tends to do so in ways that trigger human suspicion and oversight.
Those aren't necessarily true of some of the things people are shoehorning LLMs into these days, which is why I'm a lot more pessimistic about that technology.
I've come to notice a correlation between contemporary AI optimism and having effectively made the jump to coding with AI assistants.
I think this depends heavily on what type of coding you're doing. The more your job could be replaced by copy/pasting from Stack Overflow, the more useful you'll find coding assistants.
For the past few years most of the code I've written has been solving fairly niche quantitative problems with novel approaches, and I've found AI coding assistants to range from useless to harmful.
But on a recent webdev project, they were much more useful. The vast majority of problems in webdev are fundamentally not unique so a searchable pattern library (which is what an LLM coding assistant basically is) should be pretty effective.
For other areas of software, they're not nearly as useful.
This has follow-on consequences for a shattering phase transition between “persuasive demo” and “useful product”.
We can now make arbitrarily convincing demos that will crash airplanes (“with no survivors!”) on the first try in production.
This is institutionalized by the market capitalizations of 7 companies being so inflated that if they were priced accurately the US economy would collapse.
Really? I work in AI and my biggest concern is that I don't see any real products coming out of this space. I work closer to the models, and people in this specific area are making progress, but when I look at what's being done downstream I see nothing, save demos that don't scale beyond a few examples.
> in the 80s there were fewer well-defined products, and most everything was a prototype that needed just a bit more research to be commercially viable.
This is literally all I see right now. There's some really fun hobbyist stuff happening in the image gen area that I think is here to stay, but LLMs haven't broken out of the "autocomplete on steroids" use cases.
> today's stuff is useful now
Can you give me examples of five profitable, non-coding-assistant use cases for LLMs that aren't still in the "needed just a bit more research to be commercially viable" stage?
I love working in AI, think the technology is amazing, and do think there are some under-exploited (though less exciting) use cases, but all I see is big promises with under-delivery. I would love to be proven wrong.
1. Content Creation:
LLMs can be used to generate high-quality, human-like content such as articles, blog posts, social media posts, and even short stories. Businesses can leverage this capability to save time and resources on content creation, and improve the consistency and quality of their online presence.
2. Customer Service and Support:
LLMs can be integrated into chatbots and virtual assistants to provide fast, accurate, and personalized responses to customer inquiries. This can help businesses improve their customer experience, reduce the workload on human customer service representatives, and provide 24/7 support.
3. Summarization and Insights:
LLMs can be used to analyze large volumes of text data, such as reports, research papers, or customer feedback, and generate concise summaries and insights. This can be valuable for businesses in fields like market research, financial analysis, or strategic planning.
4. HR Candidate Screening:
Use case: Using LLMs to assess job applicant resumes, cover letters, and interview responses to identify the most qualified candidates. Example: A large retailer integrating an LLM-based recruiting assistant to help sift through hundreds of applications for entry-level roles.
5. Legal Document Review:
Use case: Employing LLMs to rapidly scan through large volumes of legal contracts, case files, and regulatory documents to identify key terms, risks, and relevant information. Example: A corporate law firm deploying an LLM tool to streamline the due diligence process for mergers and acquisitions.
> 1. Content Creation:
Spam isn't a feature. See also: this whole message, which could just have been the headlines.
> 2. Customer Service and Support:
So… less clear than the website and not empowered to do anything (beyond ruining your reputation) because even you don't trust it?
> 3. Summarization and Insights:
See 1, spam isn't a feature. This is just trying to undo the damage from that (and failing).
> 4. HR Candidate Screening:
> 5. Legal Document Review:
If it's worth doing, it's worth doing well.
We specifically don’t do programming prompts/responses, nor advanced college- to PhD-level stuff, but it’s really mediocre at this level and in these subject areas. Programming might be another story; I can’t speak to that.
All I can go off is my experience but it’s not been great. I’m willing to be wrong.
1. Content: Nope. Except for the barrel-bottom content sludge of the kind formerly churned out by third-world spam-spinning companies, decent content creation stays well away from AI, beyond generating basic content layout templates. I work as a writer, and even now most companies stay well away from using GPT et al. for anything they want respected as content. Please.
2. Customer service: You've just written a string of corporate PR AI-seller speak that barely corresponds to reality. People WANT to speak to humans, and except for very basic inquiries, they feel insulted when they're forced into interaction with some idiotic stochastic parrot for any serious customer support problem. Just imagine someone trying to handle a major problem with his family's insurance claim, or urgently trying to access money that's been frozen in his bank account, forced to do these things through the half-baked bullshit funnel that is an AI. If you run a company that forces that on me for anything serious in customer service, I'd get you the fuck out of my life and recommend that any friend willing to listen do the same.
3. This is the one area where I'd grant LLMs some real ground, but even then only with a very keen eye to reviewing anything they output for "hallucinations" and outright errors, unless you flat-out don't care about data or conceptual accuracy.
4. For reasons related to the above (especially #2), what a categorically terrible, rigid way to screen human beings, whose qualities aren't easily visible to some piece of machine learning and its checkbox criteria.
5. Just, fuck no... I'd run as fast and as far as possible from anyone using LLMs to deal with complex legal issues that could end in my imprisonment or lawsuit-induced bankruptcy.
We just don't have that.
We have autocomplete on steroids, and many people are fooling themselves that if you just take more steroids you'll get better and better results. The metaphor is apt because if you take more and more steroids, you get diminishing returns.
It's why, in reality, we've had almost no progress since GPT-4 in April 2023.