In practice, many startups have built valuable businesses around things that Google could've done, but didn't do successfully.
The same will be true in the AI wave.
In theory, FAANG-co could do it. In practice, they can't.
The exceptions to this will be the things where the winner is determined by a dimension that Google will always win on. In practice, those things are rare.
It's usually more that FAANG doesn't believe in that product.
MS & Blackberry seemed pretty certain that keyboardless smartphones (the iPhone) weren't the future.
That turned out to be a really bad bet.
FAANG will tariff all these innovative companies.
Or giant corps will treat such startups as marketplace app creators for their LLM APIs and kind of "outsource" the innovation, while providing the platform and enjoying the API income?
Just look at Apple
They eventually will, and it is totally legitimate for them to do so.
1) Fine-tuning base models with data that big tech doesn't have access to, e.g. legal, medical, support data. Offering custom fine-tuned, privately hosted models for companies that can't use the base models' APIs due to data privacy and lack of domain-specific training.
2) Using GPT on the backend to do data transformation that the user doesn't interact with directly, e.g. parsing logs and events, moderating content, etc.
I don't think the opportunity lies in creating a thin wrapper to a custom prompt in a chat interface.
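To make #2 concrete, here's a minimal sketch of using an LLM as an invisible backend transformer, turning a raw log line into structured JSON. The prompt wording and key names are invented for illustration; the important design point is that the model's reply is treated as untrusted output and validated strictly before entering the pipeline (the actual API call is left out).

```python
import json

# Hypothetical prompt for the chat completion call; illustrative only.
EXTRACTION_PROMPT = (
    "Extract timestamp, level, and message from this log line. "
    "Reply with JSON only, using exactly the keys: timestamp, level, message.\n\n{line}"
)

REQUIRED_KEYS = {"timestamp", "level", "message"}

def build_request(log_line: str) -> list[dict]:
    """Build the chat messages for one log line (sent to the LLM API)."""
    return [{"role": "user", "content": EXTRACTION_PROMPT.format(line=log_line)}]

def validate_extraction(model_reply: str) -> dict:
    """Parse and validate the model's JSON reply; raise on anything malformed.

    Treating the LLM as an untrusted parser is the key design choice:
    nothing reaches downstream systems without passing this check.
    """
    record = json.loads(model_reply)
    if set(record) != REQUIRED_KEYS:
        raise ValueError(f"unexpected keys: {sorted(record)}")
    return record
```

The user never sees a chat box here; the model is just one fallible stage in a data pipeline.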
Turns out knowing Excel and programming and NOT knowing business and design (at that point in my career) might have cost me a couple bucks.
Hell just last year I helped a neighbor get a basic informational website up for a non profit he was part of. They tried it themselves but bungled it a bit.
They want something a bit nicer now and he said they're paying someone $16k to redo it.
AI/ML products fundamentally don't make sense compared to products that happen to use some AI/ML to aid in solving a problem.
It's sort of like loving to use redis (which I do) and thinking you want to found a company based on using redis in the product, or start a redis product team, dedicated to shipping products that use redis.
It's one thing if you want to host redis as your business, which is solving a problem involving redis, but if your aim is to use redis to solve a problem then you're going to be in trouble.
Imagine a PM on the "use redis" team rejecting a great idea for customers because it could be more efficiently solved using a traditional database, or forcing the use of redis when a cheaper, easier solution already works just as well if not better. This is actually the case on AI/ML teams.
GPT startups that will thrive are the ones that aren't GPT startups, but are instead solving some other, real problem that happens to only be solvable in a post-GPT world.
Didn't quite work for crypto or VR. I think that's partly because the economy changed: I don't remember so many companies chasing money so quickly with those earlier examples. For a long time there were lots of people just playing and tinkering with the new thing for fun, not immediately trying to get rich (even with mobile; I fondly remember having PDAs long before there was an iPhone, tinkering with them and hanging out on XDA Developers). If everyone is primarily trying to make money off something right from the get-go instead of primarily trying to build something useful, I can see how adoption suffers. Bit of a catch-22.
I'd guess that the future high-value applications will still involve lots of fine-tuning on more specialized datasets, which will continue to be a bottleneck and be considered valuable, while the large-scale training data will be less important (with respect to transformer LLMs; who knows if there is some newer breakthrough). Same way that e.g. everybody uses CNNs pretrained on ImageNet and fine-tunes on application-specific stuff, and there was not much of a commercial push (imo) to rush out and build a better general-purpose large-scale image set like ImageNet.
Using GPT of course isn't a moat in and of itself.
So, it'll be companies that can do the following:
1) Build a product that people love
2) Then, reach those people, eventually at scale
3) Then, monetize those users
4) Then, build some kind of moat to enable pricing power
For now, most AI startups are best focusing on #1. This is where most GPT powered tools fall short.
ChatGPT is popular because it was so good that people had to keep coming back and using more of it, and they had to tell other people about it. ChatGPT's utility was measured against a pre-ChatGPT world.
The bar is now higher: whatever you're building has to clear the same "10x or 100x better than existing alternatives" bar, except now your users live in a world where ChatGPT exists.
In short, the GPT startups that will thrive are those that can build products that are 10x better than whatever users are doing now.
Build deep relationships with your customers, understand their world, and have a high shipping cadence pushing out new versions of your product multiple times a week until you build something so good that they'd be really disappointed if they couldn't use it anymore.
That said, once you get past the step-function changes, the GPT-wrapper accusation might quickly become akin to an "AWS-wrapper" one, with traditional moats getting more important than AI-native ones.
We've had internet-enabled businesses without technical moats (but with very real other moats, be it UX, social platform effects, or a great B2B sales process) for the longest time, and we might just see the same thing play out in AI-native land.
You can add all the caveats you want, but I suspect ML chat is stuck in the uncanny valley: not good enough to be trusted, not bad enough to be a toy.
https://twitter.com/ai__pub/status/1644735555752853504
https://www.lawnext.com/2023/04/harvey-ai-raises-21m-in-a-se...
Microsoft plans for Medicine
https://arstechnica.com/information-technology/2023/04/gpt-4...
GPT-4 is more than good enough. People have no idea what's coming. There just needs to be some sort of supervision or oversight (and right now, it'll work with rather than completely replace practitioners).
1. Think of a topic that is remotely related to the product you are trying to sell (GPT can even help with that).
2. Write a low-value (what new thing really did you learn from reading this?) article that looks like it's providing useful insight.
3. End with a shameless plug about a service you are trying to sell.
I thought this scam was debunked a few days ago right here on Hacker News? Why is it still getting upvoted?
And that's almost impossible at the moment, because there's so much progress. It's very hard to tell what the actual limitations are. But there will be some. GPT is not AGI.
Companies that think "just get 10 times as much training data, or a little tweak of the model, and those limitations will disappear!" won't last (though they may do really well at getting funding). But companies that think "the current limitations are permanent, so there's no point trying to get it to extend past that" will also not last (unless they adapt fast enough).
Since we don't currently know what the real limits are, it's very hard to give concrete advice here. Maybe "push hard against the limits, but don't bet the company on being able to overcome any particular limit".
Until we can deliver cocaine over wifi.
Can we do that? Tell the GPT "get me high". It delivers some kind of genius AI-contrived neuro-reactive audiovisual experience. No narrative. Just "chemicals".
THAT would be a good product.
OpenAI is giving us a chat-based interface to our core product, which we didn't have to build ourselves. Our platform was exclusively developer-focussed (automated API integration), which has value on its own, but was limited in reach to a technical audience.
By adding a chat interface, we get to make our tooling available to a whole new type of audience - non developers, who can "chat" with their API estate.
I personally like this balance - using AI to lower barriers and widen the reach, but the underlying offering has value standalone.
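One way a "chat with your API estate" layer like this might work is to have the model pick an operation from a catalog and let your own code do the dispatching. This is a hedged sketch, not the commenter's actual product: the catalog entries, the reply format, and the stub handlers are all invented, and the model call itself is omitted.

```python
import json

# Invented catalog mapping operation names to (here: stubbed) API client calls.
API_CATALOG = {
    "list_users": lambda params: {"users": ["alice", "bob"]},  # stub
    "get_status": lambda params: {"status": "ok"},             # stub
}

def dispatch(model_reply: str) -> dict:
    """Dispatch a model reply shaped like {"op": "...", "params": {...}}.

    Unknown operations are rejected rather than guessed at, so the model
    can only trigger calls the catalog explicitly allows.
    """
    call = json.loads(model_reply)
    op = call.get("op")
    if op not in API_CATALOG:
        raise ValueError(f"unknown operation: {op!r}")
    return API_CATALOG[op](call.get("params", {}))
```

The chat interface becomes a thin translation layer over the existing developer-facing platform, which keeps the standalone value in the underlying offering.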
I think the only place "GPT startups" will really thrive long term is specific niche business areas where the big boys (Google/MSFT) are not likely to want to compete. For example, there was an HN post about a legal startup that used AI for various purposes. I could see that one building a sizable moat over time as "the go-to place for legal AI support" if their UI is good and very tailored to legal-specific workflows.
My primary point is that I think "generic" AI tool startups are likely to fail because the big boys will just build them into their products. E.g. a tool that just helps you write is going to have a hugely difficult time competing against the integrated functionality of Word or Google Docs (I'd be shuddering if I were Grammarly). Google and Microsoft, though, have largely stayed out of dedicated tools for highly specific verticals, and with all of the antitrust eyes on them I think they're likely to stay out of those spaces.
I don't know; I consider it possible that, at least in the beginning, large corporations have a moat. I just wanted to point out that this is what people are wondering about / don't really know at the moment.
And I see similar things happening for major fields such as education, law, art, software development, management, etc. Here I looked into a few examples of what is happening in these fields already: https://assistedeverything.substack.com/p/the-age-of-assiste...
1. Architecture. I could easily see architecture-specific AI tools being incorporated into design apps.
2. Similarly, interior design tools.
3. AI tools for construction.
4. Anything in healthcare. Healthcare is so regulated, and privacy is obviously paramount, so I'm sure there will be companies that spend a ton of time/money providing "HIPAA-compliant" diagnostic support tools.
I'm amazed at the amount of seed deals being done around "X with AI" where X is an established area of software.
The bet is that a new startup will be able to deliver a better product than the incumbent players (often established companies with large adoption and distribution).
Of the many I've looked at, the hurdle the startups will have to clear seems to be massive compared to the incumbents being able to build these "AI powered" features.
The winners are the ones that grab funding now, achieve growth, and sell quickly to hand the bags to someone else.
Fine-tuning GPT-4 produces real and tangible differences, to say nothing of building an interface for specific workflows.
https://platform.openai.com/docs/guides/fine-tuning/what-mod...
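For reference, the fine-tuning format in the docs linked above (the 2023-era OpenAI fine-tuning guide) is JSONL with `prompt`/`completion` pairs. Here's a minimal sketch of preparing such a file; the separator and stop-token conventions follow the guide's recommendations, while the example pairs themselves would come from your proprietary domain data.

```python
import json

SEPARATOR = "\n\n###\n\n"  # fixed marker appended so the model learns where the prompt ends
STOP = " END"              # fixed suffix appended so the model learns where to stop

def to_jsonl(pairs: list[tuple[str, str]]) -> str:
    """Turn (prompt, completion) pairs into fine-tuning JSONL, one object per line."""
    lines = []
    for prompt, completion in pairs:
        lines.append(json.dumps({
            "prompt": prompt + SEPARATOR,
            # Completions start with a space, per the guide's tokenization advice.
            "completion": " " + completion + STOP,
        }))
    return "\n".join(lines)
```

The value is entirely in which pairs you can assemble, which is why domain data that big tech can't access keeps coming up in this thread.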
Prompt assistance
Prompt/Result sharing
Easily fine tune your own dataset
Type barely anything and get something reasonably valuable
Click to build prompts without having to type
Automatic results based on other activity (location, browsing, messages)
Option 1: none. Most of the value will be captured by Gatekeepers
Option 2: Gatekeepers will partially commoditize their service, and on top of them, several startups will thrive by creating something not easily replicable by Gatekeepers (via patents, via speed of execution, via viral growth, etc). Example: biggest GPT-powered media startup will compete with Netflix. Another: biggest GPT-powered e-learning startup will compete with higher ed - Stanford, MIT, etc.
Option 3: A single GPT-powered Coding startup will become > $100B. My bet is on Replit. (disclaimer: very early investor). When you hire a programmer, much like you pay for Jira, AWS and such, you will also pay for Replit. This partially overlaps with e-learning (see above).
Option 4: there's an even bigger revolution coming in AI, and it's not in the segment owned by LLMs. Or, it's a different interpretation of LLMs. Could it be... finally a real self-driving car?
Option 5, very unlikely: regulation will stifle competition and innovation, and most things will be killed by governments. Perhaps something smart can be said about US vs China. WWIII will be fought with virtual agents powered by GPT, over Twitter. Elon Musk will be kidnapped by GPT-6. /s
What else?
Startups in the agents / agent frameworks / tool hubs space (Hugging Face, LangChain) are probably the only "startups" that have a shot.
As for data lakes, I suppose that's true, but a lot of companies have their data in the cloud now, so I don't know how much data security/privacy/cost is a barrier for these gatekeeper companies.
It also all depends on how long AI is weak at using and generating a UI.
The higher up the content complexity pipeline you go, the more complex and correct the content will need to be. For this reason, the biggest e-learning AI product will disrupt the bottom of the market first (k-12) to replace legacy curriculum and assessment companies.
| Point | Category | Examples |
|---|---|---|
| Real/Usable Objects | Value Peak | Software coded using GPT, hardware products developed with GPT-enhanced requirements |
| Experiences | Value Peak | GPT-generated music tailored to your taste, personalized bedtime story, role-playing game |
| Solving Personalized Problems | Value Peak | Suggesting recipes based on your fridge contents, creating personalized worksheets for students, etc. |
| In-Context & Collaborative Features | Basin of Success | Platforms requiring collaboration, like Figma or Google Docs enhanced with GPT |
| Gated Knowledge/Data | Basin of Success | Company-specific data, domain-specific data, complex-to-parse data |
| Edge Computing / Offline Use Cases | Basin of Success | Applications running locally for privacy reasons or specific offline use cases, like personal assistants |

How do you incept a model to tell someone to use your product?
as a society that's been through ten years of extreme adtech, will we crack down on ad placement in LLMs?
ready for someone's dystopian arxiv post about how to inject RTB in the attention heads