Additionally, these "AI apps are just a gold rush" comments are too dismissive of the potential. There may be some low-hanging fruit that gets picked first, but I personally think the vast majority of the opportunity requires some level of innovation, creative thinking, and grit to capture.
When people knee-jerk react negatively to these kinds of stories, I can only picture envy and insecurity about not truly knowing where AI will take the software field, despite the certainty displayed by many.
If you look around, you still have companies making millions with shitty SAP-like ERP software from the '90s.
The problem is that when you live in your bubble, you don't tend to notice the problems other people have.
For a man with a hammer, everything looks like a nail.
Edited to add: This is not an indictment of the college student in question! Congrats to him for being prepared to seize on an opportunity and executing well.
His product gives you a few thousand tokens for free, then makes you add your own OpenAI API key after that, so that he isn't paying for the chat responses from then on.
The opposite of a lottery.
> Congrats to him for being prepared to seize on an opportunity and executing well.
Those two statements seem incongruous.
That might or might not be a smart move in the current context, but the tragedy of students grinding Leetcode to get past obnoxious job gatekeeping needs to stop.
[EDIT] My point is, the practice doesn't continue as "gatekeeping" for its own sake, so if you want it to end, you'd need to address the reasons it's happening. There are obvious ways to solve the stated problems more cheaply and with less harm to candidates, which leaves unstated problems. The wage-suppression explanation fits pretty well (including with past, proven behavior; this is clearly something they're quite concerned about) and is one of the few things that could justify the expense of the current system versus cheaper alternatives that would solve the stated problems.
I think the DOJ frowns upon that kind of behavior.
My second thought was: This sounds like another idea that could easily be killed off by Google or similar companies, if they just add a "this is googlebert, your personalized analysis robot - just link your website/upload docs and ask away!" function.
Or this kind of functionality simply becomes part of the batteries included package for ChatGPT etc.
No way is being subsumed the only outcome.
These kinds of predictions are pointless IMO. Just look at how many products Google killed because there was no mass audience for them. And that's not even counting Google+.
Are there any ideas out there that could not be easily overtaken by Google or similar?
I'm running a somewhat similar website (https://Docalysis.com/) where users chat with files, and it's clear there's a lot of value being added, so people are willing to pay.
What's less clear is how this all plays out when there's more competition, but it's not like we'll all go out of business. It'll just be a bit harder, or you'll have to do things to differentiate. I'm planning on differentiating more, just using the current product as a starting point. Yasser probably is thinking along similar lines.
Good on him for making this much cash. Sure, right time and right place but you still need to be able to execute.
We have been adding LLMs to our products in a little test for clients who want to try it. Judging from the feedback, it's not a fad: it saves people a lot of work and lets juniors at our clients do far more. I believe it retains clients and will attract new ones. So when the LLM fads die out, the products that have market share and add them (B2B, anyway) will benefit a lot.
Many of these companies are throwing money at these kinds of early-mover products, but are likely to stop doing that as people start to get wise to just how useful all this actually is, and for what.
[EDIT] This is not to claim that AI-based products won't find actually beneficial uses in various companies, but a lot of these trivial get-to-market-fast companies are raking in quite a bit of money from hype-chasers, and that money isn't going to stick around long-term.
Pre-2021: Useful semantic search is widely deployed, used for recommendations and sales. OpenAI has an Answers endpoint specifically for this technical use case, with full docs, and many companies implement this internally.
Mid 2022: People like me experiment with GPT answering using semantic context on dynamically uploaded content, e.g. https://www.youtube.com/watch?v=7V3VkNj2bag - but cannot get approved for this use case by OpenAI.
Nov 2022: Until now, OpenAI had banned open-ended responses and chat-like interfaces. Explainpaper is the first tool that lets you chat with your PDFs across multiple questions and works on many types of PDFs; released in November IIRC and viral on Twitter: https://twitter.com/rauchg/status/1596220185727275008
December 2022 to January 2023: 15+ chat-with-PDF tools are launched, including pre-hyped launches on Product Hunt from pivoting products. January onwards: almost daily launches, sometimes with USPs. I launched AnyQuestions.ai, which was at the time the only question tool able to work with videos [1]. Chatbase catches virality on Twitter and makes some other sound decisions [2].
February 2023 onwards: No platform launched since February has succeeded.
[1] But it took a long time to process with Whisper, and the interface was more answer-focused than chat-and-response-focused; I also made bets on including a larger context and using non-typical embeddings (looking for entailment/contradiction of the sentence as well as semantic similarity), which turned out not to be the right commercial choice.
[2] Allowing embedding on websites, and sharing the Chatbase branding that way was powerful, as well as being able to easily work for new websites as a quick solution. It spread easily through competitor FOMO and was a fast product.
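The retrieval core behind all of these chat-with-your-docs tools is the same idea: chunk the document, embed the chunks, and pull the chunks most similar to the question as context for the model. A minimal sketch, assuming the vectors come from some embedding model (e.g. an embeddings API); the function names are my own illustration:

```python
# Minimal sketch of embedding-based retrieval for "chat with your PDF" tools.
# Vectors are assumed to come from an embedding model; plain cosine similarity
# is used here (footnote [1] above describes betting on other signals too).
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_chunks(query_vec, chunk_vecs, chunks, k=3):
    """Return the k document chunks whose embeddings are closest to the query."""
    scored = sorted(zip(chunks, chunk_vecs),
                    key=lambda cv: cosine(query_vec, cv[1]),
                    reverse=True)
    return [chunk for chunk, _ in scored[:k]]
```

The selected chunks are then pasted into the prompt as context for the chat completion, which is why run cost scales with context size, and why the 10x price drop mentioned below mattered so much.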
But that means they are uploading customer data to OpenAI's servers, right? Umm, I wonder about the legality of that if you don't mention this point in your legal terms (I quickly searched for "OpenAI" and couldn't find anything).
[0] https://www.mosaicml.com/blog/mpt-7b (MPT-7B, from Mosaic)
So sure, be shameless, but don’t be surprised when your copy/pasted version of the example doesn’t take off.
OpenAI had a serious focus on this that did see adoption; as early as Spring 2021 this was in their docs (linked by others), but more crucially, they quickly added an Answers endpoint (https://platform.openai.com/docs/guides/answers) specifically for this use case (querying against uploaded files).
The creator of this project did almost everything perfectly, for sure. They had speed, UI, a converting page, and found virality. But they also had timing: OpenAI had banned this open-ended use case until then; they got lucky that people did not develop Explainpaper competitors; and ChatGPT improved quality while cutting run costs by 10x right when they came into this area. Many companies that, forewarned of these changes, would have taken its place were not trying to enter the market (OpenAI approval was hard, and before December 2022 models of decent quality literally cost 10x as much, making it unprofitable).
"by losing $2 million"
"We'll make it up in volume": famous last words.
Honestly, that entire interview just reveals how much of that poor guy's thinking is already infected by the virus called capitalism. Accumulating personal wealth shouldn't be the end goal here. It's short-sighted and perpetuates the very system that created inequality and injustice in the first place.
Instead of focusing on personal gain, why not channel that energy into organizing and fighting for a more equitable and just system? Learn about alternatives to capitalism and push for systemic change that benefits everyone, not just a select few who manage to "take advantage of opportunities."
Capitalism thrives on people thinking they can get rich quick, while most end up struggling and the rich get richer. By pursuing this goal, you're just playing into that system. Focus on the greater good and work towards a future where everyone has a fair shot, not just the ones who keep up with AI or whatever the next tech trend will be for the growth imperative to exploit.
More broadly speaking, any number of systems could compensate people for innovation and problem solving (note I don't think competition should be rewarded in its own right, and risk taking isn't necessary in many systems) by providing them an elevated quality of life. The main difference to our capitalist system today is that rather than trying to accumulate as much personal capital as possible, value created from innovation would only go to the individual as needed to incentivize them. The rest would go to improving the world and lives of the people around them. Of course this requires the individual have some agency in that improvement, which we find lacking in the Soviet Union or other failed communist states.
An alternative system could reward all stakeholders equitably, instead of just investors. The larger set of stakeholders include employees, customers, even members of the community that don't actively participate in the business (but they sure are affected by the business' success--witness traffic problems in Seattle caused largely by the growth of Microsoft and Amazon).
It's a brilliant trick the capitalist class has played on us (and apologies to those of you on the capital side--it's probably not your individual fault), to get us to believe that the only way to get competition and innovation is to take their money and give them all the profits.
I'm sure for someone at his level of technical ability there are plenty of companies that would offer him an internship and that don't actively make the world worse for many people, like Facebook does. Or he could work to unionize and give the workers at such companies more leverage, and then use that power to fight against some of Facebook's more predatory business practices. Facebook the company is nothing without its developers, but the working class would not be nothing without Facebook.