LLMs are too trivial to be expensive
EDIT: I presented the statement wrongly. What I mean is that the use cases for LLMs are trivial things, so they shouldn't be expensive to operate.
Also, when I use Cursor I have to watch it like a hawk, or it deletes random bits of code that are needed or adds extra code to repair imaginary issues. A good example: I used it to write a function that inverted the axis on some data I wanted to present differently, and then added that call into one of the functions generating the data I needed.
Of course, somewhere in the pipeline it added the call into every data generating function. Cue a very confused 20 minutes a week later when I was re-running some experiments.
And the $1 cost in your case is heavily subsidized; that price won't hold up for long, assuming the computing power stays the same.
For $1 I'm talking about Claude Opus 4. I doubt it's subsidized - it's already much more expensive than the open models.
Personally, until models comparable to Sonnet 3.5 can be run locally on a mid-range setup, people need to be wary that the price of LLMs can skyrocket.
And obviously consumer hardware is already being optimized for running models locally.
There is a load-bearing “basically” in this statement about the chat bots that just told me that the number of dogs granted forklift certification in 2023 is 8,472.
I'm unhappy every time I look in my inbox, as it's a constant reminder there are people (increasingly, scripts and LLMs!) prepared to straight-up lie to me if it means they can take my money or get me to click on a link that's a trap.
Are you anthropomorphizing that, too? You're not gonna last a day.
Total exaggeration, especially given that Cloudflare provides free tools to block AI and now tools to charge bots for access to information.