You know the saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.
I ask the LLM to do something simple but tedious, it does it spectacularly wrong, and then I get pissed off enough that I have the rage-induced energy to do it myself.
This has been the biggest boost for me. The number of choices available when facing a blank page is staggering. Even a bad/wrong implementation helps collapse those possibilities into a countable few that take far less time to think about.
The thing about ADHD is that taking a task from nothing to something is often harder than turning that something into the finished product. It's really weird and extremely not fun.
As an aside, I'm seeing more and more crap in PRs. Nonsensical use of language features. Really poorly structured code. But that is a different story.
I'm not anti LLMs for coding. I use them too. Especially for unit tests.
I've yet to find an LLM that can reliably generate mapping code between proto.Foo{ID string} and gomodel.Foo{ID string}.
It still saves me time, because even 50% accuracy is still half the code I don't have to write myself.
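For context, the kind of mapping code being described is tedious but trivial to review. A minimal sketch, assuming hypothetical `protoFoo`/`gomodelFoo` types standing in for the generated proto struct and the domain model (the names and fields here are illustrative, not from any real codebase):

```go
package main

import "fmt"

// protoFoo stands in for a generated protobuf struct (hypothetical).
type protoFoo struct {
	ID   string
	Name string
}

// gomodelFoo stands in for the domain model struct (hypothetical).
type gomodelFoo struct {
	ID   string
	Name string
}

// fooFromProto is the field-by-field mapping the comment describes:
// boring to write by hand, and easy to spot-check in an LLM draft.
func fooFromProto(p protoFoo) gomodelFoo {
	return gomodelFoo{
		ID:   p.ID,
		Name: p.Name,
	}
}

func main() {
	m := fooFromProto(protoFoo{ID: "42", Name: "bar"})
	fmt.Println(m.ID, m.Name)
}
```

Even when the LLM gets half the field mappings wrong, fixing a draft like this is faster than typing it all out.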
But it makes me feel like I'm taking crazy pills whenever I read about AI hype. I'm open to the idea that I'm prompting wrong, need a better workflow, etc. But I'm not a luddite, I've "reached up and put in the work" and am always trying to learn new tools.
It's been 20 years since that, so I think people have simply forgotten that a search engine can actually be useful as opposed to ad infested SEO sewage sludge.
The problem is that the conversational interface, for some reason, seems to turn off the natural skepticism that people have when they use a search engine.
Statistical text (token) generation made from an unknown (to the user) training data set is not the same as a keyword/faceted search of arbitrary content acquired from web crawlers.
> The problem is that the conversational interface, for some reason, seems to turn off the natural skepticism that people have when they use a search engine.
For me, my skepticism of using a statistical text generation algorithm as if it were a search engine is because a statistical text generation algorithm is not a search engine.
Search engines can suck when you don't know exactly what you're looking for and the phrases you're using have invited spammers to fill up the first 10 pages.
I will often ask the LLM to give me web pages to look at when I want to do further reading.
As LLMs get better, I can't see myself going back to Google as it is or even as it was.
Well, it's roughly the same under the hood, mathematically.
Recently I did some tests with coding agents: translating a full application from AT&T assembly into Intel assembly compatible with NASM took about half an hour of talking with the agent, and the end result actually worked with minor tweaks. That isn't something a "decent search engine a la Google circa 2005" would ever have been able to achieve.
In the past I would have given such a task to a junior dev or intern, to keep them busy somehow, with a bit more tool maturity I have no reason to do it in the future.
And this is the point many developers haven't yet grasped about their future in the job market.
No, you would have searched for "difference between at&t assembly and intel assembly", and failing that, read the manuals for both and compiled the differences. Then written an awk or perl script to get it done. If you happen to be good at both assembly dialects and awk, I believe that could have been done in less than an hour. Or you could use some vim macros.
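To make the script approach concrete, here is a toy sketch (in Go rather than awk/perl, and entirely hypothetical): it only handles the simplest two-operand register case, swapping AT&T operand order and stripping the `%` sigils and `l` size suffix, and passes everything else through for manual review. A real converter would need the full syntax manuals, as noted above.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches the simplest AT&T two-register form, e.g. "movl %eax, %ebx".
// Group 1: mnemonic (optional trailing 'l' size suffix dropped),
// group 2: source register, group 3: destination register.
var attTwoOp = regexp.MustCompile(`^(\w+?)l?\s+%(\w+),\s*%(\w+)$`)

// convertLine rewrites one AT&T line into Intel/NASM operand order.
// Anything it does not recognize is returned unchanged.
func convertLine(att string) string {
	m := attTwoOp.FindStringSubmatch(strings.TrimSpace(att))
	if m == nil {
		return att // leave for manual review
	}
	// AT&T is "op src, dst"; Intel is "op dst, src".
	return fmt.Sprintf("%s %s, %s", m[1], m[3], m[2])
}

func main() {
	fmt.Println(convertLine("movl %eax, %ebx")) // mov ebx, eax
	fmt.Println(convertLine("ret"))             // unchanged
}
```

This is the sort of 80% mechanical rewrite a script (or a macro) gets you, with the leftover unrecognized lines being exactly where the manual work lives.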
> In the past I would have given such a task to a junior dev or intern, to keep them busy somehow, with a bit more tool maturity I have no reason to do it in the future.
The reason to give tasks to a junior is to help them learn, or because the task needs to be done but isn't critical. Unless it takes less time to do it yourself than to delegate it, or you have no junior to guide, handing the task to a junior is worthwhile if it will help them grow.
> the conversational interface, for some reason, seems to turn off the natural skepticism that people have
n=1, but after having ChatGPT "lie" to me more than once, I am very skeptical of it and always double-check it, whereas with something like TV or YouTube videos I still find myself being click-baited or grifted (in other words, less skeptical) much more easily. Any large studies about this would be very interesting. This happens weekly for me.
God help us if companies start relying on LLMs for life-or-death stuff like insurance claim decisions.
"UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges" Also "The use of faulty AI is not new for the health care industry."
It would actually have been more pernicious that way, since it would lull people into a false sense of security.
I like maths, I hate graphing. Tedious work even with state of the art libraries and wrappers.
LLMs do it for me. Praise be.
They don't
> Garbage in = garbage out generally.
Generally, this statement is false
> When attention is managed and a problem is well defined and necessary materials are available to it, they can perform rather well.
Keyword: can.
They can also not perform really well despite all the management and materials.
They can also work really well with a loosey-goosey approach.
The reason is that they are non-deterministic systems whose performance is affected more by compute availability than by your unscientific random attempts at reverse engineering their behavior: https://dmitriid.com/prompting-llms-is-not-engineering
People are expecting perfection from a bad spec.
Isn’t that what engineers are (rightfully) always complaining about to BD?
I've definitely also found that the poor code can sometimes be a nice starting place. One thing I think it does for me is make me fix it up until it's actually good, instead of write the first thing that comes to mind and declare it good enough (after all my poorly written first draft is of course perfect). In contrast to the usual view of AI assisted coding, I think this style of programming for tedious tasks makes me "less productive" (I take longer) but produces better code.
Not really, not always. To anyone who’s used the latest LLMs extensively, it’s clear that this is not something you can reliably assume even with the constraints you mentioned.
No they don't, they generate a statistically plausible text response given a sequence of tokens.
I see these comments all the time, and they don't reflect my experience, so I'm curious what your experience has been.
I also think that language matters: an Emacs function is much more esoteric than, say, JavaScript, Python, or Java. If I ever find myself looking for help with something that's not in the standard library, I like to provide extra context, such as examples from the documentation.