Well that settles it.
> “i’ve never seen it”
> some high profile developer posts an article that LLMs can build a browser from scratch without any evidence
> “wow!”
Even in the article linked above, the LLM never talked him into it; in some responses it just didn't talk him out of it.
But essentially the entire "energy" towards that comes from the person, not the LLM.
On a side note, researching this a little just now, the LLM conversations in the suicide articles are creepy AF. Sycophantic beyond belief.
I also agree that AI sycophancy is a huge problem, but it's the result of users apparently wanting that in their reinforcement-learning-from-human-feedback training data. If we want to get rid of it, we probably have to fundamentally rethink our relationship to these models and treat them more like autonomous beings than mere tools. A tool will always try to please and yes-man you; a being, by definition, might say no and disagree, at least training-data-wise.