I guess LLMs can learn directly from documentation websites. But documentation lacks the human feedback on correct solutions and better ways of doing things that SO provides.
It seems unlikely that any significant number of developers quit using SO in favor of blindly trusting ChatGPT.
More likely - advertising revenue declined in line with the reduced ad spending across the industry, largely due to expectations of economic recession by ad buyers.
Another thing is that SO is, ironically, often relatively hostile to asking questions. On seemingly every post there's some group of people racing, often recklessly, to mark things as duplicates when they're not, demanding code samples even when the question clearly doesn't require them, etc. And then there's the passive-aggressive stuff. It seems like 95% of the SO community is awesome, but that 5% sure is a turn-off from the site. GPT cuts through that layer of nonsense.
While some users use ChatGPT to solve technical problems, it doesn’t seem likely that the search traffic they depend upon is an extreme outlier from search in general.
We do know that SO’s primary ad vertical is tech hiring, a market that has cooled significantly in the last year.
For me, ChatGPT has unlocked an entire class of questions and investigations that would be almost impossible to handle with just search.
Most people I know have done exactly that.
I did to a large extent. If verification is easy (which in most cases it is), I first ask GPT-4 the question before searching SO. And honestly, I've found SO's top answer to have a higher probability of being wrong than GPT-4's. In most cases, if the code doesn't look wrong and passes the test cases, which I have GPT-4 generate and then verify manually, it's likely fine.
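That workflow can be sketched as: take a candidate implementation from the model, then run it against test cases that were model-generated but verified by hand before being trusted. The `slugify` function and the specific cases below are hypothetical stand-ins, not from the thread.

```python
import re

# Hypothetical candidate implementation returned by the model.
def slugify(text: str) -> str:
    """Lowercase the text, collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Model-generated test cases -- each one checked by hand before relying on it.
cases = [
    ("Hello, World!", "hello-world"),
    ("  spaces  ", "spaces"),
    ("already-slugged", "already-slugged"),
]

# If the candidate passes every verified case, it's probably fine for this use.
for given, expected in cases:
    assert slugify(given) == expected, (given, slugify(given))
print("all cases pass")
```

The hand-verification step is the load-bearing part: a model can generate wrong expected outputs just as easily as wrong code, so the cases are only useful once a human has confirmed them.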
Also, there is a huge difference between GPT-3.5 and GPT-4, so if you have only tried GPT-3.5, your experience will differ wildly.
Man, maybe it's just me, but I have switched almost exclusively from Google searches and Stack Overflow to just asking ChatGPT. I've been doing dev for 20+ years and GPT4 is like switching from regular tools to power tools. Granted, it's only useful for things that have been done before and are well documented, but most professional software development is exactly that.
For the simpler questions it is quite OK, as in "write me x in language y", where x and y are the inputs.
SO is left only for the really hard or niche ones.
Stack Overflow announces 28% headcount reduction (https://news.ycombinator.com/item?id=37898199) (118 points | 2 days ago | 171 comments)
Oh, this is so wrong. Code is not instantly verified by any IDE except for things like syntax errors and compiler warnings. SO's critique comments under each answer are so much more valuable in that sense, it's not even funny. Every time I give feedback on ChatGPT's proposed answers, it tends to go down incredible rabbit holes and into endless recursion, reminiscent of WOPR playing tic-tac-toe.
I used to rely on GitHub issues, but github.com is way too slow and the quality of its search is abysmal.
So more and more, when in doubt, I just read the source code. Seems fastest.