In day-to-day work I could only trust it to help with the most conventional problems, the kind the average developer hits in the "top N" most popular programming languages and frameworks. But I don't need help with those, because search engines are faster and lead to more trustworthy results.
I turn to LLMs when I have a problem that I can't solve after at least 10 minutes of my own research, which probably means I've strayed off the beaten path a bit. This is exactly where response quality goes down the drain. The LLM succumbs to hallucinations and bad pattern-matching: disregarding important details, suggesting solutions to superficially similar problems, parroting inapplicable conventional wisdom, and summarizing the top 5 Google search results and calling it "deep research".