We know a few things: LLMs are not efficient; LLMs consume more water than traditional compute; the providers know this but haven't shared any tangible metrics; and the build process also involves an exceptional amount of time, wattage, and water [0].
For me it comes down to this: if you have access to a supercomputer, do you use it to tell you a joke or to work on a life-saving medicine?
We didn't have these tools 5 years ago. Five years ago you dealt with said "drudgery". On the other hand, you then say it can't do "most things I do". Fatalism and paradox seem to be in full force in a lot of the arguments around AI.
I think the real kicker for me this week (and it changes week over week, which is at least entertaining) is Paul Graham telling his Twitter feed [1] that a "hotshot" programmer is writing 10k LOC that are not "bug-filled crap" in 12 hours. That's roughly 14 LOC per minute, compared to industry norms of 50-150 LOC per 8-hour day. Apparently this "hotshot" is not "naive", though, implying that it's most definitely legit.
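The back-of-envelope arithmetic can be checked directly (the 10k LOC / 12 hour figures are from the tweet; the 50-150 LOC/day norm is the rough industry estimate cited above, not a precise benchmark):

```python
# Claimed figures from the tweet
claimed_loc = 10_000
claimed_hours = 12

rate_per_min = claimed_loc / (claimed_hours * 60)   # ~13.9 LOC per minute
claimed_per_day = rate_per_min * 60 * 8             # ~6,667 LOC per 8-hour day

# Rough industry norm, LOC per 8-hour day
norm_low, norm_high = 50, 150

print(f"{rate_per_min:.1f} LOC/min, {claimed_per_day:.0f} LOC per 8h day")
print(f"{claimed_per_day / norm_high:.0f}x to {claimed_per_day / norm_low:.0f}x the norm")
```

So the claim is somewhere between ~44x and ~133x typical sustained output, which is the scale of the discrepancy being asked about.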
[0] https://www.sciencenews.org/article/ai-energy-carbon-emissio...

[1] https://x.com/paulg/status/1953289830982664236