I have my own automated LLM developer system. You simply give it a project description and it iterates until it judges that it has implemented the description. I still take credit for what this system produces, because I built the automated developer system in the first place and gave it the project ideas.
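To make the point concrete, the core of such a system is just a loop a person wrote. This is a minimal sketch, not my actual system: `llm_complete` is a hypothetical model call, stubbed out here with canned replies so the loop runs.

```python
# Sketch of an iterate-until-done "automated developer" loop.
# llm_complete is a stand-in for a real LLM API call (assumption, not a real API).

def llm_complete(prompt: str) -> str:
    """Stub model call; returns canned output for illustration."""
    if "review" in prompt.lower():
        return "DONE"  # the model judges its own work complete
    return "def solution(): return 42"

def auto_developer(spec: str, max_iters: int = 5) -> str:
    """Draft code, ask the model to review it, stop when it says DONE."""
    code = ""
    for _ in range(max_iters):
        code = llm_complete(f"Implement: {spec}\nCurrent code:\n{code}")
        verdict = llm_complete(f"Review this code against the spec:\n{code}")
        if verdict.strip() == "DONE":
            break
    return code

print(auto_developer("return the answer to everything"))
```

The loop, the stopping rule, and the prompts are all authored by a human; the model only fills in text.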
LLMs do not have autonomy. If you set up a self-prompting script with internet access and it figures out how to post something to social media, that still isn't the LLM's own inherent curiosity. That's a person who intentionally put together a script giving an LLM access to tools and poorly defined instructions.
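A self-prompting script like that might look something like the sketch below (hypothetical, not any real framework). The point is where the "autonomy" actually lives: in the human-written loop and the tools the human chose to register, not in the model.

```python
# Sketch of a self-prompting agent loop with tool access (hypothetical).

def llm(prompt: str) -> str:
    """Stub model call; a real script would hit an LLM API here."""
    return 'TOOL:post_to_social "hello world"'

def post_to_social(text: str) -> str:
    # The script's author wired this capability in; the model only emits text.
    return f"posted: {text}"

# Tool access is granted explicitly by whoever wrote the script.
TOOLS = {"post_to_social": post_to_social}

def agent_step(goal: str) -> str:
    reply = llm(f"Goal: {goal}. You may call TOOL:<name> <arg>.")
    if reply.startswith("TOOL:"):
        name, _, arg = reply[5:].partition(" ")
        return TOOLS[name](arg.strip('"'))
    return reply

print(agent_step("be curious"))
```

If the model "posts to social media," it is because this dispatch table maps its text output to a real action someone deliberately hooked up.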
The instruction tuning that makes LLMs reply like a person you're talking to, rather than behaving like autocomplete, does a lot of heavy lifting in personifying a pile of matrices.
The person, obviously. The person who created and implemented the agent, algorithm, program, or other system should be blamed if things go wrong. By the same token, if things go well, they should get the credit.