You're absolutely right. You need to verify the script works, and you need to be able to read the code to see what it's actually doing and whether it passes the smell test (as a sibling commenter said, the same way you would for a code snippet off StackOverflow). But ultimately, for these bits that are largely rote "take data from API, transform into data format X" tasks, LLMs do a great job getting at least 95% of the way there, in my experience. In a lot of ways these are the perfect jobs for LLMs: most of the work is just typing (as in, pressing buttons on a keyboard) and passing the right arguments to an API, so why not outsource that to an LLM and verify the output?
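For concreteness, here's the kind of rote transform I mean, sketched with a hypothetical JSON payload standing in for an API response (stdlib only, no real endpoint):

```python
import csv
import io
import json

# Hypothetical payload, as if returned by some JSON API endpoint
payload = json.loads("""
[
  {"id": 1, "name": "alpha", "created": "2024-01-01"},
  {"id": 2, "name": "beta",  "created": "2024-02-15"}
]
""")

def to_csv(records):
    """Flatten a list of flat dicts into a CSV string, one row per record."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(to_csv(payload))
```

Trivial to verify by eyeballing the output, which is exactly why it's a good fit for an LLM.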
The challenge comes when dealing with larger systems. An LLM might suggest Library A for accomplishing a task, but if your codebase already uses Library B for that, or has Library A but pinned to a 2020 version with a different API, you need to make judgment calls about the right approach, and the LLM can't help you there. Same with code style, architecture, how future-proof-but-possibly-YAGNI you want your design to be, etc.
I don't think "vibe coding" or making large changes across big codebases really works (or ever will), but I do think LLMs are useful for isolated tasks, and it's a mistake to totally dismiss them.