Modern LLMs can perform web searches to make decisions on contemporary data. Once they have proper API support, that concern should be resolved, hopefully within a few weeks.
> reliability and safety issues.
The solution to this is fine-tuning / RLHF. OpenAI has done a fairly extensive job of making ChatGPT politically safe via RLHF. It seems reasonable that RLHF could achieve a similar result in the hardware domain.
> you can't ask questions you didn't know you needed ...
Solvable by prompt engineering. You can wrap the user's input in a larger prompt. As a toy example: "Here is user input: $userInput. If you have safety concerns about their project, please respond with questions you think the user forgot to ask." It might also be possible to reinforce this behavior with fine-tuning/RLHF.
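To make the toy example concrete, here's a minimal sketch of that wrapping step in Python. The wrapper text and function names are made up for illustration, and the actual model call is left out; in practice you'd send `prompt` to whatever chat-completion API you're using.

```python
# Hypothetical safety-review wrapper: the template text here is just the
# toy example from above, not a tested or recommended prompt.
SAFETY_WRAPPER = (
    "Here is user input: {user_input}\n"
    "If you have safety concerns about their project, "
    "please respond with questions you think the user forgot to ask."
)

def wrap_user_input(user_input: str) -> str:
    """Embed raw user input inside the safety-review prompt."""
    return SAFETY_WRAPPER.format(user_input=user_input)

# The wrapped prompt is what gets sent to the model, so the safety
# instruction rides along with every request regardless of what the
# user actually typed.
prompt = wrap_user_input("I'm wiring a 240V outlet to my workbench myself.")
print(prompt)
```

The point is only that the instruction travels with the input automatically; real systems would also need to worry about the user input containing instructions of its own (prompt injection).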