sounds very cool
now that I think about it, LLMs are so useless for security code. You can't even show an LLM code that you wrote and ask it to break it; it will reply with something like "hacking is a big no-no around here"
I asked ChatGPT for an Ansible playbook to completely wipe hard drives with zeros (I know the dd command to achieve this, I was just curious what approach it would advise). ChatGPT replied with a firm "no" to that request, and I canceled my subscription after that
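for reference, a minimal sketch of what such a playbook could look like, just wrapping dd in the command module (the host group and device path here are made up, and it is obviously destructive, so adjust and test carefully):

```yaml
# Sketch of an Ansible playbook that zeroes a disk with dd.
# "scrub_targets" and /dev/sdX are hypothetical placeholders.
- name: Zero out a disk with dd
  hosts: scrub_targets
  become: true
  tasks:
    - name: Overwrite the device with zeros
      ansible.builtin.command:
        cmd: dd if=/dev/zero of=/dev/sdX bs=1M status=progress
      register: dd_result
      # dd exits non-zero when it reaches the end of the device
      # ("No space left on device"), so treat that as success too.
      failed_when: dd_result.rc not in [0, 1]
```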