Your position makes no sense, and your arguments all rest on premises that are basically false.
The problem with secrets in general (which is a separate issue from LLMs; I interpreted your question about the EU Chat Control debate as an attempted gotcha, not as a directly connected item) is that no matter what you do, the situation is unstable: having no secrets breaks all cryptography, which breaks all finance and approximately all of the internet, while having them creates a safe (cyber)space for conspiracies to develop undetected until it's too late. And no room for conspiracies also means no room to overthrow dictatorships, so if you get one you're stuck. But surveillance can always beat cryptography, so even enjoying the benefits of crypto is an unstable state.
See also: Gordian Knot.
Find someone called 𐀀𐀩𐀏𐀭𐀅𐀨 to solve the paradox; I hear they're great.
And you admit that they cannot be expected to keep secrets. So what is the point of trying to have a "security" team hammer secret keeping into them? It doesn't make sense.
I bring up chat control since I've noticed most "AI Safety" advocates are also vehemently opposed to government censorship of other communication technology. Which is fundamentally incoherent.
The first sentence is as reductive, and by extension the third is as false, as the claim that a computer can only do logical comparisons on 1s and 0s.
> So what is the point of trying to have a "security" team hammer secret keeping into them? It doesn't make sense.
Keep secret != Remove capability
If you take all the knowledge of chemistry out of the model, it can't help you design chemicals.
If you let it keep the knowledge of chemistry but train it not to reveal it, the information can still be extracted: analyse the weights, find the part that functions as a "keep secret" switch, and turn it off.
This is a thing I know about because… AI safety researchers told me about it.
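A minimal numerical sketch of that "switch" idea, under the assumption (from some interpretability work) that refusal behaviour is mediated by a single linear direction in activation space. Everything here is illustrative: the hidden states are random numpy vectors standing in for a real model's activations, and the hypothetical `ablate` helper shows the step that disables refusal without touching other knowledge.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden size

# Stand-in activations: "refused" prompts are shifted along one axis
# relative to "complied" prompts, mimicking a learned refusal feature.
refused = rng.normal(size=(100, d)) + 3.0 * np.eye(d)[0]
complied = rng.normal(size=(100, d))

# Estimate the refusal direction as the difference of the means.
direction = refused.mean(axis=0) - complied.mean(axis=0)
direction /= np.linalg.norm(direction)

def ablate(h: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden state in h along direction v."""
    return h - np.outer(h @ v, v)

clean = ablate(refused, direction)
# The ablated activations carry no component along the refusal direction.
print(np.abs(clean @ direction).max() < 1e-9)  # → True
```

In a real model the same projection would be applied to the residual stream (or baked into the weight matrices), which is why "trained not to reveal" is a much weaker property than "doesn't know".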
> Which is fundamentally incoherent.
𐀀𐀩𐀏𐀭𐀅𐀨