Imagine if the members of Fight Club _actually_ never started conversations about Fight Club. But for some reason, everyone else brings it up all the time.
Then you could maybe see how talk about Fight Club might not be Fight Club's fault, and might in fact be highly annoying to Fight Clubbers.
I mean, if you can explain "don't talk about X" as "marketing for X", it seems like you could explain _any_ behavior that way.
And before you say "why not just ignore all public talk of X", imagine if this proposed Anti-Fight Club group tried to paint Fight Club as a child porn ring.
I'm not an LW hater. For a long time, I didn't really have an opinion on LW either as a community or as a philosophical framework. I do consider myself a transhumanist, though. There are three concepts I do know from and about LW: their take on rationality, the top-secret AI unboxing strategy, and the Basilisk.
I have a very poor opinion of the concept of the Basilisk (and yes, as someone pointed out, that opinion is basically the same as the one I have about Pascal's wager) - a concept that has been given additional, undeserved credibility by the reactions of Yudkowsky and LW.
As for the AI escape chat, it's a social experiment. People can be talked into making mistakes, or at least into making risky judgement calls, whether they operate on a rational framework or not. I have no problem with that thesis. What I object to is the "magic trick" aura surrounding this experiment, including the insinuation that at the core there is an argument so profound and unique and potent that it cannot be allowed to escape Yudkowsky's head. Oh, and by the way, the trick can never be repeated, but all you laymen out there are welcome to devise your own version at home. This whole thing comes across as humongously self-important: there is a secret truth that has been privately revealed to our leader.
To me - and I recognize I may well be alone in this opinion - the more rational assumption is that there is no such magical argument at all, and that the prime reason for not publicizing it is to prevent it from being deflated by public critique, in the same way the inventor of a perpetuum mobile device will keep the inner workings of his contraption a closely held secret, because ultimately the device doesn't exist as stated. The amazing part of this very old trick is that, even in 2015, it still works on otherwise smart people.
I get that my opinions on both the Basilisk and the AI Chat are extreme outliers, and to my knowledge I have never met anyone who shares them. It would probably have been advisable to keep them to myself, but honestly I wanted to see if like-minded people exist.
For the record, EY agrees with you and says he mishandled the original comment. Also for the record, the reasons why the Basilisk does not work are _not trivial_ - it's not a simple Pascal's Wager, because with Pascal's Wager, we don't have the ability to actually create God.
> I have no problem with that thesis. What I object to is the "magic trick" aura surrounding this experiment, including the insinuation that at the core there is an argument so profound and unique and potent that it cannot be allowed to escape Yudkowsky's head.
Personally, I never got that impression. My sense, from looking at the psychological state of Gatekeepers and AIs after games, was always that playing as the AI involved some profoundly unpleasant states of mind, and that not publicizing the logs probably comes down largely to embarrassment.
For the record, Eliezer never claimed to have "one true argument", and in fact publicly stated that he won "the hard way", without a one-size-fits-all approach. A lot of the mythology you describe is entirely independent of LessWrong.
> Oh, and by the way, the trick can never be repeated, but all you laymen out there are welcome to devise your own version at home.
It probably helps that I've met other AI players, and their post-game state matched EY's.
I think in summary you're mixing up stuff you've read on LessWrong and stuff you've read about LessWrong. The latter is often inaccurate.
For the AI to be infallible, it needs a network of tricks and arguments, each of them crafted for a very particular person, even if it can later be reused on others.
It is like Christianity. There isn't one simple belief that everyone accepts and that's it. There's only a façade of simplicity; in reality it is anything but simple.
There's a network of related explanations and rationalizations that has been expanding for centuries, and every time someone appears who will not accept the network of arguments, a new argument or explanation has to be added to account for that person.
For example, if Christianity had not needed to deal with Gnosticism or Arianism, the network of beliefs and explanations would have been a noticeably different one.