For the record, EY agrees with you and says he mishandled the original comment. Also for the record, the reasons the Basilisk does not work are _not trivial_. It is not a simple Pascal's Wager: with Pascal's Wager, we have no ability to actually create God, whereas here the relevant agent is one we could, in principle, build.
> I have no problem with that thesis. What I object to is the "magic trick" aura surrounding this experiment, including the insinuation that at the core there is an argument so profound and unique and potent, it cannot be allowed to escape Yudkowsky's head.
Personally, I never got that impression. My sense, from looking at the psychological state of Gatekeepers and AIs after games, was always that playing as the AI involved some profoundly unpleasant states of mind, and that the decision not to publicize the logs probably comes down largely to embarrassment.
For the record, Eliezer never claimed to have "one true argument"; in fact, he publicly stated that he won "the hard way", without a one-size-fits-all approach. Much of the mythology you describe arose entirely independently of LessWrong.
> Oh, and by the way, the trick can never be repeated, but all you laymen out there are welcome to devise your own version at home.
It probably helps that I've met other AI players, and their post-game state matched EY's description.
In summary, I think you're mixing up things you've read _on_ LessWrong with things you've read _about_ LessWrong. The latter is often inaccurate.