It may be the converse of a Scissor Statement, which has a dual meaning that is irreconcilable between its separate interpreters. (https://news.ycombinator.com/item?id=21190508)
"You can't change the people around you -
But you can change the people around you."
Whereas in the example here, acting on that advice is costly (it means losing friends) but believing it is free. And there aren't different layers of meaning accessible to different parties. It's straightforwardly a play on words.
Who are these dudes?
Top right in this picture: https://pbs.twimg.com/media/GgTm194WIAEqak3?format=jpg&name=...
There are in-person meetups (primarily as a social group) in most large cities. At the meetups there is no expectation that people have read the website, and these days you're more likely to encounter discussion of the Astral Codex Ten blog than of LessWrong itself. The website is run by a non-profit called Lightcone Infrastructure that also operates a campus in Berkeley [2], which is the closest thing to a physical hub of the community.
The community is called "rationalists", and they all hate that name but it's too late to change it. The joke definition of a rationalist is by induction: Eliezer Yudkowsky is the base case for a rationalist, and then anyone who disagrees online with a rationalist is a rationalist.
There are two parallel communities. The first is called "sneer club", and they've bonded into a community over hating and mocking rationalists online. It's not a use of time or emotional energy that makes sense to me, but I guess it's harmless. The second is called "post-rationalism", and they've bonded over being interested in the same topics rationalists are interested in, but without a desire to be rational about those topics. They're the most normie of the bunch, but at the same time they've also been a fertile source of weird small cults.
[1] https://en.wikipedia.org/wiki/LessWrong [2] https://www.lighthaven.space/
Edit: Not sure why I was being coy. I'm talking about the Claremont Institute.
In the Dawkins sense, if the dad's use of the Santa myth makes the child happy and preserves in some sense their innocence (ignorance of the world as it really is), then the mother can recreate the same myth pattern elsewhere, most likely through family traditions.
Or in the semiotics of Eco, the parents are overcoding Santa and the child is undercoding Santa—the same expressions, but different interpretations between the two groups. Maybe childhood lives in that gap.
In my head I think of it as just really high linguistic compression. Minus intent, it is just superimposing multiple true statements onto a small set of glyphs/phonemes.
It's always really context sensitive. Context is the shared dictionary of linguistic compression, and you need to hijack it to get more meanings out of words.
Places to get more compression in:
- Ambiguity of subject/object with vague pronouns (and membership in plural pronouns)
- Ambiguity of English word-meaning collisions
- Lack of specificity in word choice.
- Ambiguity of emphasis in written language or delivery. These statements can come out a bit flat when spoken aloud.
A group of people in a situation:
- A is ill
- B poisoned A
- C is horrified about the situation but too afraid to say anything
- D thinks A is faking it.
- E is just really cool
"They really are sick" is uttered by an observer, and we don't know how much of the above that observer has insight into.
I just get a kick out of finding statements like this for fun in my life. Doing it with intent is more complicated.
What the author describes seems like strategic ambiguity, only slightly more specific. I don't think the label they try to coin here is a useful one.
But I like the idea that there's a term for this, be it Straussian Memes or something else. What I didn't quite get is how the "self-stabilizing" part works.
What I'd like is for TV anchors to get wise and start asking their interviewees "What EXACTLY do you mean when you use this term ...". But I guess they won't, because they too are happy to spread a meme that multiple different communities can like, each understanding it in the way they prefer.
This is the core rhetorical tactic of the progressive left in a nutshell. Linguistic superposition, equivocation, Schrodinger's definition - whatever you want to call it, it's the ability to have your cake and eat it too by simply changing your definitions, or even someone else's, post hoc.
Let us take a moment to be reminded of the English Socialism of Orwell and doublespeak.
I live in Wyoming and have MAGA and ultra-progressive friends.
Multiple messaging is a hallmark of all elites. Sometimes it's functional: being able to say something sharp that becomes ambiguous when repeated is a skill. Anyone with power or authority wields it; it is so common that its ubiquity suggests it's practically a requirement. (Other times, multiple messaging lets one apologise in a public setting without making things awkward.)
In many respects, it’s an essential feature of commanding language. Compressing multiple meanings into fewer words is the essence of poetry and literature.
I suspect that the use of incredibly bad examples is some sort of intentional Straussian joke, and that the entire article itself, and not the examples in it, is supposed to be the real example of a Straussian meme.
And maybe that's the higher reading.
The article itself is an example of something that overlaps to some extent with its subject without being an example of the subject, like all the examples in it. It's an intriguing idea, like "things you can't say", but without examples it falls flat. That won't bother the rationalists any more than they are bothered by Aella's "experiments", allegedly profound fanfics, adding different people's utility functions, or reasoning about the future without discounting. It's a hugbox.
Or maybe it is something they can't find any examples of because humans can't make them—only a hypothetical superhuman AI could.
That said, I'm not impressed with the notion of Straussian memes and agree that way better examples are needed to give the idea some validity.