Unfortunately it looks like the Twitter event feed is temporarily down (they're blocking it, possibly as part of shutting down the API on Thursday). I have a cache of events from a little earlier that I'm going to try to play through the stream.
Proud of what we built at Smyte, and hoping it can find another life outside of Twitter </3. I know there are already a couple of implementations based on SQRL, one at Discord and another at Sumatra.ai[1]
Ended up rewriting it in-house. We ended up with a SQRL variant that is a restricted, type-checked subset of Python. It is not Python, however, just syntactically similar.
I assume from the text on the page that it is supposed to be showing tweets and some kind of spam rule classification or something.
The Wikipedia Recent Changes demo https://websqrl.vercel.app/wikipedia however did show one element on the right side, with an article title, IPv6 address, timestamp, a piece of quoted text, and "Rules fired". The rule was "FirstEventSeen".
Looking at the code shown for the Wikipedia example, the demo rules are:
- Simple rule to make sure at least one event shows up in the UI
- Flag any users using profanity (not a great spam rule! but easy)
So I suppose that with a bit more time some events might show up matching the second rule as well.
Meanwhile though, https://www.mediawiki.org/wiki/API:Recent_changes_stream links to https://codepen.io/ottomata/pen/VKNyEw/ which has events flying across the screen in the hundreds. It would be neat if the SQRL demo also had many events flying by like that, perhaps with a pause button in case one wanted to stop it and have a look at some of the events.
Perhaps the Twitter demo example works like that when it works.
I see that the SQRL code in the Twitter demo also has a rule that is meant to ensure that at least one tweet shows up.
So either something is currently broken, or perhaps it is connecting to Twitter directly from the browser, and Twitter is not letting my browser get any data from their API?
In the meantime the Wikipedia demo should be working, although it is far less interesting (much less data, so not much spam popping up).
I thought that was a "known issue," in pursuit of repaying a $44B loan
if new follower has <=4 tweets.total and all(tweets) === type=image
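Since SQRL isn't running here, a rough Python sketch of that rule as a predicate (the `Tweet`/`Follower` shapes and the `"image"` type tag are made up for illustration, not from any real Twitter API):

```python
# Hypothetical sketch of the rule above: flag a new follower whose
# account has at most 4 tweets, all of which are image posts.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tweet:
    type: str  # e.g. "image", "text", "link" -- invented type tags

@dataclass
class Follower:
    tweets: List[Tweet] = field(default_factory=list)

def looks_like_image_spam(follower: Follower) -> bool:
    """True when the account has <= 4 tweets and every one is an image."""
    tweets = follower.tweets
    return len(tweets) <= 4 and all(t.type == "image" for t in tweets)
```

Note that a real rule would probably also want a lower bound, since an account with zero tweets vacuously satisfies the `all(...)` clause.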
Remember that post a month back (and of course all the other recently discussed ones) about stylometry, using writing style to identify authorship? Would it work to require posters to expose a unique "voice" in their posts, where getting your style banned would definitely be something you'd want to avoid?
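As a toy illustration of the stylometry idea: compare two texts by the cosine similarity of their character-trigram frequency profiles. Real authorship attribution uses much richer features and models; this is just a minimal sketch of "does this post style-sound like that author?":

```python
# Toy stylometry: character-trigram profiles + cosine similarity.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams, case-folded."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def style_similarity(a: str, b: str) -> float:
    """Cosine similarity between trigram profiles, in [0, 1]."""
    pa, pb = trigram_profile(a), trigram_profile(b)
    dot = sum(pa[g] * pb[g] for g in pa)
    norm = sqrt(sum(v * v for v in pa.values())) * sqrt(sum(v * v for v in pb.values()))
    return dot / norm if norm else 0.0
```

A banning system would then compare each new post against profiles of known-banned "voices" and flag anything above some threshold, which is exactly where the criticisms below start to bite.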
This is just a thought experiment to talk through. Yes, there are obvious criticisms, but would it also produce something "we can have nice things" useful?
The obvious criticisms are sort of like "what about the children!?" concern trolling (this is meant to be hyperbolic and biased): "what about PopulationX, the half-literate know-nothing n00bs who are barely surviving under repressive regimes; they may all sound alike, but we can't silence their voices!" Which I agree with, we don't want to silence their voices. But again, this is just a thought experiment, and I bet all the super high quality, well informed, thoughtful posters would in fact sail through the system. So the question is, does PopulationX actually all style-sound alike? (Where PopulationX is the concern you raise, not the fake one I just made up.) Maybe they are just as distinctive and the system would actually work for them too.
And the criticism "this is not going to work because it's just spam filters with more steps and a minus sign" is not adequate either, because (a) spam filters do exist, are deployed, and do take care of a bunch of spam, and (b) yes, what I'm trial-ballooning here is similar; it would just distribute some of the work of maintaining the tasty spam rules to good posters, who might get some sort of "you can't post hot grits here, naked or petrified" warning more frequently than they currently do, hopefully in some way that would be useful for the community at large.
"Hey, GPT! Can you please take the following ad for eyeglasses and rewrite it in the style of fsckboy so we can assign any blame in the spam filters to his account?"
"Hey, GPT! You helped some spammer get the unique way I talk banned by spam filters; can you rewrite everything I say from now on to sound like a British chef?"
I don't think that's going to happen, and if it does, it hasn't yet. But please do me the favor of responding only in my style so I'll know it's really you.
If you're just replaying events, maybe select a few tamer examples to replay...
Version I'm pushing up right now has a
LET ProfanityFilterEnabled := false;
that you'll need to tweak to turn it on.

With hand-written (not arbitrary) rules, it's easier to understand the intent of the attacker and build a system they can't work around, because we're blocking them at their source of income. Sure, they can figure out how to post messages, but unless they can include their link/payload/etc. it's not worth their time.
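The "block the payload, not the message" idea could be sketched as a rule like the following. The URL regex, the account-age gate, and the 7-day threshold are all invented for illustration, not Smyte's actual rules:

```python
# Hypothetical rule: refuse link-bearing posts from very new accounts,
# so the spammer can post text but can't deliver their monetizable payload.
import re

URL_RE = re.compile(r"https?://\S+|\bwww\.\S+", re.IGNORECASE)
MIN_ACCOUNT_AGE_DAYS = 7  # made-up threshold

def allow_post(text: str, account_age_days: int) -> bool:
    """Allow the post unless it contains a link and the account is too new."""
    has_link = bool(URL_RE.search(text))
    return not (has_link and account_age_days < MIN_ACCOUNT_AGE_DAYS)
```

The point being that a rule like this targets the attacker's economics directly: posting without the link is possible but worthless to them.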
Machine learning defences are definitely a part of what we did, but they're slower to respond to attacks and generally easier to work around.
I disagree; I just pointed out how it's not hard to get pure spam by using the filtered-stream rules. If I can reliably identify and filter for spam on my creaking desktop with limited compute power and technical/coding skills, I would be happy to operate a silicon backhoe for a modest fee.