Honestly, I think that's an excellent idea - a reputation "passport" of sorts that earns you a certain level of trust within particular communities.
> Assuming they're making this move to protect against AI / LLMs, I think SO is in an impossible situation here. When all the ChatGPT hype started, one of my first questions was "what happens to the incentive for contributors and creators?" Why would I want to contribute on a platform if I know an AI model is going to come in, take my contribution, and regurgitate it back to the masses in a way that I can't control?
Sadly, I think this is an unpreventable outcome of what is happening right now. I don't think anyone will have any control over this, at all. We can only hope that active contribution by actual humans never becomes a worthless pursuit.
> Even if I get some attribution from the AI/LLM, do I even want it? If the LLM is blending content from multiple sources, which changes the context and presentation I put effort into, is the quality going to be high enough to match what I strive to achieve for myself when I'm trying to build a reputation as a high quality contributor? What if the AI is hallucinating objectively poor quality content and giving me partial attribution?
Another excellent point. The prospect that this is possible today - being attributed for a hallucinated version of your own genuine contribution - sounds freaking terrifying to me. Not a world I want to live in, to be honest.
> I think AI is going to be disruptive and the whole idea, for me anyway, behind disruption is that you break an existing system and then everyone is free to take a shot at claiming part of the new gold rush that occurs while trying to build the replacement. The problem with AI is that it's going to break a lot of services that do a good job of serving the community and shouldn't be broken. SO is a great example of a healthy community that doesn't need disruption, but the massive amount of high quality, curated content is going to make them a prime target for LLM training.
As will every single human-created or human-curated content source, IMHO. I think "quality" will be really, really hard to objectively measure in the near future, as the whole world of digital information becomes tainted with applied statistical models that do a reasonably good job of predicting what people perceive to be high-quality reasoning, answers, and content. I like the idea of underground speakeasies where there's no wifi, just humans.
> Personally I think the only solution is for "noai" variants of popular open source licenses so contributors have the ability to make it clear they don't want to contribute to AI/LLM companies. If SO had an option to flag contributions as CC-BY-SA-NOAI, I'd enable it on my stuff going forward.
That would be great, but I'm pretty sure no LLM corporation would honor those flags, even with strict government regulations in place.