It's important to note that it is also possible for a technology to have more potential for abuse than for good.
Sure, you can come to a philosophical place where nothing is good or bad, or where the good is perfectly balanced by the bad, but if we're looking at increasing freedom, peace, and trust, it's hard to see how the upside of this tech is equivalent to its potential for abuse.
The best argument might be that eventually no one will trust video.
As such, tools like this just prove the conclusion: No one should trust video.
Which is why it is crucial that everyone realizes how easy it really is to create these fakes, before the masses are duped into supporting the next war or genocide by these techniques.
Obama talked about it this afternoon. He said "This is bad, blah blah oh no." Of course, you don't believe me, because I made this up. That doesn't preclude you from believing written quotes, given the right chain of trust. It's been great to have formats like video that didn't require a chain of trust for a while, but if that time has passed, there's nothing we can do. It is hard, but in the context of text, where quotes have been easy to fake for ages, we have dealt with it. It's good for everyone to be on the same page.
This is never the case independent of context.
You can imagine in a major famine that people may start killing each other over scraps of food. In that context a kitchen knife becomes more likely to be used as a murder weapon than to prepare the food that nobody actually has.
But nothing about the knife has changed; it's the context that has changed. And you don't solve the problem by banning cutlery and every other thing with a point or some heft, you solve it by resolving the famine.
You don't solve deepfakes by restricting information, you do it by adapting to their existence. Because they're not going away.
Github has an infamous history of imposing their feelings on projects they don't like.
Can you elaborate on this part? I don't remember seeing something like this before.
[0] - https://github.com/FeministSoftwareFoundation/C-plus-Equalit...
[1] - https://github.com/TheFeministSoftwareFoundation/C-plus-Equa...
https://www.techdirt.com/articles/20150802/20330431831/githu...
There are many more examples of their employees getting triggered and offended by various things and then arbitrarily banning or censoring projects.
There's probably a word for this sentiment that I'm not aware of.
What this might usher in is the era of cryptographically signed news articles. Not just credibility but verifiability.
Actually, how about cryptographically signing videos as they are written to storage on the recording device?
Maybe there are even ways to sign data so that integrity can be validated over shorter segments, so that clips can still be cut. Write a signature every 5 seconds covering the past 5 seconds?
Edit: This exists and the term for it is 'video authentication'.
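A minimal sketch of the per-segment idea described above, using HMAC from the Python standard library purely as a stand-in for a real signature scheme (a real recording device would use an asymmetric key pair, e.g. Ed25519, so that verifiers don't need the device's secret; the function names, frame layout, and 5-second window here are all illustrative assumptions):

```python
import hashlib
import hmac

SEGMENT_SECONDS = 5  # sign every 5-second chunk independently


def sign_segments(frames, key, fps=30):
    """Split a frame stream into fixed-length segments and sign each one.

    `frames` is a list of raw frame bytes; `key` is a device-held secret
    (HMAC is a stand-in here -- a camera would use a private key so anyone
    can verify with the public key). Returns a list of (segment, tag).
    """
    chunk = fps * SEGMENT_SECONDS
    signed = []
    for i in range(0, len(frames), chunk):
        segment = b"".join(frames[i:i + chunk])
        tag = hmac.new(key, segment, hashlib.sha256).hexdigest()
        signed.append((segment, tag))
    return signed


def verify_segment(segment, tag, key):
    """A clip cut on segment boundaries still verifies on its own."""
    expected = hmac.new(key, segment, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Because each tag covers only its own segment, an editor can cut a clip on 5-second boundaries and the remaining segments still verify independently, while any tampering inside a segment invalidates that segment's tag.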
Huh, I'd never even considered that you could do that.
https://www.reddit.com/r/github/comments/99aovq/unable_to_ac...
> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.
Excerpt from the recent OpenAI blogpost about the GPT-2 text models. The concern seems valid, since releasing the code (or a web app built on it) would let anyone easily create malicious content online.
I feel these tools are worth having on their own, and it seems widely accepted at this point that the tools themselves aren't at fault for their users' actions, even if those actions are the most popular use of the tools.
Personally I'm much more concerned about the ethics of internet advertisers and social media giants, who are making direct decisions that impact their users' privacy and access to information.
As far as I know, at least Hex-Rays screens customers very carefully before selling IDA Pro.
These technologies are to the detriment of enforcement. Because enforcement is far from a universal good, these technologies are far from a universal bad. Contrast this with faceswapping, where the upside is far less clear. Same goes for e.g. Stuxnet. It is a beautiful piece of technology, but not really a force for good given that it is widely available.
A bio-weapon delivery vessel might be a great essential oil diffuser, but I'd argue that the tool still has an essential immoral quality by virtue of its specialised design.
Does that apply here? I don't think so. I don't think this tech was created for the purpose of fomenting unrest and committing fraud, but maybe I'll be corrected on that.
It seems like an extremely bad idea to me.
Clearly it is a dangerous tool that must be restricted to select users.
The morality and behavior come from the humans that use it.
Censorship: bad.
That's life: The tech will get created by someone else, so censoring it does almost nothing. Better to put it out there so we can try to build defenses. Maybe make a bunch of fakes with famous people's permission to spread the word that you can't trust video anymore.
Dangerous: To take an extreme example, imagine you figured out how to make some kind of E=mc^2 bomb, such that anyone with the knowledge could build a device capable of blowing up a city for $100 and a few hours of time. Would it be OK to upload those instructions to the internet for any disgruntled teen to reproduce?
Deepfakes are certainly not at that extreme, but we can also clearly imagine the harm they could do as they progress.
There have been several examples recently of people seeming to react to arguably false perceptions. I'm actually thinking of ones in the last 2-3 days but I'm sure there are plenty of others.
- Community creates a project that makes it impossible to track faces in social media and anywhere online.
Hmm, no, not that kind of AI.
If they're not and they're hoarded by tech companies or intelligence agencies then we'll just have a lopsided system where people aren't aware of how capable such technologies are, what their limitations are, how to analyze them to spot issues, etc.
Imagine if only nation-states knew about these sorts of technologies and used them for war, or if only certain elites in tech had access to them and used them to implicate competitors in crimes? The technology is out there now, and at this point public knowledge is our best defense. People already question whether a contentious image is photoshopped; we want that same level of questioning to happen for videos.
In terms of this being used as an excuse to get someone out of a criminal charge, it might make us take a harder look at the chain of custody on video evidence, but I don't think it would invalidate it completely.
This might seem nice against ever-growing CCTV, but state security cameras will probably be deemed "trustworthy" while all media evidence gathered by private persons gets dismissed...
The potential for manipulation is huge given how many people trust pictures. I know that I don't distrust most pictures I see.
What GP is describing is the long term consequence of not being able to trust video evidence. Now even if you film someone red handed, they can deny it.
Another dire consequence is that the entire archive of all videos filmed since the beginning are now tainted by doubt. Any past politician speech, any past horror caught on film, etc. can now be said to have been crafted recently.
Possibly it was only enforced for very generic search queries returning thousands of results, but it has been around a long time; the GitHub acquisition closed only in October 2018.
!gh or !git anywhere in a ddg search will restrict it to github.
Censoring will just draw more attention and traffic. What’s really unsettling is that GitHub is playing politics with its users, without even informing them or communicating with them. You would think they would have the courtesy to tell the owner.
I can show verifiable, witnessed audio recordings of a guy saying he likes to grab women by the pussy, but that won't stop that guy from becoming President. Powerful tools don't run societies, people do.
P.S. and yes, before the obligatory "it's a private business" comments come in, I know I can build my own Internet and avoid all this. Thanks for reminding.
1) One day somebody posts a handful of really obviously faked janky looking porn videos. We all have a good laugh, briefly imagine the possibilities, and then move on
2) Like 3 weeks later, every social media platform explicitly bans this dumb toy that wasn't even any good
3) a year or so passes
4) Now governments are passing dramatic legal bans on these things, and there's all kinds of shady things happening. Like, this is the first instance of this kind of public restriction I have _ever_ seen on github.
So: which major news events were completely fabricated?
Notice how that says "Application", not website. It amazes me how people want to make their WordPress site into an SPA simply because someone told them to do so or it was the next "hip" thing to do.
SPAs have their place... migrating a desktop application to the web and making it an SPA makes perfect sense to me.
While I agree technology isn't inherently good or evil, this feels more harmful than helpful.
Why not change the license to enforce the use restrictions?
When I think about people I know who have been long-time users of GitHub and how this kind of censorship resonates with them... Oh my.
These early adopters could migrate away very quickly.
I have no opinion about whether or not that is a better title, but I thought it should be known that it was modified from its original.
While censorship may not be an appropriate word, this is weird. Why would Github do something like that, except to force people accessing the repo to leave a trail leading to their PII?
Anyone can fork and mirror it where they want, and make it accessible to anonymous users. Sure, that would "inconvenience" some users, but so what? Github doesn't exist to please every single person out there.
Create your own mirror, and let us know the URL. Don't just whine and try to manufacture outrage if you aren't willing to contribute the resources required to host the code yourself.
I fully support Github's right to use their property (github.com) as they please, because I want the same right for myself.
— definitely Voltaire, for sure. /s
Works with clone though. Wonder how many more such repos exist?
Do they have a transparency report which includes such action?
As a Microsoft employee, I'd find it even more enormously disappointing if this were a top-down decision rather than an internal org one.