What they seem to be talking around is implementing an app-level CALEA-like capability.
Here's how I think they envision it working: companies would be required to build lawful targeted-intercept capability into their apps, in the same way telephony and other equipment is today. The app developer receives a warrant for an identifier and is required to split off that traffic and change the keys, or encrypt it twice: once with the sender/recipient key and once with a per-warrant intercept key (this already happens with some network and telephony warrants).
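A minimal sketch of that dual-encryption idea, as I understand it. Everything here is a toy: the names are invented, and XOR stands in for a real cipher and key wrap (an actual system would use a proper KEM and AEAD). The point is only the structure: one session key, wrapped separately for the recipient and, while a warrant is active, for the intercept key.

```python
# Toy sketch (not real crypto): the message is encrypted once with a random
# session key, and that session key is then "wrapped" separately for the
# recipient and for a per-warrant intercept key.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_with_intercept(plaintext, recipient_key, intercept_key):
    session_key = os.urandom(32)
    ciphertext = xor_bytes(plaintext, session_key)  # stand-in cipher
    wrapped = {"recipient": xor_bytes(session_key, recipient_key)}
    if intercept_key is not None:  # only present while a warrant is active
        wrapped["intercept"] = xor_bytes(session_key, intercept_key)
    return ciphertext, wrapped

def decrypt(ciphertext, wrapped_key, key):
    session_key = xor_bytes(wrapped_key, key)
    return xor_bytes(ciphertext, session_key)

recipient_key = os.urandom(32)
intercept_key = os.urandom(32)
ct, wrapped = encrypt_with_intercept(b"meet at noon", recipient_key, intercept_key)
# Both the recipient and the warrant holder recover the same plaintext:
assert decrypt(ct, wrapped["recipient"], recipient_key) == b"meet at noon"
assert decrypt(ct, wrapped["intercept"], intercept_key) == b"meet at noon"
```

Note that when no warrant is active (`intercept_key=None`), no intercept wrap exists at all, which is what distinguishes this from a standing backdoor.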
We all know the downsides of this approach, but it isn't technically impossible. What would be near-impossible is enforcing it, since the hurdle is regulatory rather than technical. It is more feasible today because vertically integrated walled gardens handle most app distribution - backed by two of the largest companies in the world, who may be susceptible to a compromise (especially with the large tax issues hanging over both their heads).
On a scale of how bad things can get - I think warranted targeted surveillance is better than device backdoors which is better than metadata retention which is better than the mass surveillance we have today (leading to cable splitting and DPI, or situations like Lavabit)
I don't see, even if you're OK with warranted targeted surveillance, how a compromise is reached here that doesn't lead to a whack-a-mole game where legitimate users are inconvenienced while the 'bad guys' are pushed onto alternate Android distributions and unofficial apps.
I also don't see how a CALEA-like capability is kept secure and safe - especially with apps (we saw the NSA use CALEA intercept to surveil political targets). Clapper et al always vaguely answer "key escrow" to this question without spelling out how that would work.
With subsequent backdowns in the scope of what these governments want to do (and this latest proposal is again a minor backdown), we might be reaching the point where comms really do go dark, and the new reality is that, despite all the tech we have, law enforcement mostly relies on human intelligence and will have to scale back up for that. 3,500 terror suspects in the UK, 4,000 employees at MI5 - and notably, in the recent attacks there were HUMINT warnings.
Stored-program general-purpose computers are fundamentally a threat to any entrenched power that relies on being able to control any potential risk with physical, legal, economic, or social force. The only real way to control software that is no longer scarce is to find a way to hobble the "universal computing machine" so it is no longer universal.
Cory Doctorow's warning[1] about the War On General Purpose Computing received a lot of attention, but I suspect his far more important followup[2] about the looming Civil War over General Purpose Computing had a much smaller audience. Dan Geer suggested[3] that this "Cold Civil War" has been ongoing for a long time already. With this new push by FVEY nations against crypto, it looks like the war is starting to heat up.
> where comms do go dark
That's just not true. Metadata is everywhere and will likely only get more informative in the future. As Susan Landau explained[4] in her testimony to Congress, the only people "going dark" are the people trying to "preserve 20th century investigative techniques [while] our enemies are using 21st century technologies against us." Complaining about "going dark" is just misdirection away from a total failure to update investigative techniques to not just keep up with changing technology, but to take advantage of the new opportunities created by our growing sea of {,meta}data.
[1] http://boingboing.net/2012/01/10/lockdown.html
[2] http://boingboing.net/2012/08/23/civilwar.html
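To make Landau's point concrete, here's a toy illustration (entirely fabricated records, invented names) of how bare call-record metadata - no content at all - already exposes relationships and routines:

```python
# Illustrative only: even without message content, who-called-whom-when
# reveals a person's closest contact and their late-night pattern.
from collections import Counter

call_records = [  # (caller, callee, hour-of-day) - fabricated data
    ("alice", "clinic", 9), ("alice", "clinic", 9), ("alice", "bob", 22),
    ("alice", "bob", 23), ("alice", "bob", 22), ("alice", "pizza", 19),
]

contacts = Counter(callee for _, callee, _ in call_records)
late_night = {callee for _, callee, hour in call_records if hour >= 22}

assert contacts.most_common(1)[0] == ("bob", 3)  # most-contacted party
assert late_night == {"bob"}                     # only late-night contact
```

Scale that trivial analysis up to a population's worth of records and the "dark" comms problem looks rather different.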
You don't really need much for targeted surveillance, right? One only needs to tap into the distributor and push a specific trojan update.
Even without that, Telegram and Signal already have vulnerabilities from tying key pairs to phone numbers via OTPs. GSM is broken; ergo, so are these. If these agencies wanted to do targeted surveillance, there is very little in their way, IMO.
The argument presented in the article is specious in that it uses the premise of targeted surveillance to justify instituting the structures for mass on-demand targeting.
This is a very slippery slope, and as usual the morons that form our 'fourth pillar' have let us down badly.
That doesn't rule out such silliness from happening, but it will be a tough fight, so for now they'll probably stick to going after the silos (e.g., WhatsApp) and simply get them to replace true end-to-end encryption with some backdoored solution - it's easier and more effective for now.
Many of the major telecom-ish apps that were not subject to interception added the capability later via regulation or circumstance. Nextel Direct Connect, Skype and the mysterious purchase by eBay, and FaceTime after the patent suit come to mind.
I wonder if the threat of the net going dark is really anywhere as bad as the intelligence agencies pretend, considering that there is more information available than ever before outside of those encrypted messaging apps.
Yes well, I don't. But hey – why not facilitate foreign actors spying on our companies so that we may or may not catch any terrorists?
This is a meme that is coming from the top. Expect to see this phrase a lot more in articles and from talking heads on the topic. They aren't even very subtle about it.
https://www.google.com/search?q=Encryption+%22rule+of+law%22...
1. The rule of law has not been compromised (Snowden has shown it already has)
2. Warrants are issued by a proper judiciary (not the likes of FISA)
3. Oversight that protects citizens' privacy rights (let's all laugh at this one, since it took Snowden to show us that oversight just doesn't exist)
So, agree with you :)
- warrants should have explicit expiration dates... no more indefinite intercepts.
- cooperating companies should be able to publicly say anything as soon as the warrant expires. The gag orders are what have allowed the scope of monitoring to be hidden from public scrutiny, and that scrutiny is what's needed to keep surveillance to reasonable levels.
Forcing firms not to implement end-to-end encryption is forcing firms to implement flaws in their encryption software.
Which is why they're not pursuing it presumably.
> "At one point or more of that process, access to the encrypted communication is essential for intelligence and law enforcement," he said.
> "If there are encryption keys then those encryption keys have to be put at the disposal of the authorities."
The last part of the quote muddies the water a bit. Maybe they are interested in cooperation from companies with control of endpoint software (Apple, Google, Microsoft) to extract the keys?
Giving governments the power to perform mass interception and decryption of communication doesn't seem like a sensible way to fight terrorists, even if they say it's only to be used on suspects. Terrorist attacks aren't increasing because the "bad guys" suddenly got their hands on a copy of OpenSSL.
In the case of the most recent attacks, these people were let into the country voluntarily.
The prime minister, Malcolm Turnbull, is a noted user of Signal...
One day these stories will be written by and about people who have a clue. One day...
Brandis said warrants should be "sufficiently strong to require companies, if need be, to assist in response to a warrant to assist law enforcement or intelligence to decrypt a communication". A company which makes end-to-end encryption will not be able to assist law enforcement in this way unless they make a backdoor.
Conclusion: Brandis either doesn't know what a backdoor is, or he does know but realises that "backdoor" has negative connotations so he is pretending that that's not what it is. Both possibilities are pretty reprehensible in my opinion.
They think we're stupid like criminals too.
I want to hear more on this, because as far as reporting on terrorist attacks since 2013 has gone, the use of encrypted messaging systems seems conspicuously absent.
However, ISIS overseas is different. They or an allied group have offensive cyber capability and an appreciation of opsec. They are known to have taken advantage of weak opposition opsec for disinformation and tactical advantage (hacking opposition command cellular devices via phishing and social engineering to get tactical planning information). I don't know if they use good encrypted comms, but it seems likely.
Would these skills migrate back to be used by local wannabe terrorists? I doubt it.
So the bigger problem is not deliberate use but accidental. If they were all using iMessage by default, it is going to be much harder: no easy metadata, no mass scanning of SMS. You're left with physical surveillance, phone calls, rough cell-location data, and HUMINT. If you can't get their Facebook Messenger calls or messages, you are stumped.
Of course, this is as intended -- no effective mass surveillance. But how do we enable supervised targeted electronic surveillance without it getting out of control?
If FB/Google gave in to a CALEA-type enforcement regime, there would be no limits on how much government surveillance would occur - at a level the Stasi would drool over.
They know what they're doing, and sooner or later they'll go for a "more equal than others" approach to encryption.
End-to-end encryption means only the endpoints (the users) have the data.
The AFP can't ask the FBI to ask Facebook to ask WhatsApp to hand over the content of your messages if WhatsApp doesn't have the content.
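A toy sketch of why that is: the relay only ever sees ciphertext, so there is nothing useful to hand over. The `RelayServer` class and the XOR "cipher" are invented for illustration; this is not any real protocol.

```python
# Toy model: an end-to-end encrypted message passing through a relay.
# The relay keeps a copy of everything it sees - and it's all ciphertext.
import os

class RelayServer:
    def __init__(self):
        self.stored = []           # everything the server could ever hand over
    def relay(self, blob: bytes) -> bytes:
        self.stored.append(blob)   # "compliance" copy
        return blob

shared_key = os.urandom(16)        # known only to the two endpoints

def e2e_encrypt(msg: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, shared_key))  # stand-in cipher

def e2e_decrypt(ct: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, shared_key))

server = RelayServer()
delivered = server.relay(e2e_encrypt(b"hello"))
assert e2e_decrypt(delivered) == b"hello"   # the endpoint can read it
assert server.stored[0] != b"hello"         # the server's copy is ciphertext
```

Serve the relay with any number of warrants: absent the endpoint keys, its stored blobs stay opaque.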
What the proposal seems to concentrate on is endpoints, where plaintext inevitably exists, and legal protocols for accessing it.
OTOH any sane implementation would only generate plaintext for display purposes, and would clear the RAM as soon as display (or input) is done, so finding the plaintext anywhere may be honestly impossible. At least, without tampering with the software on either end.
If the legal framework is laid out, government can tell Google or Apple (or phone vendor) to push a system-level update. It is trivial for both to push code that can run without any restrictions, have full access to screen, audio, camera and network.
Not sure why they bother, though. Wasn't it said many times that almost every baseband module is already a black box with the possibility of undetectable access to the main CPU/memory?
This is a famous interview he gave which shows how little he understands about the concept of metadata, and it is mandatory viewing for all who are not familiar with him:
https://www.youtube.com/watch?v=Hw1ryLGs2ws
His utter inability to understand the issues he is legislating on is disturbing.
"What people are viewing on the internet is not going to get caught ... What people are viewing on the internet while they surf is not going to get caught. What will get caught is the web address".
The legislation ended up retaining the IP address that you visit, but not the host or URL. I suspect this is the distinction he was trying to make, but nevertheless, it is still disturbing.
Is it even possible to solve both these problems at once in a way which preserves the freedom of the net and doesn't involve some crippling PRC style regulation?
https://www.theguardian.com/technology/2017/jan/13/whatsapp-...
Most in the crypto community seem to have sided with WhatsApp at the time, but I wonder if they were taken for fools, too, by buying WhatsApp's argument.
If I were to implement a backdoor, then implementing it as a "feature" that "makes sense" is definitely the way I'd go, especially if my app were to get a lot of attention. That way I won't have to hide it (much) or worry about it getting discovered because I could just "explain away" the critiques.
For example with PGP/GPG, if some "magical" approach was added so messages could be intercepted and then decrypted and read by intelligence/law-enforcement/(etc), it seems feasible those same people may be able to spoof the sender's signature.
eg create falsely signed, encrypted messages that verify as being from the real sender. Extremely good for blackmail/framing/similar. :(
It would depend upon the capabilities of the "magical" implementation approach of course, but it fits the scenario. PGP/GPG is regarded as pretty strong, but SSL/TLS certificates already aren't, so they seem much more prone to this.
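The framing concern is easiest to see with symmetric authentication: whoever holds a copy of the key can mint messages that verify as genuine. This is a hedged sketch using an HMAC (real PGP signatures are asymmetric, so the analogue there is escrowing the signing key itself; the names below are illustrative):

```python
# Sketch: if an authentication key is escrowed, the escrow holder can
# forge messages indistinguishable from the real sender's.
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), tag)

alice_key = b"alice-secret-key"   # imagine an escrowed copy held by authorities

genuine = (b"see you tomorrow", sign(alice_key, b"see you tomorrow"))
# The escrow holder mints a message Alice never wrote:
forged = (b"transfer the funds", sign(alice_key, b"transfer the funds"))

assert verify(alice_key, *genuine)
assert verify(alice_key, *forged)   # verifies exactly like the genuine one
```

Nothing in the verification step can distinguish the forged message from Alice's own, which is precisely the blackmail/framing risk described above.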
Wikipedia says the United States Capitol/The White House was called "The Faculty of Law". The Pentagon was dubbed "The Faculty of Fine Arts". Atta codenamed the World Trade Center "The Faculty of Town Planning". I remember reading they had also used terms such as "birthday cake" and "candles".
I don't know if ASIO (and the US agencies pushing this agenda) are lazy or if they have some different agenda. Clearly this isn't a make-or-break issue in policing.
I happen to know he uses it quite extensively.
He's got to realise that any such agreement will inevitably end up being the lowest common denominator of what each of the nations think they can reasonably get away with legislating, which in this case probably means the US (with the strongest device-maker and social-network lobby) will drive what is possible.
Worrying.