It is still on the HN new feed if folks want to discuss technical details, but the reason I want to mention it here too is that key management is very hard when the goal is resisting surveillance. The current PKI design places too much trust in third-party certificate authorities (meaning a government can easily pull off man-in-the-middle attacks with the help of network providers if it wants to, even without your keys), and because each negotiation occurs without the context of past ones, there is no way to detect such behavior beyond "the CA said watch out" or "this certificate isn't even plausible." Of course, you can solve this by enforcing that everyone on your network uses the same local CA that you control, but that breaks as soon as you want to talk to someone outside.
Building a PKI that can resist such efforts is not trivial, and it involves challenging our assumptions. Until we do so, however, we will run into all kinds of problems. I may be paranoid, but this seems like a good time to be paranoid.
One of the things SSH gets right is that it takes a diachronic approach to key validation. We should be building this in everywhere and alerting on key changes, while providing a way for keys to be rotated safely and securely without triggering spurious errors.
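The diachronic ("trust on first use") idea can be sketched in a few lines: pin the first key you see for a host, then alert whenever a later connection presents a different one. This is a minimal illustration, not how SSH actually stores known_hosts; the file name and return values are hypothetical.

```python
import hashlib
import json
import os

STORE = "known_keys.json"  # hypothetical local trust store


def fingerprint(key_bytes: bytes) -> str:
    """SHA-256 fingerprint of a raw public key."""
    return hashlib.sha256(key_bytes).hexdigest()


def check_key(host: str, key_bytes: bytes) -> str:
    """Trust-on-first-use: pin the first key seen per host, flag changes."""
    store = json.load(open(STORE)) if os.path.exists(STORE) else {}
    fp = fingerprint(key_bytes)
    if host not in store:
        store[host] = fp  # first contact: pin the key
        with open(STORE, "w") as f:
            json.dump(store, f)
        return "pinned"
    # a changed key is either a legitimate rotation or a MITM; alert either way
    return "ok" if store[host] == fp else "changed"
```

A real system would layer key rotation on top of this (e.g. new keys signed by the old one), so that legitimate changes don't look identical to attacks.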
Lazy link: https://news.ycombinator.com/item?id=5844621
Oh man, that hurt... if only I had known a valid point against the "I've got nothing to hide" argument back then, as I do now...
Got a handy link to that or similar arguments?
"In some countries people can be persecuted for their religious beliefs, politics or sexuality. Their text messages could potentially reveal all of the above to the government and/or someone listening in on GSM traffic [1]. Those people could use an SMS encryption app to provide them with plausible deniability or, should their local law not have that notion, at least help them avoid automatic keyword detection."
[1] See, e.g., http://www.ti.bfh.ch/uploads/media/19_Vadym_Uvin.pdf.
Why Privacy Matters Even if You Have 'Nothing to Hide'
http://chronicle.com/article/Why-Privacy-Matters-Even-if/127...
The problem, to me, is that it's so obvious privacy matters even if you have "nothing to hide" that whenever someone confronts me with that argument I'm a bit dumbfounded :)
Major iOS SDK Limitation: Websites using HTML5 <video> tags will leak <video>-related DNS queries and data transfer outside of Tor. This includes YouTube, Vimeo, and any website using iOS-compatible HTML5 video. This is a behavior of the embedded QuickTime player and there is currently no known workaround. (h/t to josyw.)
iOS SDK Limitation: JavaScript cannot be disabled in the `UIWebView`, so script-based detection may identify your device even if User-Agent Spoofing is enabled.

iOS SDK Limitation: Related to the above, the HTML5 Geolocation API cannot be disabled. The browser will ask you for permission to access your location if a website requests it via the HTML5 Geolocation API. If you allow this, then said website will (obviously) know your actual current location.
That doesn't sound remotely safe to me.
However, we are willing to distribute our apps outside of the Play Store, but we need the following things first:
* A built in crash reporting solution with a web interface that allows us to visualize crashes and sort by app version, device type, etc. This is essential for producing stable software.
* A built in statistics gathering solution with a web interface that allows us to visualize aggregate numbers on device type, android version, and carriers for our users. This has been crucial in shaping support and development direction.
* A built in auto-update solution. Fully automatic upgrades won't be possible outside of Play Store, but we at least need something that will annoy the hell out of users until they upgrade. This is necessary for ensuring that new security features and bug fixes can be propagated quickly.
* A build system that allows us to easily turn these features on and off for Play and non-Play builds. Gradle should make this easier.
If you're interested in seeing Open Whisper Systems apps distributed outside of the Play Store, we'd welcome your contributions.
Crash reports and statistics are great, except if you explicitly want to NOT spy on your users.
Auto-updates are ok, but forced auto-updates take the user's autonomy away, and are only one step short of forced remote uninstalls (which are already documented with Google Play, so far only for malware).
A proper build system is great indeed, but has nothing to do with the distribution medium.
F-Droid security: because F-Droid builds all apps from source by default and signs them with its own keys, two problems appear:

a) you cannot easily switch between F-Droid builds and the maintainer's builds, and

b) you as the user need to trust both the author and F-Droid not to be evil, instead of just the author.
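Problem (a) comes down to signing certificates: Android only treats one APK as an upgrade of another if both are signed with the same certificate, so an F-Droid build and a maintainer build can never upgrade each other in place. A minimal sketch of the comparison (function names are hypothetical; real tooling like `apksigner verify` extracts the certificate from the APK for you):

```python
import hashlib


def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded signing cert, colon-separated hex."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))


def same_signer(cert_a: bytes, cert_b: bytes) -> bool:
    # Android's upgrade check: the signing certificates must match exactly,
    # so two builds from different signers can never replace one another.
    return cert_fingerprint(cert_a) == cert_fingerprint(cert_b)
```

This is also why publishing the fingerprint out-of-band (website, keyservers) matters: it lets users verify which of the two signers they are actually trusting.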
How do you go about verifying an arbitrary app is secure?
You have no such options with closed software.
- Tor
- Bitmessage
Bitmessage is especially interesting because it's not only encrypted and private; it also addresses the problem of spam and offers three kinds of messaging under the same interface: email-like messages, broadcast messages à la Twitter, and chan boards.
https://mike.tig.as/onionbrowser
Silent Circle as well, closed source or not, given that one of the founders is Phil Zimmermann.
You can exchange keys locally via QR code and it is available for both Android and iOS.
You can still sync your phonebook in whatsapp style. It's a good compromise between being secure and being pragmatic to me.
However, it's not open source, so you still have to trust the app (and the OS) not to send out your private key.
Or that it actually implements encryption correctly.
Closed source is really not an option. Let's get it right this time.
https://github.com/WhisperSystems/TextSecure-iOS/commits/mas...
These apps are for the paranoid; the paranoid may not be able to read the source code and verify such things themselves, but the fact that at least someone can certainly helps.