https://puri.sm/products/librem-5/
They're making good progress and I can't wait to be able to update my handheld device with mainline pieces for as long as anyone who still uses one cares to update it. Currently my Samsung Android device is at the December 2018 patch level, and there's nothing I can do about it.
AOSP is completely open source. Hardware and firmware are a much different story, but that applies to the device you're promoting just as much...
> They're making good progress and I can't wait to be able to update my handheld device with mainline pieces for as long as anyone who still uses one cares to update it. Currently my Samsung Android device is at the December 2018 patch level, and there's nothing I can do about it.
What's the relevance?
It's also quite important to note that the Android patch level includes firmware. Purism doesn't ship firmware updates in PureOS as part of it being 'pure', so you would be stuck with the equivalent of an ancient patch level, at least with the stock OS. You're also no less dependent on the companies releasing firmware updates.
You're also bringing up hardware as an alternative to an OS that would run on that same hardware, which makes the point hard to understand. The Librem 5 will be a hardware target for GrapheneOS to consider. It will be missing many of the core hardware security and robustness features, so it couldn't be a tier 1 target, but it could still be unofficially or even officially supported.
If it doesn't depend on any out-of-tree kernel drivers, that will apply to Android and GrapheneOS too. I'm not sure why you're bringing it up as something distinct.
This is only true in the most technical way possible. Yes, AOSP is open source -- but none of the standard applications on any stock version of Android use AOSP anymore. The calendar and other applications are all proprietary. The AOSP versions feel like they stopped being developed in 2010 -- which coincidentally is when Google started developing proprietary replacements.
I use LineageOS (and have for a while), which is mostly AOSP, and the applications from AOSP today feel older than the ones I used on Google's Android ~5 years ago. As a simple example, Google's Calendar application can create very complicated recurring events while the AOSP one is much dumber.
> Hardware and firmware is a much different story, but that applies to the device you're promoting just as much...
The Librem 5 hardware was specifically chosen so that it contains no firmware blobs and all the firmware is free software and upstream in Linux. There is a caveat for the baseband, but that's because it's not legal in most countries to sell or use baseband hardware that is free software (unless the user is licensed and even then it's non-trivial).
https://android-developers.googleblog.com/2019/05/queue-hard...
As such I have a very hard time believing that the Librem will be as secure as modern Android.
Eventually you're running something big, with bug after bug found every month and an attack surface that includes the local filesystem and the network. At that point the buzzwords make no difference.
This is only true initially, presumably due to time and funding constraints. From the FAQ (https://puri.sm/faq/):
> What are your plans for tamper-proofing the Librem 5?
> We hope to have a version of PureBoot available for the Librem 5 for users who want to verify it with a Librem Key. We cannot commit to it being available at launch but it’s a goal.
A PureBoot description can be found at (https://puri.sm/posts/pureboot-the-high-security-boot-proces...).
Still, even on Linux, you can set up SELinux or AppArmor to harden your system as much as possible, run untrusted applications as a different user, compile your own hardened kernel, and so on. It's going to be a less secure system for casual users, but it'll allow power users to secure their system as much as they want (you can do that on Android as well, but it's more difficult).
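For the MAC route mentioned above, a minimal AppArmor profile gives an idea of what confining a single untrusted application looks like. The path and application name here are hypothetical, and a real profile would need per-app tuning:

```
# /etc/apparmor.d/usr.bin.untrusted-app -- hypothetical example profile
#include <tunables/global>

/usr/bin/untrusted-app {
  #include <abstractions/base>

  # Deny all access to the user's home directory
  deny @{HOME}/** rwklm,

  # Allow reading the app's own installed files
  /usr/share/untrusted-app/** r,

  # Allow outbound TCP, nothing else network-wise
  network inet stream,
}
```

Loading it with `apparmor_parser -r` and checking `aa-status` would be the next steps; SELinux accomplishes the same with type enforcement policy instead of path-based rules.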
Librem is going the right way, and there are a handful of other companies working along the same path. Necunos is another I heard of as well: https://necunos.com/community/
Just dump all the proprietary Google add-ons and enjoy the F-Droid app store. You will have amazing battery life, fewer distractions, fewer ads, and a lot more security and privacy.
I enjoyed this with CopperheadOS (the GrapheneOS predecessor) on a Nexus 5X until the project folded. Google stopped supporting the Nexus 5X with updates a few months later.
Have you checked whether there's a LineageOS build for your device? https://wiki.lineageos.org/devices/#samsung (Darker links indicate a build is maintained and available.)
My Samsung Galaxy S6 (March 2018 patch level) wasn't supported the last few times I checked, but older Galaxy models were.
I could carry a second device for personal use, but am unlikely to.
- Decent hardware available at competitive price
-- While I could make do with some degraded performance for a truly open phone concept, most people would not, especially if the price point is similar to, or higher than, established closed-platform brands
- Must have apps available - needed for wide acceptance
-- My personal examples of must have apps:
-- BankID (Swedish e-id, needed for banks, taxes, government sites, payments)
-- Swish - Swedish app for personal micro transactions
-- Public transportation apps (tickets/timetable)
-- Bank application
-- Signal
Without these apps, an open platform phone would be next to useless to me. And I am a big proponent of open platforms.
And looking at how reluctant BankID was to even support older Android versions, I'm not optimistic about them adding a completely new platform to support.
I know people who were forced to upgrade from "old" phones because BankID no longer supported their Android version, and their phones would not get newer Android versions.
It's good that the tech person is moving on, but Android doesn't seem a great starting point if privacy&security are the top priorities (as opposed to remaining captive in the Android camp, with some belief that you're a bit more secure than default).
Having a massive monolithic kernel at the core of the operating system written entirely in a memory unsafe language is obviously a huge problem, and will need to be addressed over the long term. It's an increasingly blatant weakness, and the enormous amount of ongoing work that has gone into userspace doesn't translate well to the kernel. Developing increasingly sophisticated mitigations helps a bit, but it can't solve the fundamental issues with the choice of language, architecture or development process. Linux ultimately isn't a viable choice for creating a system with decent security. However, Linux compatibility is part of Android compatibility and is essential. That means the Linux kernel either has to be kept around in virtual machines or replaced with a compatibility layer on top of a microkernel. https://github.com/google/gvisor is an existing project which could be ported to arm64, expanded as needed and adjusted to run on top of another kernel, but it doesn't need to be the starting point. It's usually a good idea to start from an existing base like this and try to land everything needed upstream though, rather than burning far more resources starting from scratch and losing out on the shared benefits from collaboration with a larger community.
Using virtualization is a nearer term goal, with a compatibility layer as a much longer term aspiration. There's not much written about the roadmap on the linked page, but this stuff is actually mentioned, and I'd recommend checking it out before wrongly assuming that the goal is simply having a hardened fork of AOSP. There has already been substantial work on experimenting with integrating virtualization for app containment, although containing user profiles would be another approach and potentially more useful.
https://web.archive.org/web/20111130031013/http://www.ok-lab...
CompSci folks have been doing it, too. Here's a paper describing the design style:
https://os.inf.tu-dresden.de/papers_ps/nizza.pdf
Genode OS Framework is the only one I know of building something like this in FOSS. The rest, esp. those used in phones, were commercial. One might port Android to something like it or seL4 with dynamic resource management. Rewrite drivers or anything that's moved to kernel mode for performance in a safe language, or throw lots of verification tooling at it.
Stop spreading FUD. In this conversation we are talking about phones filled with bloatware that spies on the user at every instant, and you nitpick about memory safety in the kernel.
I had summarized public information and questions as the implosion was happening, trying to be impartial and not voice my guesses as to what actually went on, and concluded that everyone should be given the benefit of the doubt, given all the unknowns.
But that impartiality could be upsetting to one of the parties (by bringing up things that, say, a good guy maybe can't talk about), and maybe this isn't the time, as people are recovering and moving forward.
At the same time, HN people might need to know some lessons from this noteworthy incident in development and privacy&security, since it should inform how we think about how other projects and startups can fail.
For example: a prominent security product can suddenly be compromised (e.g., in the sense of a trusted security update mechanism being broken, or a change in who controls updates), or a business partner can force you out, along with your intentions/stewardship of a product that you think depends on you being in the loop to not be evil, or a different business partner can delete very important keys, or (vaguely) this is another way it can be difficult to reconcile privacy&security goals with business ones.
These failures might be things we briefly consider as hypotheticals when planning or evaluating, but they do happen, in real life, on prominent efforts.
What happened with CopperheadOS was unfortunate [2]. I hope Daniel [3] is able to work on GrapheneOS on his own terms [4]. The work that was done garnered a lot of following, and there's hope, given his exploits in the past, that he'll be able to steer this non-profit to heights that the industry leaders behind SilentCircle [5] and CyanogenMod failed to reach.
I'll sure be following the project from afar and rooting for its success.
Good luck Daniel.
[0] https://www.reddit.com/r/CopperheadOS/comments/8qdnn3/goodby...
[1] https://news.ycombinator.com/item?id=9551937#9552769
[2] https://news.ycombinator.com/item?id=17289536
[3] https://news.ycombinator.com/user?id=strcat
It's probably not wise to post this, since I never talk about it publicly, he might not like it, and this thread should be about GrapheneOS, not Rust or myself. But with the mention of his Rust involvement and the history of CopperheadOS, I feel compelled at the moment to add some context and give him props.
He is an exceptionally skilled developer.
Some of his contributions to Rust were crucial. Of particular note, he re-designed Rust iterators to their current form. But he added to Rust so much more, both with his ideas and code.
And from what I recall he was in high school at the time. Amazing.
I happened to be the Rust team lead during much of his time contributing to the project, and a good deal of the blame for his departure belongs to me. It was a difficult learning experience for everyone.
It is totally fair to say that Rust would not be what it is today, both technically and socially, without him.
I was happy to see him rebound with CopperheadOS, and again here with GrapheneOS.
Good luck, Daniel.
(edited to remove some Rust cheerleading)
I wanted him to stay a little longer around the Rust 1.0 era, because I thought as many warts as possible should have been fixed before the backward compatibility guarantee was made, but he left too early.
Didn't know about the CopperheadOS incident, too bad such a situation happened to him. Good to see it's kinda resolved now.
It seemed like he got royally screwed just given the amount of time and effort he put into the project, easily shown just from his activity and responses during its life cycle.
I find it interesting that people bring up my time contributing to Rust as a negative thing, largely due to my former business partner misrepresenting it and falsely claiming I was kicked out of the project. To be clear, I don't think you're doing it maliciously, but it's quite weird to contribute so much of my time to an open source project and then have that used against me, as if working on a set of open source projects nearly full time for a year as a volunteer was a terrible thing to do. It's a large part of why I now avoid doing work without compensation. If people are going to value my work so little, then I'm at least going to get paid for it.
I left Rust on my own accord because I wasn't enjoying it anymore and I'd determined that it was extremely unlikely that it would turn into a career which is part of why I'd persisted long past the point that I was enjoying it. It became harder and harder to accomplish anything of substantial value as it moved towards stability. I had strong opinions on many of the topics and to get anything done as an outsider I had to make strong arguments and be incredibly persistent, which rubbed some people the wrong way. It was also very rough at that time being an outsider and trying to have a significant influence on it, especially when I disagreed in many areas with the core developers. The way things were done drastically changed later on for the better. I left the project for the same reason a few people didn't like my involvement in it. I was getting burned out dealing with them and they were getting burned out dealing with me. It certainly went both ways and the vast majority of the people involved in the project didn't have issues with me. Out of thousands of people, there were only a couple that I truly didn't get along with and literally only one person where that persists today (and believe me, I'm not the only person who doesn't click with them).
I've certainly evolved how I communicate with people online since then. I still take serious issue with people bending the truth and being dishonest / misleading, which can make arguments very heated if people aren't trying to debate based on the facts. There's a tiny minority of people that I absolutely don't get along with because they'll keep bending the truth and I'll keep pointing out that they're doing it, which they can interpret as an insult. In the context of a debate over the design of a project where the stakes are high, I'll choose not to be very diplomatic when the alternative is letting someone walk all over me with false claims. It's too tiring refuting things over and over and having facts treated as subjective things rather than being able to agree upon a set of facts and argue things based on their merits. I think the world would be a better place if people didn't tolerate this so much. I was no good at playing politics and choosing my battles carefully which played a big part in it too.
The objective truth is that I decided to leave the Rust project and community, and I removed myself as a contributor from the repository. If I recall correctly, I think someone misinterpreted what happened and posted a thread on /r/rust incredibly angry because they thought I was kicked out of the project. The people who saw the thread but weren't aware of the details assumed that it actually happened and then had a massive fight with each other about whether something that didn't happen was justified or not. The reality is that it didn't happen in the first place.
I also seriously doubt that I would be kicked out of any project for occasionally being a bit abrasive in arguments. It would be a bit ridiculous for a project to ban people from contributing for having that kind of personality or not being neurotypical. It's possible that they would have asked me to start being less abrasive in debates, sure, but they hadn't. I definitely don't think I was always fun to work with, particularly once things had soured with Mozilla, but I don't think it's entirely fair to put all the blame on me for that. I was upset about what had happened overall and that definitely influenced how I participated.
My experiences with Rust and other projects are what led to me making sure that I'd own and control the projects that I'd be heavily working on in the future so I wouldn't need to spend so much of my time debating and playing politics. When I co-founded Copperhead, I made sure that it was explicitly agreed that my open source work would remain under my control despite the company sponsoring it. It was explicit that I would own and control the OS development project. It's worth noting that there were 3 co-founders, and 2 of us believed in open source and the company building value around it rather than by selling it. Unfortunately, the 3rd co-founder left early on before shares were even divided up, and I ended up owning the company 50/50 with a narcissistic sociopath who ended up totally screwing me over. Internally, there was conflict and dysfunction long before it became public. I wanted to be free of that company for a long time, but I couldn't leave because I couldn't just abandon the people using the project and it had become too tied to the company. Eventually, my business partner decided to throw away all the agreements and just try to take over the project with threats / ultimatums. I don't think it was at all rational for him to do that. It wasn't at all in his best interest even from an entirely selfish point of view. There's absolutely no way I was going to turn over ownership / control of my project to someone that by then I considered highly untrustworthy and downright dangerous. Unfortunately, they had set up everything to be able to completely screw me over by tricking me at various points and being very strategic about how the domain, infrastructure, etc. was set up. It ended up not mattering at all that I owned 50% of the shares because they just ignored my rights as a shareholder and banked on me not wanting to spend a huge amount of money fighting them in court.
GrapheneOS is the direct continuation of my work on this, which began before Copperhead became involved in it. It had existed before it was CopperheadOS. I've learned a lot of lessons from the experience there. One of the biggest mistakes was being tricked into not being a director early on, but that was also before the stakes had become so high. It also really shouldn't have mattered to the extent that it did if the Copperhead lawyer had been at all competent and truly looked after the interest of the company instead of acting solely on behalf of my business partner. Anyway, I'd rather not be directly involved in businesses at all. I've had almost nothing but bad experiences with governments, businesses, etc. including things far worse than the stuff with Copperhead.
And there's also a useful incremental path for evolution, that starts with a familiar GNU/Linux and moves gradually towards handheld tweaks (UI, power, devices, apps).
It's important to be upfront that PostmarketOS is not yet viable as a daily driver, or people will feel they wasted their time looking at it. What it really needs is programmers, like the earlier Linux ones, who will power through the pain of getting things working well, and stick with it for months, as a labor of love.
> Details on the roadmap of the project will be posted on the site in the near future. In the long term, it aims to move beyond a hardened fork of the Android Open Source Project. Achieving the goals requires moving away from relying on the Linux kernel as the core of the OS and foundation of the security model. It needs to move towards a microkernel-based model with a Linux compatibility layer, with many stepping stones leading towards that goal including adopting virtualization-based isolation.
Essentially, the goal for the project is for it to be an OS compatible with Android apps, using the Android Open Source Project software stack to run them, but the underlying base can become whatever is most suited to the task. For now, the most practical approach is using virtualization to reinforce the app sandbox and user profiles. Eventually, the virtual machines can drop having their own Linux kernels (see gVisor as an example of this). In the very long term, the Linux kernel at the core of the OS could eventually go away too.
I'd recommend checking out the standalone projects like https://github.com/GrapheneOS/hardened_malloc and https://github.com/GrapheneOS/Auditor for an idea of what the project is focused on doing. The hardened_malloc implementation supports other operating systems, as does Auditor, which supports verifying the stock OS on many mobile devices (they need to be added one-by-one to the internal database based on users submitting attestation samples with the app) and CalyxOS in addition to GrapheneOS.
The OS project itself is still in the early stage of being revived, porting over past work and getting the basics done. It's very focused on infrastructure and low-level work right now. Working on the higher-level, more user-facing features and bundling various apps, etc. is not a priority yet. It doesn't even bundle F-Droid, because it's not quite at the point where bundling any third party apps makes sense, and it still needs to be determined how best to approach that. A lot of these things will also be done in collaboration with other projects like CalyxOS, with GrapheneOS focusing more on low-level security hardening. CalyxOS is primarily working on areas like the backup service implementation and various other higher-level services, and a lot of this will be used by GrapheneOS too.
A microkernel with a Linux virtualization layer, able to run Linux executables as if they were native.
My hunch is that in the long term, Google will probably use Zircon as the 'first-level' kernel, and run Android apps using some emulation layer.
Maybe it could be the answer for what you are trying to accomplish, without having to create a whole microkernel OS from scratch, while at the same time benefiting from what's already there.
I bet that with such a thing in place, the Linux kernel could totally go away, with only an emulation layer in place if you want.
So in order to get that is more secure and more independent from Google I have to buy a Google phone?
A PC with UEFI (except for a few which Microsoft locked down) lets you turn off Secure Boot, install your own keys, and turn it back on. So you actively delete the stock keys that boot stock Microsoft/Ubuntu/Red Hat systems, then custom-sign your GRUB bootloader or UEFI-stub kernel, add that cert to Secure Boot, and turn it back on.
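As a rough sketch of that key-enrollment workflow (assuming the sbsigntools and efitools packages are installed; exact enrollment steps vary by firmware, so the commented commands are illustrative only):

```shell
# Generate your own Platform Key (PK) pair
openssl req -new -x509 -newkey rsa:2048 -nodes -sha256 -days 3650 \
    -subj "/CN=My Platform Key/" -keyout PK.key -out PK.crt

# Convert to the EFI signature list format the firmware expects,
# then enroll it (efitools; requires Secure Boot to be in setup mode):
#   cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
#   sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
#   efi-updatevar -f PK.auth PK

# Sign your GRUB build or UEFI-stub kernel with the same key (sbsigntools):
#   sbsign --key PK.key --cert PK.crt --output vmlinuz.signed vmlinuz
```

After enrolling your own PK (plus KEK/db entries derived from it) and re-enabling Secure Boot, only images signed with your key will boot.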
You can argue device security all day long, but if manufacturers can't update Android security patch sets as they come out, then you have gaps in your device security anyway.
Google controls AOSP. They could literally force manufacturers to be compliant, have UEFI or devicetree as a standard, demand every device allow a stock reinstall just like Windows, and even create shims to fix the broken Linux driver ABI. But there is more money in planned obsolescence. Gotta throw out that phone after two years and just buy a new one.
Yes, people find it ironic, but that's how it is. Google's hardware is always better than the competition's for security: not just in phones, but also look at Chromebooks. However, not running proprietary Google software is left as an exercise for the reader.
See https://grapheneos.org/#early-stage-of-development and https://grapheneos.org/#device-support. There's barely any content on the site, since it's so new, but this is covered pretty well. It does support other devices already. There's a difference between that and deciding to do all the work to provide official releases with seamless over-the-air updates covering all firmware, etc. along with porting all device-specific hardening work.
> So in order to get that is more secure and more independent from Google I have to buy a Google phone?
The goal is primarily implementing privacy and security improvements. It doesn't include Google services for privacy reasons, but that's not the purpose of the project. A project aiming to provide AOSP with the baseline privacy/security intact and work to fill in the gaps left by not having Play Services would be useful, but that's a tiny subset of what GrapheneOS is about. It's primarily about the privacy/security research and development work.
You can see that the GrapheneOS Auditor project supports a large range of devices already:
https://attestation.app/about#device-support
That's because it's quite easy to add support for each device one-by-one once users submit attestation samples with the app. The main list is for devices with the stock OS. It also supports CalyxOS and GrapheneOS on all their supported devices and will happily include other operating systems with verified boot and the security model intact. There are now a bunch of devices supporting verified boot with alternate operating systems.
The obvious way to get a phone not running T-Mobile spyware is to not buy a phone from T-Mobile.
Not trying to be snarky here, I have one of these phones too. Though if you happen to have a T-Mobile Oneplus phone like I do it is possible to flash the international ROM and replace the T-Mobile spyware with Chinese spyware.
You can buy a contactless smartcard for about $15 and install this on it: https://github.com/tsenger/CCU2F
Android can give you privacy and enough security for most people. This can't add much more as long as it's running on the same devices.
This is a great effort and I support it, but let's not imagine this will make our phones that much more secure.
Components with DMA can be contained by IOMMU, and that's the industry standard today. However, you seem to be implying that backdoors are being inserted into non-CPU SoC components, and it's very hard to understand the threat model you're applying to this. Why would there even be a backdoor inserted into an SoC component like the image processor, which is contained by the IOMMU, rather than the CPU? These SoC components aren't third party components. They're on the same die as the CPU and come with it. That doesn't mean they can freely access all memory... but it does mean that supply chain attacks targeting them would generally be able to target the CPU instead.
If a hardware component is compromised, an attacker would target the driver and gain code execution in the Linux kernel via an exploit. The Linux kernel is a weak target (monolithic - no internal security boundaries, fully written in a memory unsafe language) and drivers are rarely well hardened against attacks from hardware since developers have a tendency to trust it and to not apply an adversarial model towards it as they do with userspace. They don't need unrestricted DMA access, and proper IOMMU setup keeps them from having that. Having DMA does not mean having full control over all memory. Not having DMA doesn't mean that the component is well isolated. Whether or not the component is on the same die is totally orthogonal to whether it has DMA access. These are common misconceptions, and are being abused by dishonest marketing to trick people.
> Android can give you privacy and enough security for most people.
Some of that is due to the improvements landed upstream based on the work in this project.
> This can't add much more as long as its running on the same devices.
I don't agree with that at all. It can't improve the security of firmware directly, but it can certainly improve the isolation of it by auditing and improving IOMMU configuration along with hardening the drivers. It also won't be supporting devices without decent IOMMU support and firmware security updates. The project has also reported various firmware security issues to the relevant companies over the years of the project, so that's an indirect way of improving them.
A large portion of the project will also be on app layer projects like https://github.com/GrapheneOS/Auditor usable on the stock OS and other operating systems. Auditor / AttestationServer support the stock OS on a bunch of devices, along with CalyxOS and GrapheneOS. Other apps will generally be more portable, but in this case it has to have a database of the verified boot key fingerprints and other device properties. The verified boot key is the only information included in the signed hardware attestation data that it can use to distinguish between devices which it needs to do in order to show the device model and apply different checks based on the device. That's why devices need to be added to Auditor one-by-one based on users submitting sample attestations with the app.
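A toy sketch of that fingerprint-keyed lookup (the device names and the shape of the table are invented for illustration; the first fingerprint is simply the SHA-256 of empty input so the example is self-checking):

```shell
# Map a verified boot key fingerprint to a device model, the way an
# attestation database might. Real entries come from user-submitted
# attestation samples; these are made up.
lookup_device() {
  case "$1" in
    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855)
      echo "Example Device A" ;;
    *)
      echo "unknown (not in database)" ;;
  esac
}

# Fingerprint of empty input, standing in for a real verified boot key
fp="$(printf '' | sha256sum | cut -d' ' -f1)"
lookup_device "$fp"   # prints: Example Device A
```

The point is that the fingerprint is the only stable identifier in the signed attestation data, which is why each device has to be added to the table one by one.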
I don't disagree with your overall post. I do want to add that there's a good reason to not put the backdoor in the CPU: it's the main place they'll look, with plenty of people capable of spotting it. The guy that taught me about hardware subversion years ago preferred hiding stuff in analog parts of mixed-signal ASICs. He said digital people neither saw it nor understood it. He and others taught me about how the two can interact in invisible ways where analog or RF portions might pick up leaks. So, deniability is maximum if it's some kind of analog or RF part of a chip. He claimed to have never found backdoors, but that he and others used this for I.P. obfuscation a lot.
I do like the IOMMU and firmware work. There's a lot of custom I.P. to build before being competitive with one of the high-end SoCs. One thing I considered about trying to make an open phone is whether a company with money could just pay for Snapdragon to be integrated with RISC-V cores. Modify the RISC-V core to use microcode for security updates and product enhancements. Put security barriers in key places so the Snapdragon I.P. is a little less dangerous or can even be powered off component by component. Then, if the agreement gets more data on the hardware, use that with secure development practices to make robust drivers. Include a method for secure boot and update that still allows the user to put their own stuff on the phone if they choose.
What you think?
EDIT: In case it wasn't clear, I know there's stuff like IOMMU's in Snapdragon. I'd just prefer an independent, security-focused company to be making those components. Sort of a check against incompetence or malice on Snapdragon's end.
This is misleading.
> A maliciously inserted backdoor designed to be stealthy would be indistinguishable from those
This is a false equivalence. In most closed source systems the vendor does not need to put effort into designing a stealthy backdoor.
It just adds tons of code that spies on the user and calls it features. The amount of phoning home done by Android, iOS and Windows is staggering.
Not to mention the ability to push an update that can contain a backdoor on a specific, targeted device without the users being aware of it.
This would work in China though, since they don't have the Play Store in the first place.
Lastly, I wonder how this will do over time considering Fuchsia.
GrapheneOS includes sub-projects including standalone projects like https://github.com/GrapheneOS/Auditor and https://github.com/GrapheneOS/hardened_malloc that are portable to other operating systems. This also applies to a lot of work that's under active development and not yet published as part of the stable releases.
It intentionally doesn't stick to the Compatibility Definition Document / Compatibility Test Suite requirements required to be Android, so it can't be referred to as Android, but rather it's an OS with Android app compatibility. It preserves what's actually needed for compatibility in practice, while not being strictly bound by those requirements. The intentional deviations from these are documented, and there are a bunch of them.
> Lastly, I wonder how this will do over time considering Fuchsia.
If it ends up shipping as a replacement for the core OS, with Android running in a virtual machine or on top of a compatibility layer like gVisor, that would just mean that there's a better base to build on than before. All of the work done by the project would still be relevant in a future like that. I'm not so sure that's truly going to happen though.
Fair enough though.
>If it ends up shipping as a replacement for the core OS, with Android running in a virtual machine or on top of a compatibility layer like gVisor, that would just mean that there's a better base to build on than before. All of the work done by the project would still be relevant in a future like that. I'm not so sure that's truly going to happen though.
I see.
microG is a (very welcome!) band-aid for the fact that the Android ecosystem is critically dependent on a proprietary piece of software called Play Services.
PMO is a different, libre, OS and ecosystem that doesn't have that problem to begin with, since it is truly Linux (as opposed to AOSP) and truly free (as opposed to Android's "can read most code but Google holds all the cards")
The whole point behind it is working on a new mobile OS, while providing Android app compatibility by using the Android Open Source Project. As the page states, the long-term goal is to turn AOSP into an application layer while moving away from entirely depending on Linux for low-level security, since it's a huge liability / weakness. It already isn't 'Android' since it makes changes deviating from what's required to be Android (the Compatibility Definition Document and Compatibility Test Suite). The goal is practical compatibility with Android applications rather than conforming to what's required to be Android.
> All I can say is good luck with various firmware, custom services and drivers.
Hardware, firmware and drivers aren't an OS specific issue beyond drivers being tied to Linux. There's barely any content on the site yet, but it does cover how important it's going to be to make careful choices about which hardware to support in the device support section. It talks about how much of the privacy and security is tied to hardware capabilities and security support.
Seriously though, I hear Apple is super great on privacy now as long as you take their word for it and not their track record of being a member of PRISM https://en.wikipedia.org/wiki/PRISM_%28surveillance_program%... , etc
There were consequences to the US passing draconian surveillance laws like the Patriot Act and the updates to it. Companies / organizations and individuals based in the US are subject to those laws. Many other countries have similar or even more oppressive laws when it comes to these things.
A company being based in for example France doesn't mean that the same kind of things don't apply.
If you don't want data to be subject to warrants requesting the information from companies, you need to either avoid having your data there or make sure it's encrypted with a key that the company doesn't have. End-to-end encryption is important. The companies could also have the data exposed by a malicious insider or a data breach. Many countries will happily demand the key from you and then treat you as a criminal for not turning it over, but that doesn't work for mass surveillance.
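The "key the company doesn't have" point can be as simple as client-side symmetric encryption before anything is uploaded. A minimal sketch with openssl (a real setup would use a proper tool like GnuPG or age, and never a passphrase on the command line):

```shell
# Encrypt locally before handing the file to any cloud service
printf 'secret notes' > notes.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:correct-horse \
    -in notes.txt -out notes.txt.enc

# The service only ever sees notes.txt.enc; only someone holding the
# passphrase can recover the plaintext
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:correct-horse \
    -in notes.txt.enc -out recovered.txt
```

A warrant served on the storage provider then yields only ciphertext, which is exactly the property end-to-end encrypted services provide automatically.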
https://ewwlo.xyz/evil https://infosec-handbook.eu/blog/e-foundation-first-look/