1. Make that a rule in the Play Store and ban apps that violate it
2. Make Android present convincing fake data to apps when permissions are denied
- no photos
- only specific photos (the system picker will appear to select them)
- all photos
That way the app still gets the permissions it asked for, but they're specifically what you want it to see.
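A minimal sketch of that "fake data on deny" idea, assuming a shim between the app and the photo store (all names here are hypothetical, not real Android APIs): the app makes the same query either way, and the result depends on what the user chose.

```python
# Hypothetical sketch of a "fake data on deny" photo-permission shim.
# None of these names correspond to real Android APIs.

FULL_LIBRARY = ["beach.jpg", "dog.jpg", "receipt.png"]

def query_photos(grant):
    """Return what the app sees, based on the user's choice.

    grant is one of:
      "all"              -> the real library
      ("some", selected) -> only what the user picked in the system picker
      "none"             -> a convincing empty library, not an error
    """
    if grant == "all":
        return list(FULL_LIBRARY)
    if isinstance(grant, tuple) and grant[0] == "some":
        return [p for p in FULL_LIBRARY if p in grant[1]]
    # Denied: the call still "succeeds" and the app simply sees no photos.
    return []
```

The point of the sketch is that every branch returns successfully, so the app has no error case it can use to refuse to work.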
This is actually a feature in MIUI, though I am not sure if it is part of the global release or only Xiaomi.eu (a modified version of the Chinese release). https://xiaomi.eu/community/attachments/screenshot_2022-10-2...
This particular issue didn’t get addressed until at least 8 months after TechCrunch exposed the practice. Where was Google?
Control of the App Store and Play Store should be carefully transferred to an independent organization, with an open governance model and a mission to serve consumer interests. It won’t be perfect but it would be a big step up.
If that can’t be done for whatever reason, find another way to disrupt the App Store. I struggle to think of why not doing so is a net good for society.
That reminds me: years ago I used to run a module called XPrivacy that did exactly this, though it required a rooted Android device. I haven't used it for a long time, but it seems to live on as XPrivacyLua.
There are new permissions for location: precise and approximate.
Apps are told "you were given coarse location", so they refuse to work.
Same for contacts. I refuse to use Truecaller because it "requires" contacts access. I don't want to grant it, so I am at an impasse.
Permissions should be transparent. As you said, if the user decides on system level to disable location, apps should be told "no signal" or "no location for now, carry on"
What about apps that aren't malicious? How can they tell that a user denied the permission, rather than simply having no data, so they can reasonably offer alternatives?
GrapheneOS can do this. I believe you can even choose to make only chosen photos visible to a certain app.
You've got three major systems for detecting policy violations: static analysis, dynamic analysis, and human interaction. You don't want too many false negatives or else you get bad media coverage complaining that you aren't doing enough to enforce policy. You don't want false positives or else you hurt benign users.
Static tooling will be able to detect specific kinds of ways that an app might refuse to work if you don't have a permission, but will struggle mightily in general. Dynamic analysis needs to be driven to the specific feature that triggers the behavior. Both will struggle if the app's response is something like returning to a home screen with a custom message. And good luck teaching one of these systems what "disabling unrelated functionality" looks like.
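To make the static-analysis difficulty concrete, here is a toy detector (entirely hypothetical, not any real tooling) that catches the blatant pattern of an app exiting when a permission check fails, but is blind to the subtle one of quietly disabling an unrelated feature:

```python
import re

# Toy static check over decompiled-ish source text. It catches the
# blatant "permission denied -> quit" pattern and nothing subtler.
CRASH_ON_DENY = re.compile(
    r"if\s*\(\s*!has_permission\([^)]*\)\s*\)\s*\{\s*exit\(\)"
)

def flags_app(source: str) -> bool:
    """Return True if the naive pattern is present."""
    return bool(CRASH_ON_DENY.search(source))

# The obvious violation is flagged...
blatant = "if (!has_permission(CONTACTS)) { exit() }"
# ...but punishing the user indirectly sails right through.
subtle = "if (!has_permission(CONTACTS)) { dark_mode_enabled = false }"
```

Both snippets punish the user for denying a permission, but only the first matches the pattern — which is exactly the false-negative problem described above.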
Human interaction works better but is a gazillion times more expensive. Training people is also harder than one might think. You can train humans to identify "disabling unrelated functionality", but that's fuzzy enough that there will be some errors. Doable, but every single new policy costs significant amounts of money.
Policy overload is also a problem for developers. There are already a lot of rules on both app stores. Developers get a new "hey this is a new rule you need to comply with" email all the time. You can only roll things out so fast or developers will get overwhelmed with just validating that their apps remain in compliance.
These are often solvable problems in isolation, but when taken as part of the overall effort of policy enforcement on app stores they become quite a bit more challenging.
I often do it when I first install an app that shouldn't need internet access.
I have wanted this for years. I eventually left Android because the permissions models were deranged (IIRC the number of apps that "need" phone access to pause something during a call). iOS isn't perfect but they seemed to be enforcing your prong #1 at least a little more than Android when I switched.
The app was advertised as a short-term loan with borrower-friendly terms ("give us a tip!") -- yeah right. Come to find out it's just a new-accounts funnel. Yet this app is allowed to blatantly exist on the app stores, despite not doing anything like what it was advertised to do, and tricking you into handing over all your transaction data from your checking account (probably to look at your cash flow and decide how valuable you are from a new-accounts perspective).
You could offer me $1000 cash and I wouldn’t do it. It’s just not worth it, since setting up and establishing a new bank account is a bit of a hassle.
You may not need $1000, but some people literally do.
How do you pay for your electricity? I set up a direct debit with my utility company. That involves handing over bank details.
Loan sharks?! We've reached the point where I don't even allow a chat app (WhatsApp) to access my contacts. Banks' apps love contacts as well ("send money to phone number"). With "convenience" bait they get birth dates, physical addresses, emails, profile photos, and whatnot. I see from behind my keyboard how banks salivate to calculate some creditworthiness from the uploaded contacts (confirmed by the entry in the other person's address book).
In a sea of predatory applications, why is lending the only one that gets blocked here? A whitelist would be better (say approved photo and contact apps could access photos and contacts), and better still would be the app can only access what you transfer to it and doesn't get blanket permissions.
I also agree with the other comment that this shouldn't be within Google's power to decide, it should be regulated - if you force a closed OS on users, you should be limited in what it can access
Because lending apps are the only ones to engage in behavior this egregious; see [1] as an example. The relevant sections are quoted below:
> If a user was late to repay, the app had previously indiscriminately texted or called contacts in the user’s phone as part of loan collection efforts. This process began immediately after a loan repayment was delayed, according to user reviews.
> Numerous users reported that friends, family, employers, and other contacts were harassed and threatened through Opera’s apps when a borrower was late.
(...)
> In another example, the apps threatened to place friends or family of a borrower on a national credit blacklist if they didn’t convince the actual borrower to pay:
[1] https://hindenburgresearch.com/opera-phantom-of-the-turnarou...
Didn't LinkedIn do something similar early on? Harvest your contacts and then email everyone trying to get them to join.
There was nothing to be done that would satisfy Apple besides disabling the contacts permission, so the user experience is now worse. It's still death by a thousand cuts when working with these app stores.
Was it being rejected for asking, or for being broken if it didn't get the permission?
Or was it simply not able to give a justifiable reason to Apple for needing the permission?
You say it was staying on device, but once you have access to those contacts it would be trivial to add the ability to send them to a server, or to have them leak via third-party tools like the Facebook SDK. That would be completely invisible to the user once the permission had been granted.
The fact that you say that the user experience is now worsened makes me believe that contact access was not an absolute requirement for the app to exist (like say... a contacts organizer or something) and is extra functionality.
Personally, with very few exceptions, I will not grant an app access to my contacts, since the people in my contacts don't get the luxury of also consenting to some company having their data.
To satisfy KYC/AML, providers of financial services in apps thus ask to see photo ID and pair this with a photo taken by the app itself.
I'm not fully across the KYC loopholes, but it seems like this would make fulfilling the regulations very difficult or potentially impossible, as each of the identification options that satisfies KYC includes a headshot.
https://www.ecb.europa.eu/paym/groups/pdf/dimcg/ecb.dimcg210...
The backwards compatibility of Android is a problem in this regard, because apps targeting old versions of Android get old, often less private, behaviour from the system to keep them working. Google has been forcing developers to upgrade their targeted version for a while now, though, so any app that still receives updates should be forced to use the modern API.
In the end, there will always be apps that need full media access. File managers, galleries, image collage tools, you name it; you can't completely disable the generic file API. All other apps can use more appropriate APIs and often do, but those that hoover up data have little incentive to use the modern, privacy-friendly versions. They're dragging every well-meaning app down with them through their terrible business practices.
I fully blame the advertiser laden crapware for the fact I can't sync my phone's clipboard in the background through KDE Connect anymore. The fact Google restricted the APIs instead of kicking the borderline malware out of their store irks me to no end and the fact Apple has placed similar restrictions onto their platform tells me it's not just Android.
- Apps can refuse to work with that, like Google Photos (it used to work during the beta and it was perfect for me)
- Apps still offer their awful photo picker on top of your already-picked photos, so selecting new ones requires a lot of taps.
I wish Apple would rein in some of these apps. In-app browsers and custom photo pickers should be banned unless they have demonstrated advantages.
It’s a hook for the system’s built-in image picker sheet: the user can browse their entire library, but the app only gets (one-time) access to the individual piece of content they pick. The nice thing is that the app doesn’t need to ask for any photo permissions at all (as far as read access is concerned).
With some exceptions like Messages, which presents a custom picker UI, this API gets dog-fooded by almost all of Apple’s stock apps (Safari, Notes, Mail, the “iWork” office suite, etc.).
An example of a 3rd party app implementation is MaskerAid by Casey Liss [1]. However, the number of apps I’ve encountered that use this interface is suspiciously low.
The realistic answer is probably that the sheet looks pretty barebones, and most developers seem to prefer a sleeker, custom-designed integrated gallery view, and/or need write access.
But the paranoid part of me raises the question: why do so many apps insist on continuous access to at least a portion, but preferably the entirety, of the user’s photo library?
0 – https://developer.apple.com/documentation/uikit/uiimagepicke...
But, like many things with iOS, Apple did this and apps had no choice but to work with it, since (seemingly) as far as the app is concerned it is the same situation as before.
I do wish though it was easier to grant more images without needing to go to settings. I have had one app that somehow gave me the ability to add more images, but I am not entirely sure how it did it.
This feature is actually quite foundational to the Android architecture, where the vision was a bunch of small apps working together in this manner.
Unfortunately it's a slightly more clunky user experience than what users these days have gotten used to: big monolithic apps that handle everything themselves.
In any case, there is never a legitimate need to know the entire address book to "send money to your contacts": mobile OSes could just offer an interface to manually pick a single contact and return it to the app, which could then validate it as a financial partner
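A sketch of that single-contact picker contract (hypothetical names; the real Android analogue would be something like the system contact-pick intent): the OS holds the address book, and the app only ever receives the one entry the user explicitly picks.

```python
# Hypothetical sketch of an OS-mediated single-contact picker.
# The OS owns the address book; the app never sees it in full.

ADDRESS_BOOK = [
    {"name": "Alice", "phone": "+15550101"},
    {"name": "Bob",   "phone": "+15550102"},
]

def os_pick_contact(user_choice_index):
    """OS side: show the picker UI, return exactly one entry (a copy)."""
    return dict(ADDRESS_BOOK[user_choice_index])

def app_send_money(contact):
    """App side: validate the single returned contact as a payment
    partner, without ever touching the rest of the address book."""
    if not contact["phone"].startswith("+"):
        raise ValueError("not a valid payment partner")
    return f"sending money to {contact['name']}"
```

The app's code path starts from a single contact record, so "send money to your contacts" works with zero blanket access to the address book.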
The phone in my pocket isn't mine. I paid for it, but it belongs to Google, and they make changes to it all the time without my permission and without giving any indication that something was changed on my device. Google prevents me from seeing what the apps on it are doing, from changing how they run, and from monitoring all inbound and outbound communication.
Google's shitty permissions system is such a big deal for mobile because it's literally all we have "protecting" us, and that isn't much. Naturally that leaves us with zero protection from Google itself, but that's the price we pay for having a mobile device that gives us more freedom than Apple ever would.
Sandboxing has existed for ages and recently a lot of effort is being invested into making it mainstream on desktop Linux.
I believe most modern operating systems will not just grant blanket permissions to every application, except maybe single user systems like BeOS.
Maybe Android has this, but on iOS I can go into privacy and easily see what apps have access to what data (and easily revoke that permission).
But I don't see any kinds of metrics that would indicate that an app is possibly abusing that permission.
For example, it would be awesome if I could go look at photos or contacts and see a percentage of how much of that data the app has accessed, and maybe even a graph over time, so I can see if it was a one-time thing or it's mining for data.
There is the app privacy report on iOS that gives me some of this data, but it doesn't give me how much data it is accessing. Which I think is the critical part.
If I give an app access to my photos I expect it's going to access them, but without knowing what it's doing it's not quite as useful. Still useful, but not as useful.
I am talking after you have the app installed to actually see what it is doing. Specifically what it is doing.
On iOS I can see that an app is accessing photos and I can see when, but I can't see what or how much.
The feature you mentioned is similar to the labels that iOS has. It even says that in the header.
pay day loan mountain view
The top results are labeled “sponsored” and look sketchy to me.
This might be the first time anyone has ever Googled that.
That means more choice, but it can also weaken protections for users. Alternative stores will likely have looser policies for what apps/behavior they accept.