> Still, mainstream developers make decisions based on data. So if we want them to serve our community, shouldn't we help them by giving them data on how we use their sites and apps?
Quite possibly. But there is an argument that sharing such data should be voluntary rather than mandatory, even if the choice is just a switch in OS- or screen-reader-level settings. That raises questions such as:
- Should such a setting be switched on by default?
- What sort of users are likely to enable/disable it?
- If there is any correlation between level of technical skill and privacy awareness, will results be skewed towards the users with lower levels of technical knowledge?
- If a product team uses the data to solicit feedback from users who are detected to be running an assistive technology, but "power users" have turned off that detection, again, will that feedback be representative?
Anecdotally, I will say that after working with some iOS development teams, where this behaviour is natively available, cases which rely on an explicit detection of VoiceOver still seem rare, whereas features like accessibility label overrides, which need no explicit check, are used a lot. On the other hand, it's becoming much more common to perform explicit checks for more visually-oriented accessibility features, like reduced motion or high contrast, some of which can even be carried out on the web now.
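For those web checks, the mechanism is CSS media features rather than any form of assistive-technology detection. A minimal sketch (the `.carousel` and `.card` selectors are illustrative, not from any particular codebase):

```css
/* Honour a stated reduced-motion preference by disabling animation. */
@media (prefers-reduced-motion: reduce) {
  .carousel {
    animation: none;
    transition: none;
  }
}

/* Strengthen borders when the user has requested more contrast. */
@media (prefers-contrast: more) {
  .card {
    border: 2px solid #000;
  }
}
```

The same preferences can be read from JavaScript via `window.matchMedia('(prefers-reduced-motion: reduce)').matches`. Note the difference from screen-reader detection: these queries expose a preference the user has chosen to state, not whether any assistive technology is running.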