I'm tired of feeling helpless. I want to build tools that make my life easier. Any other blind devs in the same boat who would be interested in collaborating?
To be candid, I have no idea what it feels like to be blind, and I've never paid much attention to accessibility beyond reading a tutorial or two and making sure I use alt attributes on my images. The main reason is that I'm lazy, and based on my experience, most developers are in the same boat.
Now, if there was a service which would spin up a remote VM session inside my browser (a bit like BrowserStack or SauceLabs do) with all the screen reader software set up and no screen (only audio), it'd make it a lot easier for me to experience my software as a blind user. There should probably also be a walkthrough to help new users get started with the screen reader. If you're lucky, you could turn this into a business, and it could indirectly help you achieve your goal of making better software for the blind by exposing more of us to the issues you face.
Anyways, I know you probably have more pressing issues to solve and I hope I didn't come across as arrogant, just throwing the idea out there.
The cheapest way to do it would probably be using the Orca screen reader for GNU/Linux, probably combined with the MATE desktop (forked from GNOME 2) so one doesn't have to worry about 3D acceleration in the VM, which will presumably be hosted remotely on a cloud provider somewhere. The main technical challenge that springs to mind will be capturing all keyboard events in a browser window. This is particularly important because screen readers tend to rely on esoteric keyboard commands, which repurpose keys like CapsLock and Insert as modifiers. I don't know if this can actually be done in a normal web browser.
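The keyboard-capture challenge above can be sketched in plain browser JavaScript. This is only a sketch under assumptions: the function names (`isScreenReaderKey`, `attachKeyForwarder`, `forwardToVm`) are mine, not a real API, and whether `preventDefault()` can truly suppress keys like CapsLock varies by browser and OS.

```javascript
// Sketch: deciding which keys a remote screen-reader session must
// intercept before the browser or OS acts on them. Screen readers
// repurpose keys like CapsLock and Insert as modifiers, so the web
// client has to catch and forward them rather than let the page react.
const SCREEN_READER_KEYS = new Set(['CapsLock', 'Insert', 'NumpadInsert']);

function isScreenReaderKey(code) {
  return SCREEN_READER_KEYS.has(code);
}

// In a browser you would attach something like this to window.
// preventDefault() is best effort: some keys (CapsLock especially)
// may still toggle OS state regardless.
function attachKeyForwarder(target, forwardToVm) {
  target.addEventListener('keydown', (event) => {
    if (isScreenReaderKey(event.code)) {
      event.preventDefault(); // try to stop local handling
    }
    // Send every key to the remote VM session.
    forwardToVm({ type: 'keydown', code: event.code, key: event.key });
  });
}
```

In a real client you would pair this with a WebSocket carrying the events to the VM and an audio stream coming back; the hard part, as noted above, is the keys the browser refuses to hand over.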
Anyway, just throwing out my quick thoughts on this. I don't currently have time to pursue it further myself.
I work for a non-profit where we tackle accessibility issues related to the web, documents, and tech in general. We have a few Vagrant boxes that we use for development and testing, one of them is a Fedora box (GNOME 3 though) that comes with Orca configured [1] so that it doesn't prompt you for setup options. Chrome and Firefox are installed as well. If you have Vagrant and VirtualBox installed you can make use of it like so:
vagrant init inclusivedesign/fedora24 && vagrant up
The box is ~2 GB. These are the repositories for the box in question:
* https://github.com/idi-ops/packer-fedora
* https://atlas.hashicorp.com/inclusivedesign/boxes/fedora24
We track Fedora releases and update boxes fairly regularly so there should be a Fedora 25 one with Orca once there's an official release upstream.
I hope it can be of use to anyone here. If you have any questions we hang out in #fluid-work on Freenode.
[1] https://github.com/gpii-ops/ansible-gpii-framework/blob/mast...
Also, I feel like there was an early version/prototype of NVDA Remote that ran in the browser. I remember going to a page, turning on forms mode or whatever NVDA calls it (I've been out of the NVDA loop for a while) and I could send keys/get audio from the remote machine. I think that was before the addon was available so I'm pretty sure it was web native, but I could be misremembering. I don't think there's anything preventing transmission of the Insert key, at least. Capslock or other esoteric modifiers may be trickier.
For starters, even getting low latency out of the Linux audio stack is a major headache, and the synth situation is abysmal. You can't even touch the config files for these yourself because if you break either--even temporarily--you now can't use the computer. Then you get into how all the graphical desktops have accessibility issues to one degree or another and how you have to use a separate screen reader for anything outside them.
What you want, if you want a testing VM that actually has value, is Windows and NVDA. NVDA is free in both senses and is now more or less the industry standard for sighted testing. JAWS is still more popular among users, but this is slowly shifting in NVDA's favor, largely because JAWS costs roughly $1000 per user. The advantage of NVDA is that you can be sure all users can have it, and if something works with NVDA then it's very likely to work with JAWS without too much more work.
But sadly you can't just test with one; in the end you have to test with all of them. Things like ARIA are nice if used correctly, but the ARIA spec doesn't say much about what screen readers have to do, and no one implements 100% of it. It's very close to the situation with needing to test on multiple browsers.
1. Make the inaccessible accessible through clever tools, relying on computer vision or similar.
2. Make tools that reveal to sighted devs how accessible their software really is, much like you've described.
These two approaches tackle the same problem from opposite ends and would hopefully meet somewhere in the middle. I view #1 as empowerment, getting back abilities that one has lost (or never had to begin with). I view #2 as awareness, giving the sighted visibility into where their software falls short in terms of accessibility.
I'm not sure which would have the bigger impact. But in my mind, I'd rather be empowered by technology. I'd be curious what other blind devs think, though.
I was blind for several months as a kid, due to an accident. Fortunately, a brilliant professor was able to restore partial eyesight: enough for me to be independent, to be a software developer by profession, and to travel the world whenever I get the chance.
One of the things I found out is that it's very hard to explain to people what it's like not to see. A popular question was "So what could you see?" I don't get that question often these days, but I generally asked people to think about how much they can see with their hands.
When you can't even imagine what it is like, going one step further and imagining how blind people navigate your application or website is a step beyond even that. Right now you only have things like web accessibility standards and tests for that. It helps, but it is not the same as "navigating the app like a blind person".
If there's an easy way to test and experience your app/website then it will also be easier to get a requirement like that past a C-level exec.
Sorry that I don't have much to add at this stage, as I'm in the middle of starting up a new product myself, but you are always welcome to contact me (contact info is in my profile).
tota11y uses Chrome's Accessibility Developer Tools. Deque maintains browser extensions that use their own open source engine [2]:
http://www.deque.com/products/axe/#aXeExtensions
Both engines can also be used in CI environments to perform a11y audits. That should help web developers target at least the low-hanging fruit.
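A minimal sketch of what such a CI gate could look like. Assumptions are flagged: the `violations` shape (objects with `impact` and `id` fields) mirrors what engines like axe-core report, but check your engine's docs for the exact format, and `auditPasses` is a name of my own invention.

```javascript
// Sketch: fail a CI build when an a11y audit reports blocking issues.
// `violations` is assumed to be an array of { impact, id } objects,
// roughly the shape axe-core-style engines produce.
function auditPasses(violations, failOn = ['critical', 'serious']) {
  const blocking = violations.filter((v) => failOn.includes(v.impact));
  for (const v of blocking) {
    console.error(`a11y violation [${v.impact}]: ${v.id}`);
  }
  return blocking.length === 0;
}
```

In a CI script you would run the engine against a built page, then exit nonzero when `auditPasses(results.violations)` is false. Gating only on `critical`/`serious` at first keeps the build from drowning in minor findings while the backlog is worked down.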
An idea, perhaps, but that's all it would be rather than real-world data. Blind people don't actually use these tools.
By the time someone reaches 50, there's a good chance that a proportion of the text on their phone, the web, their computer, and even their groceries is becoming unreadable without glasses.
Most app developers haven't a clue. Most of us 50-somethings hadn't a clue 10 years ago! Ctrl-+ in a browser is a brute-force solution. Android's zoom is even worse: a lot of what you want zoomed simply isn't, while it enlarges the parts you can read just fine.
Compared to many of my age I'm lucky and rarely need glasses, but already it's very annoying!
Something like this could be as helpful as when I first saw colour blind simulators 20 years or more ago.
Sure you can play around with the screen turned off to get some sense for the experience, but with the screen turned on you can compare the visual experience with the non-visual experience.
Another issue with testing with the screen turned off is: if an element isn't accessible, how are you going to know that it should have been there?
The tunable part should be easy to adjust from a user perspective, via some kind of "dial" mechanism.
First off, you're 100% correct when you talk about how devtools are inaccessible. This problem is a historic one, stretching back as far as early versions of Visual Studio and other early IDEs on Windows. Basically, the people who build the tools make the tools for themselves, and since they aren't blind, they make tooling that is inaccessible by default.
I do most of my work on Windows, using the NVDA screen reader, and consequently I have the ability to write or use add-ons for my screen reader to help with a variety of tasks[3]. This being said, this always means more work for equal access, if access is even possible.
I'm interested in any sort of collaborative effort you might propose. Targeting accessibility issues in common devtools does seem to me like a reasonable place to start attacking this problem. I had read a few months ago that Marco Zehe, a blind developer at Mozilla, was pushing some work forward for the Firefox devtools[4], but haven't heard much about that recently, and I think they might be distracted trying to get a11y and e10s working together.
Basically, I'm interested in helping in any way you might suggest, and from the thread here it looks like there are some enthusiastic people at least willing to listen. My email is in my profile, let's make something awesome.
[0] https://GetAccessibleApps.com
[2] https://www.indiegogo.com/projects/nvda-remote-access/
[3] https://github.com/mohammad-suliman/visualStudioAddon
[4] https://www.marcozehe.de/2016/01/26/making-the-firefox-devel...
If it's closed source, like a feature in Visual Studio or another company's product, I'll volunteer to ask them for it.
Maybe low hanging fruit is the easiest way to convince people at first.
If you have an interest in Braille and have software development skills there might be something to do there. The UI program that drives our prototypes is open source and available on GitHub. https://github.com/Bristol-Braille/canute-ui
We have plans to open source the hardware as well.
If you want to add support at a lower level, our current USB protocol is outlined in this repository. It is a dev-kit I knocked together to allow some Google engineers to write drivers for BRLTTY (and thus for ChromeOS). https://github.com/Bristol-Braille/canute-dev-kit
For any developer, it's important to practice your craft, and when looking for a job, it's valuable to have a portfolio of work you've contributed to. So you get a double benefit by helping create a tool that will make you more productive while also showing your skill.
Clearly, this project should be something that you're passionate about, but one project I've had on my when-I-have-time list is below - I would be happy to work with others who are interested (@blinddev @ctoth @jareds).
After your text editor / IDE, one of the next most important tools is a tool for tracking bugs/tasks. Unfortunately, many of the common ones, like VSTS, Jira, and Trello, are either not accessible, or at least not productively usable with a screen reader.
Over my career I've developed my own scripts for working with such systems, but it would be good to have something that others can also benefit from. I should probably put my initial bits on GitHub, but time is currently consumed by other projects. Email me if this interests you. Also happy to mentor on general career challenges around being a blind software engineer.
I would be very interested to learn how visually impaired developers such as yourself and others got started, and for any suggestions for how I can make my student's experience more positive.
Thanks.
Further down the line, once programming is no longer so new to them, the student can afford to explore other tools. But at the moment, if they're a beginner, trying to learn a tool like an IDE, which is supposed to make your life easier but generally has the opposite effect for blind devs, is just going to confuse matters. So I would stick with what you're doing; you probably won't get any better support from the disability services team at the university, because it's not an area they know.
Your student needs to learn, sometimes the hard way, that if you're blind and want to make things, you have to be prepared to do things in ways which go completely against the grain - having to basically use Windows to be productive is one of them - and to solve these boilerplate accessibility problems without becoming discouraged.
The world can certainly use more open source accessibility standards, protocols and tools.
I have also started a website to provide a resource for software development and accessibility: https://blinddev.org
Both are works in progress. If interested in contributing feel free to reach out, my email is in my profile.
As a partially sighted developer, I generally use a screen reader for web browsing and email, but read the screen visually for my actual programming work. So I don't have significant first-hand experience with the accessibility (or lack thereof) of development tools. But some of my totally blind programmer friends have expressed some frustration with the accessibility of some tools, especially web applications. They generally use Windows with NVDA (http://www.nvaccess.org/). At least with NVDA, you can write add-ons (in Python) to help with specific applications and tasks.
The posts do not necessarily have to be long.
For example, could you read this article and then give an overview of the main issues of website performance? Could you then come up with one recommendation for a performance improvement in a code base you're familiar with? Could you justify in practical terms why your recommendation was the best bang for the buck, vs. other possible improvements? https://medium.baqend.com/building-a-shop-with-sub-second-pa...
Now, how do you judge yourself?
1. Have the conversation with a dev whose skills and opinion you trust.
2. Record your answers on audio, and ask someone on HN to give you fair and constructive feedback. Many here would be glad to do this (feel free to ping me as well).
dlang.org
I am not blind, but I designed it to operate without looking at the screen. If the app takes off, I'm considering forking/pivoting it into an RSS reader that also doesn't use the screen. The app has already been accepted into the App Store; I'm sorting out launch details.
Please accept my deepest apologies for the shitty job we (the developers) are doing at providing interfaces for the vision impaired.
Probably when we're all old, we'll have vision problems of our own :).
I too find frustration with some of the tools with which I work. Although they may slow me down, they seldom create complete barriers. Most of my work at this point in time is with PHP and JavaScript, so this may help the situation; I am less familiar with the current state of affairs around the accessibility of developing with other languages.
All of the complaining I do about JIRA aside, I do find it to be a reasonably usable tool for what I need (page load times annoy me far more than accessibility issues). There are some tasks that I cannot complete (reordering backlog items), but I collaborate with team members, which can help us all to have better context about the rationale for changes.
GitLab I find quite poorly accessible, but thankfully it is just a UI on top of an otherwise excellent tool (git). I find that the same trick that works for evaluating GitHub PRs works with GitLab MRs: if you put .diff after the URL of a PR or MR, you can see the raw output of the diff of the branches being compared.
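The ".diff" trick above is mechanical enough to script. Here's a hypothetical helper; the function name and the edge-case handling are mine, while the ".diff" URL behaviour itself is GitHub's and GitLab's.

```javascript
// Turn a GitHub PR or GitLab MR URL into its raw-diff URL by
// appending ".diff", which both sites serve as plain text that
// is far friendlier to a screen reader than the web UI.
function toDiffUrl(prUrl) {
  // Drop any query string or fragment, then any trailing slashes.
  const base = prUrl.split(/[?#]/)[0].replace(/\/+$/, '');
  // Idempotent: leave URLs that already end in .diff alone.
  return base.endsWith('.diff') ? base : `${base}.diff`;
}
```

Usage: `toDiffUrl('https://github.com/owner/repo/pull/42')` yields `https://github.com/owner/repo/pull/42.diff`, which you can fetch with curl and read in any accessible text editor.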
Debuggers are definitely my biggest current pain point. I tend to use MacGDBp for PHP. This is quite reasonably accessible. It allows me to step through code, to see the value of all variables, and to understand the file / line number being executed. It isn't possible to see the exact line of code, so I need to have my code editor open and to track along.
I haven't found a very accessible JavaScript debugger. For JavaScript and DOM debugging I still find myself using Firebug. I use lots of console.log() statements, and would rather be able to set breakpoints and step through code execution. That being said, other than "does this look right?", I find there is little that being blind prevents me from doing with JavaScript. As recently as last night I was squashing bugs in a React app that I am helping to build for one of my company's customers.
I'd be happy to learn more about any projects you take on to improve web application development tools and practices for persons with disabilities. Feel free to reach out on LinkedIn if you would like to talk.
I'm mostly responding to encourage you to keep at it, and if you haven't tried Mac OS, maybe give it a whirl. Apple is pretty good about accessibility and their accessibility team is very good at accepting and acting upon feedback.
[1] http://mashable.com/2016/07/10/apple-innovation-blind-engine...
My sight issues are not comparable to being blind, but as an example, I've asked Pandora for simple accessibility improvements for years and they never take action. I've even offered to write the code (less than a page) for them.
Would they (and software tool vendors) feel the same way if this were highlighted on a high traffic web page?
A super-rudimentary version will be something I finish when I have the time in the coming months. I was hoping to get some interest from the blind community and gather ideas for further OSS work in that general space (editors).
Another interesting idea: try using a braille display ourselves, so we as devs can work in complete darkness without any light :)
Send me an email; my address is in my profile.
Thanks @blinddev.
I was feeling pretty down and wanted something positive to come out of it. Glad to see I'm not alone.
I would be happy to help.
Do let me know how to contact you.
I'm a sighted student with an upcoming six-week block of time to do an out-of-school project. I have previous experience developing accessible software and would love to work with you. If you're interested, shoot me an email at eliaslit17@gmail.com
I'm not sure that using tools that try to provide a good visual experience is the right approach. Have you tried writing scrapers that provide an optimized textual representation?
Yes, we should all make our sites/apps accessible.
No, some sites/apps are great exactly because they offer a better visual representation that allows faster parsing of the information presented to the vast majority with adequate eyesight.
Someone who develops a great way to visually take in information should not be forced to develop an equally great textual representation.
[1] http://github.com/jscheid/kite [2] http://emacspeak.sourceforge.net/
FYI, I sent courtesy invitations to nine people who said in this discussion "shoot me an email." One email address provided here was invalid. One or two other people who said "email in profile" did not have an email in their profile. If you want an invitation, contact me (talithamichele at gmail etc).
If you do contact him please blame me so he can shout at me, not you, if I made the wrong guess here.
See our first demo: https://blockly-demo.appspot.com/static/demos/accessible/ind...
Right now, it is effectively a different renderer for the same abstract syntax tree. We'd love to see people evaluate the direction we are currently going, and possibly apply the same accessible navigation to our existing render.
In terms of dev tools, Blockly blocks are usually constructed using Blockly (https://blockly-demo.appspot.com/static/demos/blockfactory/i...). That said, no one has considered what it would take to make our dev tools blind accessible. The fundamentals are there.
Granted, Blockly programming is far from being as powerful as other languages. It is aimed at novice programmers, whether for casual use or to teach the fundamentals of computational thinking. You can write an app in Blockly (http://appinventor.mit.edu/).
If anyone is interested, reach out to us: https://groups.google.com/forum/#!forum/blockly
There is also a fair amount of research out there on the topic (see Richard Ladner at UW).
Feel free to send me an email if you get anything going!
You will need to have the package texlive-fonts-extra installed.
You might also want to contact the maintainers of brltty, cl-brlapi, ebook-speaker, or brailleutils.
I'm asking because though I'd love to help I know I won't be able to commit to it full-time. So it would be great to be able to follow up and get an idea of where the project is going, what areas it is tackling, etc.
Also, maybe a "Show HN" could help spread whatever you set up to a wider audience.
Posted to HN here: https://news.ycombinator.com/item?id=12841156
I would love to learn more about how you would like development tools to support you in your work.
I know as an industry we have a long way to go, and I would love to work with you to get us there.
My email is in my profile, and I will also reach out to Talitha. Hoping we get a chance to chat.
You might want to read this: TOOLS of Blind Programmer https://www.parhamdoustdar.com/2016/04/03/tools-of-blind-pro...
Hope this can help.
Clients want sites that implement current SEO best practices. What sort of best practices are those? A Yoast SEO plugin, maybe. Developers often mention the URL structure of the site itself, say it's "clean." This might be appreciated by future admins of the site, but it's unrelated to the goal of making pages that can be scraped.
It surprises me that developers and SEOs overlook the difficulty of scraping the web. Keyword density does very little to help a page that cannot easily be serialized to a database. It's true that machines have come a long way. Google sees text loaded into the DOM dynamically, for example. But its algorithms remain deeply skeptical of (or maybe just confused by) pages I've made that make a lot of hot changes to the DOM.
And why wouldn't it be? I ask myself how I would cope with a succession of before and after states, identify conflicts, and merge all those objects into a cached image. Badly, sure. At this point, summarizing what the page "says" is no longer a deterministic function of the static page. Perhaps machine learning algorithms of the future will more and more resemble riveting court dramas, where various mappings are proposed, compared to various procedural precedents, and rejected until a verdict is reached.
I wasn't very good at SEO. I found web scrapers completely fascinating; I spent way more time reading white papers on Google Research and trying to build a homemade facsimile of Google. Come to think of it, I did very little actual work. But I took away a lot of useful lessons that have served me well as a developer.
I realized, for example, how many great websites there are that are utterly inaccessible to the visually impaired. With very few exceptions, these sites inhabit this sort of "gray web," unobservable to the vast majority of the world's eyeballs. The difficulty of crawling the web isn't simply related to the difficulty of summarizing a rich, interactive, content experience. They are instances of the same problem. If I really wanted to know how my site's SEO stacks up against the competition, I would not hire an SEO to tell me, I would hire a blind person.
Puns aside, who on earth would make a blind person work on UI? I think it's better that you parted ways with them, even though I'm sorry you're having trouble finding a good job.
Best of luck.
Firstly, just offhand, the following stacks should be fully accessible with current tools: Node.js, Rust, Python, truly cross-platform C++, Java, Scala, Ruby, PHP, Haskell, and Erlang. If you use any of these, you can work completely from a terminal, access servers via SSH through Cygwin or Bash for Windows, and do editing via an SFTP client (WinSCP works reasonably, at least with NVDA). Notepad++ also makes a perfectly adequate editor, again with NVDA; I'm not sure about JAWS if you're using that.
GitHub has a command line tool called hub that can be used to do some things, and is otherwise pretty accessible. Not perfect, but certainly usable enough that NVDA (one of the most popular screen readers) uses it now. Many other project management systems have command line tools as well. If you write alternatives to project management tools, you will have to convince your employer to use them. Replacing these makes you less employable. You need to work to make them accessible, perhaps by getting a job on an accessibility team.
The stacks you are locked out of--primarily anything Microsoft and anything iOS--can only be fixed with collaboration from the companies backing them. Writing a wrapper or alternative to msbuild that can let you do a UWP app without using the GUI is not feasible. I have looked into this. Doing this for Xcode is even worse, because Xcode is a complicated monster that doesn't bother to document anything--Microsoft doesn't document much, but at least gives you some.
I imagine this is not what you want to hear, but separating all the blind people into the corner and requiring custom tools for everything will just put us all out of work. If you're successful, none of the mainstream stuff that cares even a little right now will continue to do so, and you'll end up working on blind-person-only teams at blind-person-only companies.
0: My most notable Rust PR is this monster: https://github.com/rust-lang/rust/pull/36151
1: https://github.com/camlorn/libaudioverse
A possible analogy might be crossing a busy intersection. Someone made pedestrian crossing signals audible, allowing a pedestrian to know what the walk sign says at any given time. But this is an enhancement of a pre-existing technology; that blind pedestrian will still likely need a white cane, a blind-person-specific tool, in order to cross the street. I think there's room for both in software development.