At my university, we keep running workshops, and there are student staff in the library willing to help upload articles to the repository if you just e-mail them. Still, most academics won't take the five minutes to do this, even when they have the right.
This doesn't mean that the academic publishing system shouldn't change; it absolutely should. And there's also a lot of value in "liberating" academic publications that would otherwise not be free. But I hope people will become more aware of what is already possible, and legal!
Most journals in my field, at least, allow you to put postprints online (the final version of the paper, but not formatted by the journal's typesetters), although there are a few stragglers who don't let you post anything.
Of course, it would be much better if people started embedding machine-readable metadata in PDFs (totally possible, see for example http://code.google.com/p/pdfmeat/), and if there were some agreed-upon format for bibliographic microformats that could be embedded in websites listing articles.
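One bibliographic microformat along these lines already exists: COinS (ContextObjects in Spans), which reference managers like Zotero can detect in web pages. A minimal sketch of generating such a span for an article listing (the function name and example values are my own, not from any existing API):

```python
from urllib.parse import urlencode
from html import escape

def coins_span(title, author, year, journal):
    """Build a COinS span for embedding article metadata in an HTML page.

    COinS encodes an OpenURL ContextObject (Z39.88-2004) into the
    title attribute of an empty span with class "Z3988".
    """
    params = {
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft.atitle": title,
        "rft.au": author,
        "rft.date": str(year),
        "rft.jtitle": journal,
    }
    # escape() turns "&" into "&amp;" so the attribute is valid HTML
    return '<span class="Z3988" title="%s"></span>' % escape(urlencode(params))

print(coins_span("An Example Article", "Doe, Jane", 2013, "Journal of Examples"))
```

Pages listing articles could emit one such span per entry, and existing tools would pick the metadata up with no extra coordination.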
We also eventually need an open alternative to Google Scholar. GS is great, and I use it every day (and love that you can output BibTeX, for example), but it has no API (and will never have one because of deals with publishers), actively resists automated access, is a black box in terms of how its data is gathered, etc. Think of an "Open Scholar" relating to Google Scholar the way OSM relates to GMaps. OSM might not look as pretty, or be as consistent in the beginning, but it enables a whole range of applications that GMaps doesn't. (And at least GMaps has a fairly good API, even if it charges for overuse; GS has nothing.)
(These are just some thoughts I've had while experimenting with an open-scholar workflow, trying to share as much of the "byproduct" of the research as possible, including rich notes and summaries, and my own bibliography with links to OA publications where they exist: http://reganmian.net/wiki/researchr:start).
Another thing I've found while working on my project, where I try to expose OA links to as many publications as possible and regularly rescan to see if they are still available (and still OA), is how quickly documents disappear... Hosting on private pages is convenient, but fragile. Ideally, people would upload papers to university repositories, subject repositories like Arxiv.org, etc.
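A rescan like the one described can be a simple loop over stored links. A minimal sketch using only the standard library (the function name and the status categories are my own invention, not from the project):

```python
import urllib.request
import urllib.error

def check_link(url, timeout=10):
    """Classify a hosted-paper URL as 'ok', 'gone', or 'error'.

    Sends a HEAD request so we don't download the whole PDF.
    A periodic rescan would call this for every link in the list
    and flag anything that stopped resolving.
    """
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return "ok" if resp.status == 200 else "moved"
    except urllib.error.HTTPError as e:
        # 404/410 strongly suggest the document has disappeared
        return "gone" if e.code in (404, 410) else "error"
    except urllib.error.URLError:
        # DNS failure, refused connection, timeout, ...
        return "error"
```

Note that this only checks availability; checking that a document is *still OA* (and not replaced by a paywall page) would need content inspection on top.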
But ... it's a score to JSTOR. It's unorganized.
But ... science is full of noise and crappy publications these days anyway. Lots of ways to do the same thing, unproven, and existing only because everybody has to publish to stay relevant.
Now: how to really improve science? My suggestion: a big Python framework for each field of study, with implementations of the real algorithms and models for comparison, benchmarking, and even real-life use.
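A minimal sketch of what the core of such a framework could look like: a registry where competing implementations of the same problem are collected and then run on shared inputs so their outputs and timings can be diffed. All names here are hypothetical, not an existing package:

```python
import time

REGISTRY = {}

def algorithm(problem):
    """Decorator: register an implementation under a shared problem name,
    the way a field-wide framework could collect competing methods."""
    def wrap(fn):
        REGISTRY.setdefault(problem, []).append(fn)
        return fn
    return wrap

# Two competing implementations of the same (toy) problem:
@algorithm("sort")
def builtin_sort(xs):
    return sorted(xs)

@algorithm("sort")
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def benchmark(problem, data):
    """Run every registered implementation on the same input and
    return (name, seconds, result) tuples so results can be compared."""
    results = []
    for fn in REGISTRY[problem]:
        t0 = time.perf_counter()
        res = fn(data)
        results.append((fn.__name__, time.perf_counter() - t0, res))
    return results
```

The point is the shared problem definitions and inputs: once a paper's method is registered against them, comparison stops being a manual reimplementation exercise.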
See, as an example from the robotics field, ROS (the Robot Operating System). ROS is a basic glue framework where universities and individuals can publish their code. It's decentralized, and it has simulators, so scientists don't need to own the physical robots and can even compare (diff) results and algorithms very quickly.
The simulator can have an embedded browser plus a wiki and a Quora-style Q&A that explains X.
The evolution: physical paper -> PDF -> simulator.
A framework like that would be awesome, but it's a different thing from the collection of personal PDF uploads that individuals contributed on Twitter.
For smogzer: there is some interesting research on how knowledge from the research literature can be better represented, for example using concept mapping; see this great paper by Simon Buckingham-Shum (who has written many others): http://oro.open.ac.uk/6463/1/kmi-04-28.pdf. Anita de Waard has given many presentations on semantic and executable papers, for example http://www.slideshare.net/anitawaard/executing-the-research-....
I'd imagine competing against an organization that's been around for 18 years will take more than 6-7 hours of coding. :) It's an MVP, one I'm quite embarrassed about. But I'll continue coding. Someone suggested an open Google Scholar approach; that's one direction this project could head in.
From the 'About' section of the website, you can see that it uses PDFtribute.net to help scrape links.
R.I.P. Aaron Swartz!
Also, some metadata-aggregation capabilities (title, author, tags, date published) wouldn't hurt anyone.
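For illustration, a minimal sketch of such aggregation: records carrying those fields, deduplicated by normalized title with tags merged. The `Record` fields and helper names are assumptions for the sketch, not an existing API:

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    authors: list
    tags: list
    published: str  # ISO date string, e.g. "2013-01-18"

def norm_key(title):
    """Normalize a title for dedup: lowercase, strip punctuation,
    collapse whitespace, so trivially different copies match."""
    cleaned = "".join(c for c in title.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def aggregate(records):
    """Merge records whose normalized titles match, unioning their tags.

    Scraped link collections tend to contain the same paper several
    times with slightly different titles; this folds them together.
    """
    merged = {}
    for r in records:
        key = norm_key(r.title)
        if key in merged:
            merged[key].tags = sorted(set(merged[key].tags) | set(r.tags))
        else:
            merged[key] = r
    return list(merged.values())
```

Even this crude title-based matching would make a scraped collection far more browsable than a flat list of links.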