So baked into this is a lack of support for clients without Javascript.
I can understand if this is a conscious decision to drop support for the many lynx/noscript/etc. users out there (I'm not willing to do that myself, so this architecture isn't an option for me).
But what about search engine crawlers, hasn't javascript + search always been an issue?
http://redhanded.hobix.com/-h/hoodwinkDDayOneForcingTheHostT...
And there was another framework that did essentially the same thing, using DHTML and AJAX to create the page from a blank slate. But I cannot recall the name.
HAppS (a Haskell framework) has also advocated this:
"HAppS does not come with a server-side templating system. We prefer the pattern of developing static web pages and using AJAX to populate them with dynamic content."
The revenue generated by supporting clients with disabled Javascript is not increasing at nearly the rate support costs are.
I know many technically apt people get up in arms over this, but there comes a point where going into your browser settings (which 99%+ of users will never do), scrolling down to the section marked I Hope You Know What You're Doing, and unchecking boxes means you are affirmatively opting for a second-class experience.
I know the rejoinder: "Blind people can't use your site, you heartless bastard!" It is highly likely that my site and software will be suboptimal to them. It is also highly likely that my site and software will be suboptimal to people who, through no fault of their own, are illiterate. Both of these are tractable issues if someone wants to throw sums of money which are many multiples of my budget to fixing them.
I have yet to hear a good reason for why that someone must be me.
[Edit to clarify: this is not specifically related to the site I have in my profile, but it could be very easily.]
I do think that it's important today because there are still web browsers, especially in the mobile space, that don't have Javascript, or more specifically AJAX, available.
I think it's especially pertinent when talking about social-anything sites, where people are likely to try to access them from a mobile device.
I don't see it as an issue of responsibility as much as I see it as a customer service issue.
You don't have to "fake the presence of comparable functionality" (and in most cases you simply can't).
What you can and should do is support the subset of functionality that raw HTML and CSS can achieve without Javascript. No one will blame you if the Javascript-free version lacks highly dynamic features that can't possibly be built without it.
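To sketch what I mean (all names here are made up, not anyone's actual code): ship a plain HTML form that posts normally, and only hijack it when Javascript is actually running.

```javascript
// Hypothetical sketch of progressive enhancement: the form works as a
// normal full-page HTML post without Javascript; when JS is available
// we intercept the submit and send an async request instead.
function enhanceForm(form, sendAsync) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();      // suppress the full-page post
    sendAsync(form.action);      // e.g. an XHR to the same URL
  });
}
```

Clients without Javascript never run this code, so the plain form post keeps working; `sendAsync` stands in for whatever XHR wrapper the site happens to use.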
"[..]web browsers are not the only clients that will use Urbantastic. Mobile devices, search engine spiders, screen readers for people with disabilities, and RSS readers all need the same data but in different forms. Accommodating any of these is simply a matter of dropping a different rendering front-end in front of the common JSON data server."
It's not up yet, but I'm working on a front end intended for non-javascript users. It will also serve blackberry users, IE6 users, and spiders.
It will be a /much/ simpler site, but you'll be able to get everything done on it. I figured it's easier to separate it out than try to shoehorn every use into one format.
The general principle is that I'm going to design for the large majority of users, and use whatever Javascript capability I can to make it an excellent experience. Then create a simplified mirror for the minority use cases. Gmail took this route and I think it's worked well for them.
Anyways, thanks for following up - this is a great way to handle clients without Javascript.
I'm going to go out on a limb and say that maybe they should think about how the site will work without Javascript.
Wait ... so if he wants to populate the static HTML with information from a database, the client side javascript has to access the database directly? And his database is internet accessible/viewable? That seems bad ...
Without knowing the specific details, I'd imagine the JSON response has directives for what static HTML to load if needed, which results in more XHR requests to get those files. The client-side JS simply needs to know how to process the JSON it's given; it doesn't need to know any business/persistence info.
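I don't know Urbantastic's actual wire format, so here's an invented example of what a directive-style response could look like on the client side (field names and URL scheme are guesses):

```javascript
// Invented sketch: the server replies with data plus directives naming
// which static HTML fragments the client still needs. The client just
// walks the directives and turns each missing fragment into one more
// XHR; no business or persistence logic leaks into the browser.
function fragmentsToFetch(response, alreadyLoaded) {
  var urls = [];
  for (var i = 0; i < response.directives.length; i++) {
    var d = response.directives[i];
    if (!alreadyLoaded[d.fragment]) {   // skip fragments already cached
      urls.push('/fragments/' + d.fragment + '.html');
    }
  }
  return urls;
}

// fragmentsToFetch({directives: [{fragment: 'org-card'}]}, {})
//   → ['/fragments/org-card.html']
```

Each returned URL would be fetched and injected into the page, which matches the "blank slate plus fragments" pattern people describe above.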
What I'm curious about is how he handles urls (if everything is xhr, then the url will always stay the same, which is kind of a pain for linking to specific stuff, unless you do anchor workarounds like Facebook does). Also, I'd be curious if he uses the static html files as templates (injecting data into them clientside) or just has a TON of tiny html fragments.
I expect that eventually this will cause too much of an up-front load time, so I'm planning on having the JS load bundles of it on demand. Reducing total HTTP requests is a big usability win, in my experience.
To answer your URL question, I use attributes, like this:
http://urbantastic.com/org.html?id=org-8srmt85mtf8t
Which the server ignores, but the Javascript parses and uses to figure out where it is.
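The real parsing code isn't shown, so this is just a guess at the mechanics: the server serves the same static org.html regardless of the query string, and the Javascript pulls the id back out of it.

```javascript
// Guessed sketch: extract the id from a query string the server ignores,
// e.g. '?id=org-8srmt85mtf8t' on org.html. The JS then uses that id to
// request the right data for the page.
function parseId(search) {
  var match = /[?&]id=([^&]*)/.exec(search);
  return match ? match[1] : null;
}

// parseId('?id=org-8srmt85mtf8t') → 'org-8srmt85mtf8t'
```

In a browser you'd call it as `parseId(window.location.search)`; since the server never looks at the query string, every such URL still resolves to the same cacheable static file.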