That said, I think its big shortcoming (I haven't spent much time with it -- maybe ~10 minutes going through some of the demos) is that it's a functional language. Personally, I don't mind Haskell/SML/their ilk, but since I was trying to conceive of a language that people would actually use and adopt, it needs to be imperative and probably resemble something along the lines of C/PHP/Python.
I don't hear a lot of complaints from the Python or Ruby camps about how much they desperately miss static typing. It would take seriously 'get out of my way' type inference for me to want to let all that ugliness back into the language.
And point 2, performance? Again -- I don't hear much complaining about this for the vast majority of applications. I rarely find the bottlenecks to be in my web language constructs. Most people aren't writing another Twitter.
I can't speak for anyone else, but I do miss static typing when I'm using languages like Python and JavaScript for web work. However, I think it would take more than just type inference to make a strongly/statically typed language that was good for the same jobs. You also need powerful tools for parsing freeform input such as JSON or XML, both to bring valid input into your static type system with little effort and to give as much control as you need to recover from unexpected input.
The first problem is solved by many languages; the second one, not so much. I think that is part of why dynamic languages are so popular for web development today.
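To make the "bring valid input into your static type system, with control over recovery" point concrete, here's a rough sketch of what that can look like in TypeScript. The `BlogPost` shape and all the names here are invented for illustration, not from any particular framework:

```typescript
// A sketch: bring untyped JSON into the static type system at the
// boundary, with explicit recovery for malformed or unexpected input.
interface BlogPost {
  title: string;
  tags: string[];
}

// A user-defined type guard: after this returns true, the compiler
// treats the value as a BlogPost everywhere downstream.
function isBlogPost(v: unknown): v is BlogPost {
  return typeof v === "object" && v !== null &&
    typeof (v as any).title === "string" &&
    Array.isArray((v as any).tags) &&
    (v as any).tags.every((t: unknown) => typeof t === "string");
}

function parsePost(raw: string): BlogPost | null {
  try {
    const data: unknown = JSON.parse(raw);
    return isBlogPost(data) ? data : null; // recover: reject bad shape
  } catch {
    return null; // recover: reject unparseable JSON
  }
}

const ok = parsePost('{"title": "Hello", "tags": ["web"]}');
const bad = parsePost('{"title": 42}');
```

The point is that only `parsePost` deals with `unknown`; everything past the boundary works with a statically known type, which is the part dynamic languages let you skip.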
As an aside, there is also a design/architectural question here. The theoretician in me says of course I should parse and validate all incoming data as close to the point where it comes into my server-side code as possible, so everything internal is clean. This fits nicely with the whole static typing thing. On the other hand, the pragmatist in me says that sometimes, particularly while prototyping, it's useful to keep the parsing and error recovery logic close to where the data will be used. That's much easier if you can just dump all the input into a nested array of hashes of objects of dictionaries of widgets when it arrives and worry about the details if and when you get to the code that cares.
Perhaps I'm misunderstanding what you mean exactly, but I've been using frameworks and tools that handle JSON de/serialization for years now. For example, when building applications in ASP.NET MVC with ExtJS on the client side, I would use a project called Ext.Direct.MVC. All it does is set up a handler that automatically grabs specific types of requests, converts the JSON data in the request to an instance of a model (or whatever data structure you expect to receive), and passes that instance over to your controller for you. So on the client you just call the controller with some JSON, and on the server you declare a controller that receives an instance of one of your defined models. That's it, you're done. The only time you'd have to so much as interact with the JSON serializer is when you want to return some data like a model -- but all that means is wrapping the model you're returning in a call to the JSON serializer, and you're done.
Also, if your selection of languages/frameworks does not offer a tool like this, it likely does offer enough sub-components for you to be able to create a system like that in a day or two. EDIT: (or perhaps I am just making a bold claim here assuming that all language communities have at least one JSON serializer/parser as awesome as Json.NET).
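The flow being described -- framework glue deserializes the request body into the model type the controller declares, so the controller never touches raw JSON -- can be sketched in a few lines of TypeScript. Everything here (`LoginModel`, `loginController`, `dispatch`) is an invented stand-in for what Ext.Direct.MVC does, not its actual API:

```typescript
// The controller declares the model it expects; it never sees JSON.
interface LoginModel { user: string; password: string; }

function loginController(model: LoginModel): string {
  return `hello ${model.user}`;
}

// The framework-ish glue: parse the body once, hand the instance over.
// A real framework would also validate the shape before dispatching.
function dispatch<M>(body: string, controller: (m: M) => string): string {
  const model = JSON.parse(body) as M;
  return controller(model);
}

const response = dispatch('{"user": "ada", "password": "x"}', loginController);
```

Building this kind of glue yourself is roughly the "day or two" project mentioned above, assuming a decent JSON parser exists for your platform.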
In my personal opinion I'd actually say the opposite and claim that oftentimes a language's type inference could be better. It's certainly not perfect in C# (not when compared to Haskell, or even F#), and I'm not even sure it exists in Java.
Perhaps you would if Python or Ruby were used more often for -- and I hate to use this term -- "Enterprise" type applications, especially ones with teams of 50-some developers. They're not, though, probably because they're dynamically typed, which often causes chaos when that many people are involved and communicating with them becomes a job in and of itself. The more you spell things out in your code, the less likely another person will misunderstand its meaning.
Anyway, I personally would still prefer a statically typed language just for the added compile time safety. To me the "save + refresh browser + manually check if change works" process of development is way more tedious than having to work with type constraints that fail to compile if they don't make sense. That and decent type inference really helps in 90% of the cases where static typing seemed tedious to me.
What we need is a better way to use the Internet as a platform on a large scale; sadly, none has been widely accepted so far.
The emphasis on components instead of pages is not strong enough in many other web frameworks like Rails.
And when I had to include some multi-line JavaScript in my code, I felt a huge loss. First, heredocs seemed to be the only way to make it readable. Second, I'd have to actually run the code and interact with the page to find out if I got the syntax right. It would be awesome if there were a way to make JS into an object in PHP, the way XHP does it, and have it support some simple sanity checks and easily import JS components (which I suppose Javelin tries to do).
Haskell in general meets a lot (though probably not all) of the author's requirements. Static type inference is great, and Haskell's system makes explicit type declarations completely optional except in a handful of rare cases, though you will find yourself wanting to use them on most functions anyway because of how they clarify and improve the readability of your code. Testing is also dead simple with tools like QuickCheck -- it essentially manufactures test cases for you based on invariants that you specify about your code.
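The QuickCheck idea -- state an invariant, let the tool generate the cases -- is easy to show in miniature. This is a toy transliteration into TypeScript, not QuickCheck itself (the real thing also does shrinking, custom generators, and much more):

```typescript
// Run a property against many randomly generated inputs instead of
// hand-writing test cases.
function forAll(gen: () => number[], prop: (xs: number[]) => boolean,
                runs = 100): boolean {
  for (let i = 0; i < runs; i++) {
    if (!prop(gen())) return false; // found a counterexample
  }
  return true;
}

// A crude generator for random number lists.
const randomList = () =>
  Array.from({ length: Math.floor(Math.random() * 20) },
             () => Math.floor(Math.random() * 100));

// Invariant: reversing a list twice gives back the original list.
const holds = forAll(randomList, (xs) => {
  const twice = [...xs].reverse().reverse();
  return twice.length === xs.length && twice.every((v, i) => v === xs[i]);
});
```

The win is that the invariant documents the code's contract while also testing it, with zero hand-picked examples.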
What I also like about Snap (and Yesod) is that it integrates well with the enumerator package. Simply put, the enumerator package lets you implement composable data sources, manipulators, and sinks. Since many web applications consist of extracting data from a source, manipulating it, and sending it along, this lets you write applications that are short and simple.
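A rough analogue of that source → manipulator → sink shape, sketched with TypeScript generators (this is just the idea of the enumerator package, not its API -- Haskell's version also handles incremental IO and early termination):

```typescript
// Source: produces a stream of values.
function* source(): Generator<number> {
  yield* [1, 2, 3, 4];
}

// Manipulator: transforms a stream into another stream.
function* double(it: Iterable<number>): Generator<number> {
  for (const x of it) yield x * 2;
}

// Sink: consumes the stream and produces a final result.
function sink(it: Iterable<number>): number {
  let total = 0;
  for (const x of it) total += x;
  return total;
}

// An "application" is just the composition of the three stages.
const result = sink(double(source()));
```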
What I'm working on is sort of an answer to node.js. It is a coffeescript platform (use JS if you prefer) built on top of erlang, so the coffeescript runs in an erlang environment. This means that when you call into the DB, it spawns off an erlang process. Your collection of coffeescript functions can be executed on any number of cores, or any number of hosts. In fact, your handler for a web request can be spread out all over a cluster, with each function running on the node that has the data... or it can all run on a single node, but across many processes. (To some extent the amount of distribution will be controllable as a configuration parameter -- so if you're doing processing that analyzes big data, you can move your code to the data and run it there, lowering the cluster communication load, but if the data is small, it may make sense to keep the handling of a request constrained to a single node where everything is conveniently in RAM.)
This is accomplished by compiling the coffeescript into javascript and running the javascript on a VM, specifically erlang_js, though I'm looking at going with V8 via erlv8. Your code and the libraries are all rendered into a single ball of javascript that we'll call the "application", which is handed off to various nodes.
How do I plan to get sequential code to work in a fundamentally distributed environment? That's the $64,000 question and why I'm bringing this up here-- I could be doing it wrong.
The plan is simple:

1. The developer needs to know that their application is not running in a single environment, and account for that.

2. Each entry point the developer provides to the platform's API is assumed to possibly be running in isolation in a separate process.

3. There's a shared context that all the processes have access to (an in-RAM Riak database where the bucket is unique to a given request, but the keys are up to the developer).

4. The APIs let the developer supply callback functions which will be called when the data is available. (E.g.: "go fetch a list of blog posts" could have a callback that is invoked when the list is returned from Riak.)

5. There's a set of known phases that each request goes through, in a known sequence, and we don't move on to the next phase until the processes spawned by the previous phase are finished. All of the phases are optional, so the developer can implement as many as they want, or only a single one. The phases are: init, setup, start, query, evaluate, render, composite, and finish. The assumption is that you can get your app to work with 9 opportunities to do a bunch of DB queries and get the results.

6. Init will be called when the request comes in. Init can cause any number of processes to be started (DB queries, map/reduce, etc.). They will all be finished, and their callbacks called (if any), before setup is called. Setup can also spin up any number of processes, and so on. All of these are optional, and a hello-world app might just implement one (it doesn't matter which).
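The phase machinery above can be sketched in a few lines, with the erlang plumbing stubbed out as promises: each phase may spawn any number of concurrent tasks, and the runner waits for all of them before starting the next phase. The names here (`Phase`, `runRequest`, `Ctx`) are mine for illustration, not the platform's actual API:

```typescript
type Ctx = Map<string, unknown>;            // the shared per-request context
type Phase = (ctx: Ctx) => Promise<void>[]; // a phase spawns 0+ "processes"

async function runRequest(phases: Phase[]): Promise<Ctx> {
  const ctx: Ctx = new Map();
  for (const phase of phases) {
    // Everything spawned by this phase finishes before the next begins.
    await Promise.all(phase(ctx));
  }
  return ctx;
}

// A tiny app implementing two of the optional phases.
const app: Phase[] = [
  // "query": kick off a (stubbed) DB fetch.
  (ctx) => [Promise.resolve().then(() => { ctx.set("posts", [1, 2]); })],
  // "render": guaranteed to see the query phase's results.
  (ctx) => [Promise.resolve().then(() => {
    ctx.set("html", `<ul>${(ctx.get("posts") as number[]).join("")}</ul>`);
  })],
];
```

Within a phase the tasks are free to run on different cores or nodes; the only ordering guarantee is the phase barrier itself, which is what lets the developer write in a sequential style.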
So the developer can write in a sequential style: their phase functions are called in a known sequence, and in each phase they know the previous phase's queries will have data. Each phase can cause more queries, or even spin up other apps, that will be rendered before the next phase. And they get the results from a context that is always available.
This way, init, start, query, and render could all run on different nodes, though they would run in sequence, and each one would have access to the shared context for the query.
Another way of looking at this, and the way it might be implemented, is that each of those phases is a long running process that lives on, and is invoked with different contexts each time to handle its part of handling a query. (So this lets us, or the developer, experiment with the right way to arrange things for best resource utilization, since the results can be dramatically different depending on the kind of work the application needs to do.)
That's how I'm running a sequential language in a genuinely distributed manner... you can think in callbacks, or in phases, or both, and your coffeescript really can run in parallel.
A downside of this, though, is that you couldn't write a request handler that, say, generated a random key, did a lookup on the database, and then looped, doing that again until it got a result it liked. You have your 9 phases, and that's it, for a given request. However, there is an API to invoke another application (e.g.: you could have a login application that is responsible for part of your page, so rather than implement a login/logged-in area on each page, you write it once and include it as a sub-application). Conceivably you could do recursion, but I haven't thought about the consequences of that yet. This does sort of lock you into a specific way of doing things, which is why there are 9 phases: if you only need 3, only implement 3... but if you need all 9, you have them.
I'm sure I've managed to make something that is not so complicated sound muddy... This works for me, since coffeescript is convenient and it is easy for me to think in terms of erlang concurrency... but it might be an adjustment for JS programmers who are used to setting variables and expecting them to be there later on... (you'd just have an API that stores the values under a key.)
If you're interested in this project, you can find periodic announcements on twitter @nirvanacore. I expect to have an alpha sometime in late September, and a beta sometime after Riak 1.0 (on which this is based) ships.
Apologies if it seems like I'm hijacking a thread here... obviously my thoughts are about concurrency, but I differ from the author in assuming JSON for common data structures, and in programming directly in coffeescript/javascript. I'm not too worried about compiled speed -- I'm more interested in concurrency than performance. I'd rather add an additional node and have a homogeneous server infrastructure with no thinking about server architecture than try to optimize for single-CPU performance, etc.
Sounds interesting.
It would be easy to have an API along the lines of "in the next phase, call this function and pass it this data". I could make an API that does that, or you could put the data under a key in the context and then call that function at the beginning of the next phase. If the set of functions you'd like called that way varies from request to request, they could be stuffed in a list under a key, and you'd just process each of the functions in that list.
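That "stuff them in a list under a key" idea is simple enough to sketch: the current phase queues (function, data) pairs under a known context key, and the next phase drains and runs the queue. All names here are illustrative, not the platform's real API:

```typescript
type Task = { fn: (data: unknown) => void; data: unknown };

// Stand-in for the shared per-request context.
const ctx = new Map<string, Task[]>();

// Called during phase N: schedule fn(data) for phase N+1.
function callNextPhase(fn: (data: unknown) => void, data: unknown): void {
  const queue = ctx.get("deferred") ?? [];
  queue.push({ fn, data });
  ctx.set("deferred", queue);
}

// Called at the start of each phase: drain the previous phase's queue.
function beginPhase(): void {
  for (const t of ctx.get("deferred") ?? []) t.fn(t.data);
  ctx.set("deferred", []);
}

// Phase N schedules work; phase N+1 runs it.
const seen: unknown[] = [];
callNextPhase((d) => seen.push(d), "hello");
beginPhase();
```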
I think it will be quite possible to provide something equivalent to closures via an API, though I can't yet say how syntactically convenient they will be -- not too bad, I don't think.
On further thought, I think it would be quite possible to do actor-style message passing... I'm focusing a bit much on the mechanics of implementation right now rather than making this transparent, but the context could easily be used to manage a set of mailboxes and "processes", where, in each phase, or even between phases, whenever a message is available in a mailbox, the function it was sent to gets woken up and executed. In fact, not a function but a process.
So I can add an API that provides an actor-model interface. The actors can be identified by a process ID, they can send messages to each other (addressed by PID) and include arbitrary data, and this can happen concurrently in coffeescript.
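A toy single-threaded version of that actor layer, to make the shape concrete: actors addressed by PID, mailboxes kept in a shared map, and a pump that wakes an actor whenever its mailbox has a message. Everything here is an illustrative sketch of the idea, not the platform's real API, and the real thing would distribute the actors across erlang processes rather than loop in one:

```typescript
type Pid = number;
type Actor = (msg: unknown, send: (to: Pid, msg: unknown) => void) => void;

const actors = new Map<Pid, Actor>();
const mailboxes = new Map<Pid, unknown[]>();

// Deliver a message to an actor's mailbox (addressed by PID).
function send(to: Pid, msg: unknown): void {
  mailboxes.set(to, [...(mailboxes.get(to) ?? []), msg]);
}

function spawn(pid: Pid, actor: Actor): void {
  actors.set(pid, actor);
}

// The pump: wake actors with queued messages until all mailboxes drain.
function run(): void {
  let delivered = true;
  while (delivered) {
    delivered = false;
    for (const [pid, box] of mailboxes) {
      const msg = box.shift();
      if (msg !== undefined) {
        delivered = true;
        actors.get(pid)?.(msg, send);
      }
    }
  }
}

const log: string[] = [];
spawn(1, (msg, send) => { log.push(`actor1 got ${msg}`); send(2, "pong"); });
spawn(2, (msg) => { log.push(`actor2 got ${msg}`); });
send(1, "ping");
run();
```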