This requires changing the simple, composable functions that you could reuse forever into complex ones that have to split the work into multiple stages.
Another source of inherent complexity, one that has nothing to do with solving the problem itself, is handling input and runtime issues. Input cannot be guaranteed to always be coherent, and runtime issues (such as a user interrupt) may arise mid-computation. At that point, aborting the computation is easy, but as a user I may want to:
* know why the function stopped, with a meaningful answer (not just a stack trace)
* know the location of the error in the input data
* resume the computation
If you've ever programmed functionally, you know how hard it is to do something as simple as surface a meaningful error message from deep inside a series of fold/map calls.
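To be fair, there is a fairly standard workaround. Here's a minimal sketch (hypothetical names, using `foldM` over `Either` from base Haskell): the fold short-circuits on the first bad element and reports *where* it was, not just that something failed.

```haskell
import Control.Monad (foldM)

-- Hypothetical example: a checked sum that rejects negatives and
-- reports the location of the failure instead of a stack trace.
sumChecked :: [Int] -> Either String Int
sumChecked = foldM step 0 . zip [0 ..]
  where
    step acc (i, x)
      | x < 0     = Left ("negative value at index " ++ show (i :: Int))
      | otherwise = Right (acc + x)
```

So `sumChecked [1, -2, 3]` gives `Left "negative value at index 1"`, while the all-good case gives `Right` of the sum. It works, but you did have to restructure the whole fold around `Either`, which is exactly the friction being described.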
Keeping state is a simple (and I would say equally elegant) solution to this recurrent problem. Please consider that I'm saying this from a functional programming perspective (as in: objects as localized state containers not necessarily breaking the purity assumption).
Another issue is that we, as humans, operate in a stateful world. Stateful user interfaces are sometimes more efficient because of the way _we_ work. This can be seen in something as simple as "now select a file", which brings up a dedicated section of stateful code to navigate a tree. As such, you are constantly faced with the problem of keeping state.
Resuming the computation is possible thanks to referential transparency: if you can serialise the monadic computation and persist it to a file, you should be able to resume it at an arbitrary point in time. The Cont monad might also help here, but I've never used it myself: https://hackage.haskell.org/package/mtl-2.0.1.0/docs/Control...
Also, you can use monads to keep track of state in a functional manner - take a look at the State and ST monads.
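For instance, here's a minimal State sketch (assuming the mtl package; the names are made up for illustration). The counter is threaded through implicitly, but everything involved stays pure:

```haskell
import Control.Monad.State (State, evalState, get, put)

-- Hypothetical example: number each line, keeping the counter in
-- the State monad rather than passing it around by hand.
labelAll :: [String] -> State Int [String]
labelAll = mapM label
  where
    label s = do
      n <- get
      put (n + 1)
      pure (show n ++ ": " ++ s)
```

Running `evalState (labelAll ["foo", "bar"]) 0` gives `["0: foo", "1: bar"]` — stateful in shape, pure in substance.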
What I'm trying to say is that you've indeed identified some of the sore aspects of functional programming, but these problems are not inherent to the programming model - they're all solvable in theory, with enough time and effort. It's a different paradigm, so some wheels have to be reinvented or adapted to it.
I agree with your last paragraph: stateful user interfaces are fundamentally better for us humans. Indeed, I think that mixing both imperative and functional programming is the way to go - have a stateful/imperative "chassis" around your functional "engine" and get the benefits from both.
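A trivial sketch of that shape (hypothetical example): the actual logic is a pure function you can test in isolation, and the imperative chassis around it stays paper-thin.

```haskell
-- Hypothetical example of the "imperative chassis, functional
-- engine" split: the engine is pure and testable on its own.
engine :: String -> String
engine = unlines . map reverse . lines

-- The imperative chassis would then be as thin as:
--   main :: IO ()
--   main = interact engine
```

All the I/O concerns live at the boundary; `engine "abc\ndef\n"` is just `"cba\nfed\n"`, no setup required.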
http://research.microsoft.com/en-us/people/smcdirm/managedti...
Anyway, there are many other ways of fixing or transcending von Neumann; pure functional might not be it.
In particular, what escapes me is what those videos represent. Why are they showing typing, rather than what some code does? Are you demonstrating a programming language or a live coding environment? The first two paragraphs seem to discuss the former, but the videos look like the latter. And if it is the latter, I can't figure out what the language is doing because I'm so hung up on the typing/deletion/live feedback.
The only way I've been able to get anywhere is by looking at the paper. You might want to link to the paper or some other overview rather than this page.
We just don't have the source code.
I understand what it says. I don't understand why
apply:<x,y> === x:y
doesn't work in the context of functional programming?
Why limit the question just to functional programming?
This applies just as much if not more to imperative programming, at least in functional programming you have the option to execute any pure function in parallel on some independent chunk of hardware.
Whether imperative programming can be 'liberated' from the von Neumann bottle-neck is a much harder problem.
In the end, both will still have to deal with Amdahl's Law: even if you could get rid of the 'looking at memory through a keyhole' issue, you're going to have to come to terms with not being able to solve your problem faster than the sequential execution of all the non-parallelizable chunks.
https://web.archive.org/web/20131225040636/http://conal.net/...
> Why limit the question just to functional programming?
There was already an article about the general case :)
It only loads in browsers written in a purely functional language!
What Haskell seems to achieve, and I'm not an expert on it yet, is an incentive system that encourages small functions especially when you're "in" a monad, because of the "any side effects makes the whole function 'impure'" dynamic as seen in the type system (also known as "you can't get out of the monad"). Of course, it's sometimes a pain in the ass to thread your RNG state or other parameters (that remain implicit in imperative programming) through the function, and that's where you get into the rabbit hole of specialized monads (State, Reader, Writer, RWS, Cont) and, for more fun yet, monad transformers.
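For the RNG case specifically, a minimal sketch of what "threading the state" looks like once you give in and use the State monad (assuming mtl; the toy generator and names are made up for illustration, not a real RNG):

```haskell
import Control.Monad (replicateM)
import Control.Monad.State (State, evalState, state)

-- Hypothetical toy linear congruential generator: its seed is the
-- implicit parameter that State threads through for us.
nextSeed :: Int -> Int
nextSeed s = (6364136223846793005 * s + 1442695040888963407) `mod` (2 ^ 31)

-- One die roll in 1..6; the seed update happens behind the scenes.
roll :: State Int Int
roll = state (\s -> let s' = nextSeed s in (s' `mod` 6 + 1, s'))

rolls :: Int -> State Int [Int]
rolls n = replicateM n roll
```

`evalState (rolls 3) someSeed` gives three rolls without the seed ever appearing in a signature past `roll` — which is exactly the point, and also exactly where the State/Reader/Writer rabbit hole begins.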
I think the imperative approach to programming has such a hold because, at small scale, it's far more intuitive. At 20 lines, defining functions feels dry and mathematical, and we're much more attracted to the notion of doing things (or making the computer do things). And, contrary to what we in functional programming like to think, imperative programming is more intuitive and simpler at small scale (even if more verbose). It's in composition and at scale (even moderate scale, like 100 LoC) that imperative programming starts to reach that high-entropy state where the code is hard to reason about.
The upshot of either thing is that you're heavily encouraged to shatter code into the purest composable fragments you can discover.
I think the "feels dry" problem can be solved by always writing functions in an environment rich with sample inputs and which passively calls your function for you! The text of a function would be the central component of a screen with all kinds of 'doo-dads' poking, prodding, testing, exercising that function in all kinds of interesting combinations.
What this does is underscore the concrete utility of a function, which is fundamentally tied to its application to concrete arguments (and its ability to cooperate with other functions). Function application always results in a flurry of instructional, von Neumann-esque side effects, and I don't think that can be avoided without violating the Second Law.
I don't understand what you mean.