It's also great as the core of a programming language. If you can express a feature just in terms of the λ-calculus, you can implement it almost for free. It's just desugaring. Most of Haskell is like this: while the surface language has become rather complex, the majority of the features boil away rather transparently into typed lambda terms. Haskell uses a variant of System F which it actually calls "Core" to underscore how core this concept is :).
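To make "it's just desugaring" concrete, here's a small sketch in JavaScript (the implementation language used elsewhere in this thread): a `let` binding is nothing more than applying an anonymous function, i.e. `let x = e in body` ≡ `(λx. body) e`.

```javascript
// "let x = 5 in x + 1" desugars to applying an abstraction to 5:
var six = (function (x) { return x + 1; })(5); // 6

// Nested lets desugar the same way:
// "let x = 2 in let y = 3 in x * y"
var product = (function (x) {
  return (function (y) { return x * y; })(3);
})(2); // 6
```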
Of course, as the accepted answer points out, the λ-calculus is also very useful for logic. In some sense, it is a sort of "proof essence": the basic building block of what a proof is. I really like this notion because it gives me a concrete, first-class object to represent a proof. Seeing proofs as programs really helped me come to terms (heh) with mathematical logic.
One surprising use for the λ-calculus outside of CS is in linguistics. In particular, linguists use typed λ-calculi to construct formal semantics for sentences. It's a way to build up the meaning of, well, meaning. Or at least the things we say ;). I'm not super familiar with semantics in linguistics, but I've always thought it was very cool that they also use some of the same tools as programming language theory! Wikipedia has more about this: http://en.wikipedia.org/wiki/Formal_semantics_%28linguistics...
(1) The λ-calculus is better thought of as the MVP of SEQUENTIAL programming languages. It is not a good formalism for concurrency. The π-calculus is more suitable for concurrency, and, in a real sense, subsumes the λ-calculus (see R. Milner's "Functions as Processes" and its subsequent elaborations for details).
(2) Proofs = λ-terms works best for CONSTRUCTIVE logic. With classical logic where not-not-A = A, it kind of works, but not really well.
(2) True, it works better if you add continuations.
I suppose this underscores how some people think algebraically while others think analytically. I am an algebraic thinker. I like to break algorithms apart into a mathematical calculus to understand them.
Humorous: http://bentilly.blogspot.com/2010/08/analysis-vs-algebra-pre...
What does this phrase mean?
More importantly, since your new language is small, you can eyeball the implementation and be fairly confident that the implementation "matches" your formal definition (about which you might prove theorems such as type safety).
Contrast this with defining a new feature as an extension of C++, Haskell or Java. You'll invest a lot of time, and in the end might not even be very confident that your implementation matches your theory. So you might write a few hundred line program in your new programming language, get a wonky result, and not be sure if the formal definition is flawed or if the implementation is buggy.
The "implement almost for free" phrase takes on a different meaning if you decide to implement computer-checkable proofs about your language.
What is the equivalent computer science problem that lambda calculus can help you solve? Challenge: pretend like you're Richard Feynman and avoid jargon, if at all possible.
EDIT: I find it quite curious that I got so badly down-voted (-3 and counting!) for simply asking for concrete examples of applicability to actual, concrete problems. I've always found that tools are best understood in the context of their use. Even an abstract concept is useful to speak about in this way - for example, complexity/Big O analysis helps us with capacity planning, comparing algorithms, and so on. It may be that lambda calculus helps us with, oh, decomposition of computation or something like that. But for all the digital ink I've read about it, it's always seemed like an academic form of name-dropping. Even the name is intimidating, right? Reminds me of terms like "Schrödinger's equation" or "canonical ensemble" from the good old days in physics class. But behind the intimidating names is just a tool for solving problems - and I have yet to see anyone demonstrate this for lambda calculus. Granted, I haven't looked very hard!
It takes self-awareness to realize that you are enthralled with something without understanding it. The litmus test for this is the umbrage taken by someone who's asked a simple question about what their high-status concept is really used for. That's why I mentioned Feynman in particular, because of his wonderful reputation for knowledge that was totally grounded in reality.
A re-statement of your example is that differential calculus gives you the definitions of limit and derivative, and it helps you with real world problems that are well modeled by differentiable functions. It also tells you that some functions are not differentiable.
Lambda calculus starts with a definition of what calculating is, and then it tells us that the answer to the big question is "no": some functions cannot be calculated. But it also tells us that whatever can be calculated can be done with a very, very simple programming language.
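To make "a very, very simple programming language" concrete, here's a hedged sketch in JavaScript of Church numerals - arithmetic built out of nothing but single-argument functions, which is essentially all the λ-calculus gives you.

```javascript
// A Church numeral n means "apply f to x, n times".
var zero = function (f) { return function (x) { return x; }; };
var succ = function (n) {
  return function (f) { return function (x) { return f(n(f)(x)); }; };
};
// Addition: apply f m more times on top of applying it n times.
var add = function (m) { return function (n) {
  return function (f) { return function (x) { return m(f)(n(f)(x)); }; };
}; };

// Convert back to an ordinary integer to check our work:
var toInt = function (n) { return n(function (k) { return k + 1; })(0); };

var two = succ(succ(zero));
var three = succ(two);
toInt(add(two)(three)); // 5
```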
Edit: I don't think javajosh's question is unreasonable - not sure why it is getting downvoted.
In explaining things, I find it very important to deflate intimidating-sounding jargon; and discover for oneself if something really is important, or whether it has more of a dysfunctional social purpose like mystery-of-the-priesthood for initiates.
You could also look at the answers given in the linked stackexchange page, of course.
First you need to teach them what the lambda calculus fundamentally is and why it is useful in small bits. Then you can explain to them how powerful it gets, by explaining how the usefulness is extended by the Curry-Howard isomorphism and how you can build the Hindley-Milner type system with it.
The commenter is an 'engineer' and you're a 'theoretician'. As a fellow 'engineer', I completely understand his exasperation with all this pointing to either nice theoretical results or the large scale end products of the theory. That's just fundamentally not how we learn.
Other replies to this thread do a pretty good job at pointing out the applications of lambda calculus. I want to contribute this excerpt from one of the original papers on the topic:
"In this paper we present a set of postulates for the foundation of formal logic, in which we avoid use of the free, or real, variable... Our reason for avoiding use of the free variable is that we require that every combination of symbols belonging to our system, if it represents a proposition at all, shall represent a particular proposition, unambiguously, and without the addition of verbal explanations."
The motivation was a setting for doing "precise and unambiguous" mathematics. Mathematical results are used pervasively today, so the overall vision is obviously important.
Whereas The Calculus was developed in response to the difficulty of calculating physical quantities, such as those about celestial bodies, the lambda calculus had a more foundational motivation.
Nonetheless, the lambda calculus is tremendously useful for solving "real world" problems. Most of these problems arise where doing "precise mathematics" is important but tedious, difficult and/or time-consuming, or in the related task of defining computations correctly.
So the lambda calculus has been used as a consistent and primitive basis for understanding computation itself and how it is expressed in programming languages. I suppose it is more "equivalent" to Newton's 2nd Law than any particular way to solve a problem. It's perhaps worth remembering that Principia used geometry, not what we know of as calculus, to clarify mechanics.
Say we do have a function f(t) that tells us the location of the ball with respect to time. Something like (I'll use JavaScript as my implementation language, as it is capable and accessible):
    function location_x(t){ return 2 * t; }
    function location_y(t){ return 3 * t; }
In calculus, we say that the derivative of f(t) with respect to t is the limit, as dt approaches 0, of "(f(t + dt) - f(t)) / dt"; for small dt, that expression gives a good approximation. Since you imply you know calculus, I'll not explain the jargon there. Well, f(t) could be anything in that description; it doesn't necessarily have to be a function describing the position of a ball. So why don't we make f a variable? Why don't we write a general-purpose derivation function that can compute the derivative of any function given to it?
To do this, we need the ability to treat functions as "first-class citizens", as things we can manipulate just as much as we do primitive integers and decimal numbers.
That includes being able to refer to a function without evaluating it (i.e. storing it in a variable and passing it around with no special knowledge of its purpose, just as we can pass integers to a function and it doesn't matter whether the integer is 10 or 173 or -83491; we pass it to our function in the same way every time). We also need to be able to create functions on the fly (just as we might create integers other than the ones we typed in literally, through arithmetic operations). We finally need the ability to capture the values of any variables defined outside of our function at the time the function was defined. This is called lexical scope, because it follows how we read the code, and functions that do this are called closures, because they "close over" the referenced values. Imagine the function reaching out further and further to hug everyone in. It's important that the values get captured and not the references; we'll see why later.
Because of these three things, I could have also written my location functions as:
    var location_x = function (t){ return 2 * t; }
    var location_y = function (t){ return 3 * t; }
How do you write the literal value of an integer? "2". How do you write the literal value of a function? In JavaScript it's "function(args){statements}". A named function--as you are used to them in procedural languages--is really just a function value stored in a variable that has a name. Just like a named integer is just a literal integer stored in a variable that has a name. We could even have an array of functions!

    var location = [function (t){ return 2 * t; },
                    function (t){ return 3 * t; }];
Okay, so what does our derive function look like?

    function derive(f){ // f is the function we want to differentiate
        return function(t, dt){ // we will return a new function
            return (f(t + dt) - f(t)) / dt; // the bare definition of a derivative! Nothing fancy!
        };
    }
This is where "closing over" gets important. I want the function(t, dt){...} to have access to the value of f when it was created. A simpler example: say I wanted to make an array of functions:

    var arr = [];
    for(var i = 0; i < 100; ++i)
        arr.push(function(t){ return i * t; });
What do you expect arr[50](3) to return? If you said "150", you've been paying attention: closing over the value of "i" allows us to do natural things with our code. Unfortunately, it's incorrect. JavaScript does unexpected things with scope, so it can break our natural understanding of it. Every one of these 100 functions will return 100 * t, because 100 was the final value of i that caused the for loop to exit, and the i variable is still in scope after the for loop finishes. It's one of the ways in which JavaScript is fundamentally broken. In JavaScript, this example has to be fudged to work right:
    var arr = [];
    for(var i = 0; i < 100; ++i)
        arr.push((function(_i){ // an anonymous function...
            return function(t){ return _i * t; }
        })(i)); // ...that we call right away
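Incidentally, in more recent JavaScript (ES6 and later), `let` gives each loop iteration its own fresh binding, so this workaround is no longer necessary:

```javascript
var arr = [];
for (let i = 0; i < 100; ++i)        // "let": a new "i" per iteration
  arr.push(function (t) { return i * t; });

arr[50](3); // 150, the answer we naturally expected
```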
We are used to the for loop introducing new scope, but in JavaScript it does not. We have to explicitly introduce the scope. It's bad, but thankfully the ability to treat functions as first-class values allows us to fix it! Moving on, let's apply our derive function:
    var velocity_x = derive(location_x);
    var velocity_y = derive(location_y);

    // evaluate a few values
    var x = location_x(10); // x == 20
    var dx = velocity_x(10, 0.0000001); // 1.999999987845058 which, for an approximation, is very close to the correct value of 2
"Oh, but 2 times t is easy to derive." You might say. Well, what if our location function described the ball moving in an ellipse? location_x = function (t){ return 2 * Math.cos(t); }
location_y = function (t){ return 3 * Math.sin(t); }
Then taking the derivative works exactly the same way:

    velocity_x = derive(location_x);
    velocity_y = derive(location_y);
And we will get reasonably good approximations for increasingly small values of dt. Now, try doing that in C. We live in blessed times where most of the popular languages have enough of the basic building blocks of functional programming to support the lambda calculus. That wasn't always the case.
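As an aside, the same first-class-function machinery lets us swap in a more accurate approximation - the symmetric difference quotient - without touching any callers. A sketch:

```javascript
// (f(t+dt) - f(t-dt)) / (2*dt) converges faster than the one-sided version.
function deriveCentered(f) {
  return function (t, dt) {
    return (f(t + dt) - f(t - dt)) / (2 * dt);
  };
}

var location_x = function (t) { return 2 * Math.cos(t); };
var velocity_x = deriveCentered(location_x);
velocity_x(0, 0.001); // ≈ 0, since d/dt of 2*cos(t) is -2*sin(t), and sin(0) = 0
```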
Is your claim that functional programming is the lambda calculus? Is dependent on the lambda calculus? Is an implementation of the lambda calculus? Something else?
You seem to have some implicit assumptions or claims here. Stating them explicitly could let the GP know what it is that you think you're demonstrating.
http://research.microsoft.com/en-us/um/people/simonpj/papers...
It's interesting that Turing never rigorously proved Turing Machines were a model of computation; it was only an intuitive appeal. He actually apologised for this when introducing them in his Entscheidungsproblem paper: http://www.turingarchive.org/browse.php/...
Also curious is that Church wrote to him, saying that he found Turing's model more intuitively convincing.
Intuitive appeal isn't everything of course.
The idea is that a lambda abstraction, in a math sense, is much less general than a single-argument function. 'sin θ' is a single-argument function defined by some business about the ratio of sides of a right triangle with angle θ, but it's not a lambda abstraction, because all a lambda abstraction is allowed to do is substitute a variable into some expression. 'f(x) = x² - x' can be directly expressed as a lambda abstraction (f = λx. x² - x).
Once you guarantee that the function takes the specific form, you can manipulate it in ways you can't manipulate the general-case function, and it turns out those manipulations give you enough power to define almost anything else you'd want to do.
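To see what "manipulating" means concretely, here's a hedged sketch in JavaScript that represents λ-terms as plain data and performs one step of β-reduction by substitution (variable capture is ignored for simplicity):

```javascript
// Terms as data: {v: name}, {lam: name, body: term}, {app: [fn, arg]}.
function subst(term, name, value) {
  if (term.v !== undefined) return term.v === name ? value : term;
  if (term.lam !== undefined)
    return term.lam === name
      ? term // the inner binding shadows "name"; stop substituting
      : { lam: term.lam, body: subst(term.body, name, value) };
  return { app: [subst(term.app[0], name, value),
                 subst(term.app[1], name, value)] };
}

// One step of beta reduction: (λx. body) arg  →  body[x := arg]
function beta(application) {
  var fn = application.app[0], arg = application.app[1];
  return subst(fn.body, fn.lam, arg);
}

// (λx. x) y  →  y
var id = { lam: "x", body: { v: "x" } };
beta({ app: [id, { v: "y" }] }); // { v: "y" }
```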
As it turns out, (IIRC) Python makes lambdas essentially opaque objects, and doesn't let you peek into them any more than you can a general-case function. This means you can't do any lambda-calculus on them, even simple stuff like determining if they are exactly the same expression.
In λ calculus, the only way one term can inspect another is by applying it. You don't get an intensional view of a λ abstraction. The other language features that get built on top of this (conditionals, tuples, etc.) only rely on functions' extensional features.
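For example, booleans, conditionals, and pairs can all be sketched as functions you merely apply - the "boolean" itself is the conditional:

```javascript
// Church booleans: true selects the first branch, false the second.
var tru = function (a) { return function (b) { return a; }; };
var fls = function (a) { return function (b) { return b; }; };

// "if b then x else y" is just b(x)(y):
tru("then")("else"); // "then"

// A pair stores two values and waits for a selector function:
var pair = function (x) { return function (y) {
  return function (sel) { return sel(x)(y); };
}; };
var fst = function (p) { return p(tru); };
var snd = function (p) { return p(fls); };

var p = pair(1)(2);
fst(p); // 1
snd(p); // 2
```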
The other nice thing is that if you do base your whole language on LC then you get really excellent variable scoping without further questions. That's something that a lot of languages still struggle with (js).
http://python-history.blogspot.com/2009/04/origins-of-python...
That's the Python language designer saying he wasn't a huge fan of the name when it was introduced, because it didn't quite live up to the expectations it set.
The usage of 'lambda' to construct anonymous functions is just a reference to Lambda calculus, the actual thing is a formal system of operating on those functions.
Those are not the same thing as Lambda calculus -- it's just a bad naming decision for a Python language feature (short anonymous function construct).