There are many lambda calculi, ranging from the untyped lambda calculus with strict evaluation that underlies Scheme to the Calculus of Inductive Constructions, which provides an alternative foundation for mathematics to set theory and is the basis of the Coq theorem prover, in which the Four-Color and Feit-Thompson theorems have been formally proved.
"All these physics problems are near-trivial once you've laid out the right differential equations and solved for the equations of motion. Probably the easiest route is to use Leibniz notation to solve F = ma for various time-independent force functions.
"There are many formulations of (classical) mechanics, ranging from ordinary linear differential equations to the use of Lagrangians. The latter is an alternative foundation for mechanics which proved useful in the development of classical E&M (Maxwell's equations), quantum mechanics, and even statistical thermodynamics."
The problem that I have with LC is not that I think it's useless - I have too much respect for Alonzo Church to believe that. Heck, I've even read Gödel, Escher, Bach, and enjoyed large swaths of it! What gets me is that people run around singing the praises of the lambda calculus, but when you ask what it's really good for, you get more formalism.
I'm beginning to suspect that the whole thing is an intricate practical joke - that the lambda calculus is so fundamentally self-referential that it doesn't really have anything to do with anything, other than itself, which is everything.
But, despite being a general-purpose programming language, it's not very practical to use as one. This is simply because the lambda calculus is so minimal, and we have more sophisticated and featureful programming languages to solve problems with. So while theoretically the LC can compute anything, programmers turn to other languages instead.
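To make "can theoretically compute anything" concrete, here's a sketch (my own illustration, not from the thread) of Church numerals - the standard lambda-calculus encoding of the naturals - written with Python lambdas. It also shows why nobody programs this way: even addition is three nested closures.

```python
# Church numerals: the numeral n is a function that applies f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))  # 5
```

Everything here is pure function application - no built-in numbers at all - which is exactly the minimalism that makes the LC great for proofs and miserable for day-to-day programming.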
So what is the LC used for, if not for writing programs?
Today, it's used as a basis for research into programming languages. PL researchers will often take the untyped LC and extend it with some feature. This allows that feature to be explored in isolation, and provides a rigorous, well-understood platform for writing proofs. This is how we wound up with the various typed lambda calculi for exploring type theory (e.g., the simply typed LC, System F, etc.).
GHC actually compiles Haskell down to a typed lambda calculus known as System FC - its Core intermediate language - and performs most of its optimizations on that Core representation. I suspect that having a library of proofs about System F available helped quite a bit with implementing those optimizations.
The reason students still learn and use the infinitesimal calculus is that it's still one of the best tools we have for certain problems. The reason students don't learn the lambda calculus is that we have better tools for many of its applications (pick just about any other general-purpose PL). But if you talk to students of type theory, they'll tell you that they did learn the LC and that they use it quite a bit in their research. I think someone already mentioned Pierce's Types and Programming Languages, which is a really good introduction to the topic, starting with the untyped lambda calculus and gradually building upon it. If you're genuinely curious about the stuff LC is used for, that's the place to start.
I reckon that for most programmers, the lambda calculus is nothing more than an intellectual curiosity, but for PL researchers it's still a useful tool -- a useful formalism for exploring and demonstrating properties of programming languages and type systems.
As you probably know, there are two major domains where lambda calculi appear: computability (which I know next to nothing about) and formal methods/type theory/PLs. The people talking about "showing functions can't be computed" are referencing the former. From the formal methods view, a calculus really consists of a syntax (the set of allowable terms; this is like the state space) plus a dynamic semantics (which is like the dynamics), and optionally a static semantics (e.g., typing rules). I.e., we have the following analogy (which holds best for small-step semantics; see below) between lambda calculus and physics:
semantics : programs :: differential equations : physical states.
A lambda calculus is a calculus for programs with first-class functions :) (While ML is basically a compiler for a lambda calculus, you can even define calculi to prove things about Java if you wanted.)
Now, to answer your questions. Let's say our set of terms (our syntax) is
term := x           -- var
      | \x . t      -- lambda abstraction
      | t1 t2       -- application
      | n           -- natural number
      | div n1 n2   -- division
and let's call a term a value if it's "fully evaluated" according to our not-yet-existent semantics; i.e., either a number or a lambda. We clearly expect the function defined syntactically by
f := \x . div 1 0
to diverge when applied to any argument, so any semantics for our calculus should capture that somehow. A denotational semantics would define a semantic function S[[ ]] taking our function into the mathematical (partial) function x |-> 1/0. We would also require that S is a homomorphism, so S[[ f x ]] = S[[ f ]](S[[ x ]]), which indeed is undefined in the usual mathematical sense.
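The syntax above translates pretty directly into an AST. Here's a sketch in Python (the class and field names are my own choices), including the term f from the example:

```python
# One constructor per production of the grammar in the comment.
from dataclasses import dataclass

@dataclass
class Var:            # x
    name: str

@dataclass
class Lam:            # \x . t
    param: str
    body: object

@dataclass
class App:            # t1 t2
    fn: object
    arg: object

@dataclass
class Num:            # n
    value: int

@dataclass
class Div:            # div n1 n2
    left: object
    right: object

# f := \x . div 1 0
f = Lam("x", Div(Num(1), Num(0)))
```

A semantics is then just a relation (or function) defined over these trees - which is what the operational semantics below pins down.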
A big-step operational semantics is similar to a small-step operational semantics and so I won't describe it except to say that it rather resembles an implementation of an eval/apply Scheme interpreter (but as a purely mathematical relation on terms, and usually using the fundamental LC idea of substitution instead of an "environment"). A big-step semantics often suppresses both nontermination and "bad" states arising from nondeterministic rules (think concurrency).
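Here's a sketch of what such a big-step evaluator looks like in Python (assumptions mine: tuple-encoded terms, closed terms only, integer division). Note that it uses substitution rather than an environment, as the comment describes:

```python
def subst(term, x, v):
    """Replace free occurrences of variable x in term with closed value v."""
    tag = term[0]
    if tag == "var":
        return v if term[1] == x else term
    if tag == "num":
        return term
    if tag == "lam":
        _, y, body = term
        return term if y == x else ("lam", y, subst(body, x, v))  # y shadows x
    if tag == "app":
        return ("app", subst(term[1], x, v), subst(term[2], x, v))
    if tag == "div":
        return ("div", subst(term[1], x, v), subst(term[2], x, v))

def evaluate(term):
    """Big-step: relate a term directly to its value (a num or a lam)."""
    tag = term[0]
    if tag in ("num", "lam"):
        return term                          # values evaluate to themselves
    if tag == "app":
        fn = evaluate(term[1])               # "eval" the function...
        arg = evaluate(term[2])              # ...and the argument (strict)
        _, x, body = fn
        return evaluate(subst(body, x, arg)) # ..."apply"
    if tag == "div":
        _, n = evaluate(term[1])
        _, m = evaluate(term[2])
        return ("num", n // m)               # raises if m == 0: no derivation

# (\x . div x 2) 6 evaluates to 3:
print(evaluate(("app", ("lam", "x", ("div", ("var", "x"), ("num", 2))), ("num", 6))))
```

Notice how big-step hides the intermediate states: you get the final value (or a crash), but never see the individual transitions that small-step makes explicit.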
A small-step operational semantics is closest to the notion of a dynamics in physics (especially in the discrete case of difference equations). Let's define one. We have choices for what "div n 0" should be: it could be some arbitrary value of our choosing (or we could extend div with an extra argument to give as the answer in this case), we could "get stuck" (i.e., not allow any transitions out of such a state), or we could add an error state to the syntax and evaluate div n 0 to error (this generalizes pretty easily to try/catch, assert(), ...). The semantics are (technically we are defining a relation called -> on terms):
If mathematical division gives an answer, so does our division program:
n/m = p
------------
div n m -> p
(I've evilly conflated mathematical numbers with their syntactic representation in our language.) We also need more reduction rules:
t -> t'
---------------------
div t t2 -> div t' t2
t2 -> t2'
---------------------
div v t2 -> div v t2'
If we had added an error term, we'd also add the following rule:
----------------
div n 0 -> error
(Exercise: what happens if we wrote "div t 0", as I did in an earlier edit?) Application proceeds by reducing the function first (we'll end up with either a lambda or a number; the number case could be ruled out by typing our calculus):
t1 -> t1'
---------------
t1 t2 -> t1' t2
There are more rules for substitution and for reducing the argument (the usual choice gives strict semantics, but other choices are possible). Then we prove certain things about the language using induction, usually induction on derivations (just a special case of well-founded induction). We can't prove much about an untyped calculus, though.
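The rules above can be collected into a single step function - a sketch of the "->" relation as a partial function on tuple-encoded terms (assumption mine: a stuck term steps to None, and "div n 0" is the stuck variant rather than the error variant):

```python
def is_value(t):
    return t[0] in ("num", "lam")   # numbers and lambdas are fully evaluated

def subst(term, x, v):
    """Replace free occurrences of variable x in term with closed value v."""
    tag = term[0]
    if tag == "var":
        return v if term[1] == x else term
    if tag == "num":
        return term
    if tag == "lam":
        _, y, body = term
        return term if y == x else ("lam", y, subst(body, x, v))
    return (tag, subst(term[1], x, v), subst(term[2], x, v))  # app / div

def step(term):
    """One transition of ->, or None if no rule applies (stuck)."""
    tag = term[0]
    if tag == "div":
        _, t1, t2 = term
        if t1[0] == "num" and t2[0] == "num":
            if t2[1] == 0:
                return None                    # div n 0: stuck (no error term)
            return ("num", t1[1] // t2[1])     # div n m -> p
        if not is_value(t1):                   # reduce the left operand first
            t1p = step(t1)
            return None if t1p is None else ("div", t1p, t2)
        t2p = step(t2)                         # then the right operand
        return None if t2p is None else ("div", t1, t2p)
    if tag == "app":
        _, t1, t2 = term
        if not is_value(t1):                   # reduce the function first
            t1p = step(t1)
            return None if t1p is None else ("app", t1p, t2)
        if not is_value(t2):                   # then the argument (strict)
            t2p = step(t2)
            return None if t2p is None else ("app", t1, t2p)
        if t1[0] == "lam":
            return subst(t1[2], t1[1], t2)     # beta: (\x . t) v -> t[v/x]
        return None                            # applying a number: stuck
    return None                                # values and free vars: stuck

# f v -> div 1 0 -> stuck, exactly as in the trace below:
f = ("lam", "x", ("div", ("num", 1), ("num", 0)))
t = step(("app", f, ("num", 7)))   # ('div', ('num', 1), ('num', 0))
print(step(t))                     # None: stuck
```

Because step returns at most one successor per term, it's also a concrete witness that this particular "->" is deterministic - which the proof sketch at the end relies on.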
Now your first two questions should be clear, e.g.:
f v
-> div 1 0 -- by substitution rule
stuck! (i.e., no rule applies)
If we'd added the error rule, we'd instead get -> error
There's a bit more work to show f t gets stuck (gives error) for all terms t - maybe an induction or something, right? But wait! That's not actually true! What if t is a nonterminating function call? What about f x (where x is a variable)? Then we have a free variable in the program, and again get stuck! For your last question, you could add recursion either by hand (look up "fix") or with the Y combinator, then show that (again, let's look at values first, since an arbitrary term might error in a different way):
f v -> .... -> f v
in only a few steps. Then we'd use our proof that "->" is deterministic (i.e., a partial function from terms to terms) to conclude that f v never reduces to a value, since otherwise one of the terms in the loop above would be a value too, but none of them are. QED. There's also the formal/mechanized logic/proving aspect (omitted for brevity). It's nice that it's easy to study lambda calculi with theorem provers since, as you can see, it's easy to forget a rule or make a small mistake in a proof.
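For the "fix"/Y-combinator route, here's a sketch in Python lambdas of the call-by-value variant (the Z combinator; the plain Y combinator would loop forever under strict evaluation while building itself). The eta-expansion (lambda v: ...) is what delays the self-application:

```python
# Z = \f . (\x . f (\v . x x v)) (\x . f (\v . x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion with no self-reference anywhere in the definition:
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120

# By contrast, loop = Z(lambda self: lambda v: self(v)) gives exactly the
# "f v -> ... -> f v" cycle above; running loop(0) in Python would blow the
# recursion limit rather than ever produce a value.
```

This is the same trick "fix" packages up as a language primitive: the fixed-point operator manufactures the recursive call, so the semantics only needs one extra unfolding rule.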
(I'm being quite imprecise here: I haven't shown what rule induction is or why it's valid, or how the rule definitions work, or ... )