OO says everything is an object. Even though Java has non-object primitives, we're still gonna classify Java as OO.
> Lambda calculus does not have any evaluation rules.
> The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols.
It's not clear to me why this makes Lisp not in the family of Lambda implementations.
> In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping.
That's true. Every modern Lisp (Scheme, Clojure, Racket) has lexical scoping, and Common Lisp uses lexical scoping by default.
> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.
Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.
> It's not clear to me why this makes Lisp not in the family of Lambda implementations.
To be clear, I started my comment by writing "if it is a realization, then it is one with [the following differences]." Lambda calculus was such a good idea that pretty much anything with function abstractions can be described by some variation of it. It's the dynamic scoping that causes the main issues here, though, and suggests lambda calculus was not a significant motivation in the definition of McCarthy's Lisp. Yet he was still aware enough of it to call the abstraction operator "lambda."
>> Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.
> Again this contributes to the notion that LISP/Scheme/Lambda Calculus were "discovered", not that Lambda calculus has an explicit pedigree.
I don't see how that follows. Sussman was a math undergrad and PhD and was well aware of developments in logic, and he influenced Steele, who created the quite-influential Scheme and went on to be one of the main people on the standardization committee for Common Lisp. This isn't even mentioning all the work people have done in PL research with typed lambda calculi (going back to corrections to Church's attempt to use lambda calculus as a foundation for mathematics), which has influenced the designs of many type systems in modern programming languages.
That notion is wrong (at least with a very high likelihood), and it's usually stated by people who fetishize the lambda calculus but know little of its long evolution. It's just your ordinary case (of hubris) where people aesthetically drawn to something describe it as inevitable or even a law of nature. And I know it's wrong in part because of the following quote:
> We do not attach any character of uniqueness or absolute truth to any particular system of logic. The entities of formal logic are abstractions, invented because of their use in describing and systematizing facts of experience or observation, and their properties, determined in rough outline by this intended use, depend for their exact character on the arbitrary choice of the inventor.
This quote is by the American logician Alonzo Church (1903-1995) in his 1932 paper, A Set of Postulates for the Foundation of Logic, and it appears as an introduction to the invention Church first described in that paper: the (untyped) lambda calculus [1].
The simpler explanation, which has the added benefit of also being true, or at least supported by plentiful evidence, is that the lambda calculus was invented as a step in a long line of research, tradition and aesthetics, and so others exposed to it could have (and did) invent similar things.
If you're interested in the real history of the evolution of formal logic and computation (and algebra) you can find the above quote, and many others, in a 300-page anthology of (mostly) primary sources that I composed about a year and a half ago [2]. They describe the meticulous, intentional invention of various formalisms over the centuries, as well as aesthetic concerns that have led some to prefer one formalism over another.
[1]: Actually, in that paper, what would become the lambda calculus is presented as the proof calculus for a logic that was later proven unsound. The calculus itself was then extracted and used in Church's more famous 1936 paper, An Unsolvable Problem of Elementary Number Theory in an almost-successful attempt to describe the essence of computation. That feat was finally achieved by Turing a few months later.
Lambda calculus originated from research in formal logic, which is about manipulating symbols according to precise rules that would capture reasoning. It is a compelling way to combine variable binding, equality and substitution into a model of "function calls" - even if the purpose was to formalize arithmetic computation and reasoning.
At some level, reasoning is what programming is about as well! The notation and rules may change, but ultimately we want to make the machines do things and at some level, we need abstraction mechanisms. Recursive procedures are such a mechanism and it can be expressed as a lambda term that involves a fixed-point combinator, or machine code.
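For concreteness, the fixed-point claim above can be sketched in Python. Since Python is strictly evaluated, this uses the Z combinator (the eta-expanded variant of Y); all names here are illustrative, not from the original discussion:

```python
# The Z combinator: a fixed-point combinator that works in a strict
# (call-by-value) language. Z(f) computes a fixed point of f without
# any named recursion.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial written without referring to itself by name: the recursive
# call is supplied by the combinator as the parameter `rec`.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

The same term works in any language with first-class functions, which is the sense in which recursive procedures "can be expressed as a lambda term" rather than requiring a built-in recursion mechanism.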
It is easy to model and understand many things using lambda calculus or functional programming techniques, depending on whether the interest is theoretical/formal or practical.
To quote Peter Landin (heavily influenced by McCarthy and LISP and author of 'The next 700 programming languages'):
> A possible first step in the research program is 1700 doctoral theses called "A Correspondence between x and Church's λ-notation."
Maybe people think this was different in the late 1950s?
Let's read McCarthy's paper 'Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I', where he explicitly cites Church and introduces lambda notation.
I would consider that paper part of the phenomenon that is LISP; would that not settle the question? Lambda calculus gives little guidance on implementation, but I think it does not diminish LISP in any way to say it is "based" on lambda calculus.
And I do not find the linked article adds any value, but I am very glad to read the HN discussion to find gems like the above (even if I should rather have slept for the past few hours).
https://softwareengineering.stackexchange.com/questions/1076...
Turing's thesis talks about some system transforming an input to an output. Clearly, a TM could simulate the actor itself in your proof. If it is not able to simulate the entire actor-collaborator system, that's only because you may have given the collaborator (whatever it is that generates the messages) super-Turing powers. You assumed that there could be something that could issue a `stop` after an arbitrary number of `go`'s, but you haven't established that such a mechanism could actually exist, and that's where the super-Turing computation actually hides: in a collaborator whose existence you have not established. As you have not established the existence of the collaborator, you have not established the existence of your actor-collaborator system. I claim that a TM cannot simulate it simply because it cannot exist (not as you describe it, at least).
So here's another "proof": The actor machine takes two messages, Q and A(Bool), and it gets them alternately, always Q followed by A. Every time it gets a Q, it increments a counter (initialized to zero) by 1 to the value N, and emits a string corresponding to the Nth Turing machine. It then gets a message A containing a value telling it whether the Nth TM terminates on an empty tape, and in response it emits A's argument back. And here you have an actor machine that decides halting!
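The protocol in this mock "proof" can be sketched to make the point concrete (all names here are hypothetical): the actor itself is trivially Turing-computable, because the halting verdict comes entirely from whichever collaborator sends the A messages.

```python
# Sketch of the actor described above. It alternates between Q and A
# messages: on Q it emits an index standing in for "the Nth Turing
# machine"; on A it merely echoes back the Boolean it was handed.
class HaltingEchoActor:
    def __init__(self):
        self.n = 0  # counter of Q messages received so far

    def on_q(self) -> str:
        self.n += 1
        return f"TM_{self.n}"  # placeholder for an encoding of the Nth TM

    def on_a(self, halts: bool) -> bool:
        # The actor contributes no computation here: the halting answer
        # comes entirely from the collaborator that sent this A message.
        return halts
```

Every method is plainly computable, so any "halting-deciding" power of the combined system must reside in the collaborator, whose existence is exactly what the argument above says has not been established.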
[2]: Ron Pressler, Finite of Sense and Infinite of Thought: A History of Computation, Logic and Algebra