Tcl occupies a very nice place in this regard: its homoiconicity and symmetry (and late binding) come from text. The outside world, to a very close approximation, is also made of text. Subprocesses, sockets, FFI, files and user interaction just feel more native - in the image-oriented languages, I always find myself fighting the ambassador who imperfectly represents these things in forms the kingdom understands.
Just a feeling. They're all wonderful languages, and this article speaks well to some of the "why".
You'd be surprised.
Joking aside, you seem to fixate on an implementation detail. It's just that the computing world, or rather the Unix one, is "made of text".
The world is actually made of objects and data, and this is closer to Lisp and Smalltalk.
I have been programming professionally in Common Lisp (off and on) since the 1980s but there is something equally magical about Smalltalk. I have often thought that Smalltalk could be the language I use after I retire (I am in my 60s and I will probably stop working in about ten years).
As a thought experiment, imagine Lisp without macros. It's not hard; after all, "The Little Schemer" covers metacircular interpretation without ever mentioning macros. So what's going on? Apparently we don't need macros! But, we could add macros to a Lisp by reifying them in the metacircular interpreter. There's actually a feature in plain sight which makes this possible, and it's the humble (quote) special form. This is what makes code and data intermix so cleanly in Lisp.
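To make the thought experiment concrete, here is a minimal sketch of that idea in Python (not Lisp): programs are plain nested lists, and the entire trick of (quote) is that the evaluator hands the list back as data instead of evaluating it. The `seval` name and the tiny environment are invented for illustration.

```python
# Minimal metacircular-style evaluator sketch: code is nested lists,
# and "quote" returns its argument unevaluated -- code becomes data.
import operator

ENV = {"+": operator.add, "*": operator.mul}

def seval(expr, env=ENV):
    if isinstance(expr, str):          # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):     # number etc.: self-evaluating
        return expr
    head, *rest = expr
    if head == "quote":                # the whole trick, in one line
        return rest[0]
    fn = seval(head, env)              # ordinary application
    return fn(*[seval(arg, env) for arg in rest])

print(seval(["+", 1, 2]))             # 3
print(seval(["quote", ["+", 1, 2]]))  # ['+', 1, 2] -- unevaluated
```

A macro facility would just be one more case in `seval` that calls a user function on the unevaluated list before evaluating the result, which is exactly the reification the parent comment describes.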
This is why languages like Julia and Monte are not shy about using "homoiconic" to describe their language design; a standard library compiler is just as good as a compiler in the core semantics, as long as it's easy to use and meshes well with the rest of the language.
Wikipedia has a nice entry on this. In short, "homoiconicity is where a program's source code is written as a basic data structure that the programming language knows how to access."
def x :Int := 42 # evaluated statement
def ast := m`def x :Int := 42` # quasi-quoted Monte fragment
eval(ast, safeScope) # easy evaluation
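For readers without Monte handy, a rough Python analog of those three lines, using the stdlib ast module (far less integrated than Monte's quasiquote syntax, since the round trip goes through text):

```python
# Evaluate a statement, reify the same source as a tree, then
# evaluate the reified form -- the Python analog of the Monte lines.
import ast

x = 42                                    # evaluated statement
tree = ast.parse("x = 42", mode="exec")   # source reified as an AST
exec(compile(tree, "<ast>", "exec"))      # evaluating the reified form
print(ast.dump(tree.body[0]))             # the tree is ordinary data
```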
Now, it happens that m`` is a library written in Monte itself, but that's unsurprising when you consider how much of the Monte compiler is also self-hosting. Since Monte is a complex and rich language, the homoiconic representation is equally rich:

def m`def @lhs := @rhs` := ast # pattern-matching!
[lhs, rhs] # [mpatt`x :Int`, m`42`]

Early on in my Scheme career, I found the tools to create macros a bit confusing and arcane .. but I still knew I wanted macros.
I ended up writing code transformers - a poor man's macro system if you will, taking my "high level" foo.scm through a couple of translation layers that turned the abstractions I wanted into running code. It was literally:
$ scheme expand-foo.scm < myprog.scm > myprog1.scm
$ scheme expand-bar.scm < myprog1.scm > myprog2.scm
$ scheme myprog2.scm
What made this possible - trivial - was the homoiconicity: simply by (read)ing a program from stdin I had a list of lists that I could pattern match over and make the transformations I wanted. Exactly as my final program did with ordinary data.

In some ways, this was a more satisfying approach than using define-syntax / syntax-case, which differ from the rest of Scheme in somewhat uncomfortable ways. That macros could never be first class eventually put me off, but that's another story :).
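The shape of such a transformer pass can be sketched in a few lines of Python: parse s-expressions into nested lists, walk them, and expand a form. The `swap!` form and all the names below are invented for illustration, not taken from the original scripts.

```python
# Poor man's macro pass: read s-expressions as nested lists,
# pattern match, and rewrite -- exactly like processing ordinary data.
import re

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        out = []
        while tokens[0] != ")":
            out.append(parse(tokens))
        tokens.pop(0)                      # drop the closing ")"
        return out
    return tok

def read(src):
    return parse(re.findall(r"\(|\)|[^\s()]+", src))

def expand(form):
    if isinstance(form, list):
        if form[:1] == ["swap!"]:          # (swap! a b) ->
            a, b = form[1], form[2]        # (let ((tmp a)) (set! a b) (set! b tmp))
            return ["let", [["tmp", a]],
                    ["set!", a, b], ["set!", b, "tmp"]]
        return [expand(f) for f in form]
    return form

print(expand(read("(begin (swap! x y))")))
```

A real pipeline would then print the result back out as text for the next stage, which is trivial since the representation is just lists.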
That's what Lisp systems do too. Program elements like classes, functions, methods, symbols, ... are first class objects. With something like CLOS you have a similar level of object-oriented meta-programming capabilities.
Many Lisp systems additionally offer the ability to execute Lisp data directly with a Lisp interpreter, and Lisp has a simple data representation for Lisp programs: Lisp data.

Smalltalk, OTOH, uses text as source code, usually compiled to byte code.
> because Lisp source code is expressed in the same form as running Lisp code
Only if you use a Lisp interpreter. Otherwise the running Lisp code might be machine code or some byte code.
> Smalltalk goes one further than Lisp: it’s not that Smalltalk’s source code has no syntax so much as Smalltalk has no source code.
That's a misconception. Smalltalk has source code. As text. It's just typically managed by the integrated development environment.
It's actually Lisp which goes further than Smalltalk, because Lisp has source as data and can use that in Lisp interpreters directly for execution.
That is not completely correct. It uses a mixture of text (strings) and objects. The class graph is composed of objects, while the method bodies are stored as byte code objects and (optionally) strings.
To edit the class graph, it presents (parts of) it as text that you can edit (see ClassDescription>>definition in Squeak). E.g. to allow you to edit the Behavior class, it generates the following string and presents it in a text editor:
Behavior subclass: #ClassDescription
instanceVariableNames: 'instanceVariables organization'
classVariableNames: 'TraitImpl'
poolDictionaries: ''
category: 'Kernel-Classes'
Notice that this is a Smalltalk statement that can be evaluated. If you edit this string and accept it, the system will evaluate the code, which updates the objects describing the class. The primary representation is not textual, but an object graph.

A method is stored as byte code, and optionally as a string. The system will present you with a textual representation that you can edit, which is either the stored string or the decompiled byte code (which loses the original comments, indentation, and variable names). You can strip the textual representation of all methods to slim down the image (see SmalltalkImage>>abandonSources).
You can also file in/out a textual representation of classes and their methods. But that is not the primary representation of the code.
All changes to the class graph are also stored as changes in text. Every class has a textual representation. You can load an earlier image and replay this. This is basically like loading Lisp code into a Lisp image.
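The edit-the-definition workflow described above can be sketched in Python (a hedged analog only; `Point`, `slots`, and `definition_text` are invented names, standing in for ClassDescription>>definition): the class is an object, the "source" shown to the user is text generated from it, and accepting the edited text evaluates it to rebuild the object.

```python
# Object graph -> generated text -> edit -> evaluate -> updated objects.
class Point:
    slots = ["x", "y"]

def definition_text(cls):                  # analog of ClassDescription>>definition
    return f"class {cls.__name__}:\n    slots = {cls.slots!r}\n"

src = definition_text(Point)               # object graph rendered as text
src = src.replace("'y'", "'y', 'z'")       # the user edits the text...
ns = {}
exec(src, ns)                              # ..."accepting" evaluates it,
Point = ns["Point"]                        # updating the class object
print(Point.slots)                         # ['x', 'y', 'z']
```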
> it will evaluate the code which updates the objects describing the class.
This is like Lisp. The Lisp code manipulates the runtime class graph.
> But that is not the primary representation of the code.
The primary representation is text. That's what the IDE presents you when you edit the method.
I'm not sure this is true. Surely any programming language that lacks macros would be more powerful with them.
> Smalltalk doesn’t need macros because it has classes, powerful introspection capabilities, and simple expressive syntax (especially blocks) instead.
There's a debate to be had about whether compile-time macros are superior to passing blocks as arguments. It is also easy to make your language's parser extensible, or easy to modify, without traditional Lisp macros. Metalua does something like this.
What makes both Lisp and Smalltalk interesting is that there's no difference between language and library; the constructs you create yourself are on equal footing with the ones most consider built in. Macros let you build special forms to control evaluation semantics, Smalltalk simply uses blocks [ ] to delay evaluation, and both languages are their libraries. Lisp has functions/macros, if/cond, etc.; Smalltalk has objects used in a way you simply do not see in other object-oriented languages. Smalltalk has no if statement, no while statement, no reserved words beyond true, false, nil, self, super, and thisContext; everything else is library, including all control flow constructs, which are implemented with objects, classes, inheritance, and polymorphism.

They are both "pure" languages in a sense, and that pleases some people greatly. If you haven't programmed in Smalltalk, you really have no idea what object oriented actually means at a deep level. All of the popular so-called OO languages are actually just procedural languages with hundreds of special keywords that happen to have classes; the languages themselves aren't built from classes and objects, they're procedural, defined by the compiler writer as special forms you cannot create yourself. Having objects, and being truly object oriented all the way down, are drastically different things.
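The "control flow as library" idea can be sketched in Python (a toy analog; the class names are invented, not Smalltalk's): booleans are objects, `ifTrue:ifFalse:`-style selection is a message send resolved by polymorphism, and lambdas stand in for Smalltalk blocks to delay evaluation.

```python
# No if statement used for the selection itself: each boolean class
# answers the message by running the appropriate block (thunk).
class STTrue:
    def if_true_else(self, then_blk, else_blk):
        return then_blk()                  # True runs the then-block

class STFalse:
    def if_true_else(self, then_blk, else_blk):
        return else_blk()                  # False runs the else-block

def st_less(a, b):
    return STTrue() if a < b else STFalse()   # bridge from host booleans

result = st_less(3, 4).if_true_else(lambda: "smaller", lambda: "not smaller")
print(result)                              # smaller
```

In real Smalltalk even `st_less` is unnecessary: `<` already answers a True or False object, so the dispatch is turtles all the way down.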
That's not really true. Smalltalk is text-based, too, but hides it behind an integrated source management system. When you edit a method in a Smalltalk IDE, you edit TEXT. The text then gets compiled, typically to some byte code, which gets interpreted by the Smalltalk virtual machine (which might also have some way to convert it to machine code).
If the text of the source code is not available, then Smalltalk needs to disassemble the byte code. But the disassembled byte code is not equal to the original source.
The sources are EXTERNALLY kept as text, outside the running system.
Just download your favorite Squeak and check out the contents. There is a huge sources file and there is a changes file. Those are text files with the sources and its changes.
This is actually different from some Lisp systems, where the source actually is data inside the running Lisp and the Lisp interpreter runs this data. If you edit this code, Lisp presents you a structure editor, which works on this data - not on text. It's not what a typical Lisp system does today, but it is still a possibility. Xerox's Interlisp used to use a structure editor for Lisp source code as data, and a source code management system based on that.
This is different from Smalltalk, where the 'interpreter' runs compiled byte code, and the byte code is generated from source code, which is actually text and stored outside the Smalltalk image. The Smalltalk image then has source code management data, like an index in each method which points to its external source.
Typical Lisp systems do the same: they record the source code location for functions and other things. If you edit the source for a method in a typical Smalltalk environment, it will retrieve the text for the method and let you edit it in a text editor. In a typical Lisp environment, the Lisp system will present you the whole text file and just jump to the definition using the editor...
You speak as if "macros" means "macros as implemented in the C preprocessor". Lisp macros, as I understand it, operate on the parsed syntax tree, not on the file text level, and are expanded at macroexpansion time (during compilation or evaluation), not by textual substitution.
What most of these articles seem to miss is that Java's designers were themselves expert Lispers and Smalltalkers, that they most certainly realized all that, and that Java's success is a consequence of them understanding exactly why not to repeat the same design. Design doesn't live in a vacuum. Design is shaping a product not just to fit some platonic ideal, but reality, with all its annoying constraints.
To understand why Lispers and Smalltalkers designed Java the way they did, I recommend watching James Gosling's talk, How The JVM Spec Came To Be[1], and the first 20 minutes or so of Brian Goetz's talk, Java: Past, Present, and Future[2].
[1]: https://www.infoq.com/presentations/gosling-jvm-lang-summit-...
> Design doesn't live in a vacuum.
Java was designed as a modernized/slim replacement for C++ for developing set-top boxes and PDAs. What Sun took from Lisp and Smalltalk, in some limited form, was the runtime: a managed runtime with GC, code loading, typed objects and a virtual machine. VMs were thought of as an advantage on machines with little memory, because of compact code representations. Various Lisps and also Smalltalk had that. But that was mostly it. The language level wasn't influenced by Lisp at all: no Lisp syntax, no evaluator, no lambdas, no code-as-data, no macros, no support for functional programming, ...
Here is a quote of his:
"And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?"
Regarding Java's "success" (which falls in the same category as PHP's success, Python's success, Javascript's "success" and so on) I urge you to consider it as a classic example of "Worse is Better".
Lisp (and Smalltalk and Erlang and Forth and ..) do not have mass-market appeal because they do not easily hand out a feeling of immediate rewards, that a lot of newbie programmers find so attractive. They require more upfront investment from the user before they unveil their secrets, before one "gets it".
How does that make what I wrote completely wrong? Good design is a compromise. Gosling presented Java's design as a wolf in sheep's clothing. They figured that the features most important in Lisp and Smalltalk are memory safety, GC, dynamic linking and reflection, shoved all of them into the JVM, and wrapped them in a non-threatening language that could actually gain significant traction. That's what good design looks like.
> Regarding Java's "success" (which falls in the same category as PHP's success, Python's success, Javascript's "success" and so on) I urge you to consider it as a classic example of "Worse is Better".
Eh. Unlike PHP (and maybe Javascript and Python, too), more useful good software has been written in Java than in any other language in the history of computing, with the possible exception of C. I don't know by what metric -- other than personal aesthetic preference -- you'd consider it "worse" (or, conversely, what your metric for success is). Remember that Java was designed to be a conservative language for industry use. In his article outlining Java's design[1], Gosling writes: "Java is a blue collar language. It’s not PhD thesis material but a language for a job. Java feels very familiar to many different programmers because I had a very strong tendency to prefer things that had been used a lot over things that just sounded like a good idea." I think it is funny to doubt Java's success considering its stated mission, goals and non-goals. Smalltalk also tried to become a commercially successful language. I think it is equally funny not to see it as a failure in that regard, which was certainly among its goals. The extensive work done on Smalltalk (Self, really) at Sun and elsewhere was quickly absorbed by Java, and so Smalltalk has certainly achieved success in enabling Java.
[1]: http://www.win.tue.nl/~evink/education/avp/pdf/feel-of-java....
This plays havoc with your ability to do static analysis, and languages that hinder static analysis should not be used in real-world systems. If the earliest you find out about errors is in a running system, it's far too late and you are hosed.
This is why the Lisp and Smalltalk Evangelism Strikeforces have been met with decades of failure, while the Rust Evangelism Strikeforce is getting on with a massive project of digital tikkun olam.
If that were true, then Javascript, Python, Ruby, PHP, etc would also have failed. Smalltalk and Lisp failed to become popular in the modern world for reasons having nothing to do with late binding.
How about wait until Rust is at least as widely used as Ruby before going on about how much of a failure Smalltalk and Lisp are. Let's see if Rust stays around as long as Smalltalk & Lisp have, or whether it has that kind of influence on other languages.
https://pointersgonewild.com/2015/09/24/basic-block-versioni...
Some highly dynamic language features make analysis really imprecise or really hard (in terms of computation cost). There has been quite a lot of work on making static analyses that can handle such language features (for example control flow analysis helps analyzing code that uses dynamic dispatch or closures a lot but cost of the analysis is exponential in terms of the precision level most of the time). Sometimes people tackle analyzing highly dynamic languages like JavaScript but at a huge time cost in certain cases [1]. I'd prefer using a language designed with static analysis in mind if I were to prove certain properties about my code.
[1]: http://www.cs.ucsb.edu/~benh/research/papers/dewey15parallel...
That's a feature, not a bug. Late binding rocks.
> This plays havoc with your ability to do static analysis, and languages that hinder static analysis should not be used in real-world systems.
The real world is full of late bound languages; much of the internet runs off late bound languages including this site. There's a million Rails and Python apps out there, so basically this "opinion" of yours is not bound to reality.
All of biology is late-bound, cells communicate via message passing, so the oldest and most complex real world systems we know of are late bound. To dismiss late binding is naive at best.
So much so that you can see ripples everywhere of late-bound languages being slowly (not saying the transition is complete) replaced by static languages. Even the one true bastion of late binders, web development, is seeing massively increasing adoption of languages like TypeScript on the frontend and languages like Go on the backend (see adoption at YouTube, Dropbox and so on).

Outside of web development and simple trivial admin scripts, the other major source of late-bound software was Apple's Objective-C. Which is getting replaced by Swift, a language that heavily favors static typing and functional paradigms.
> There's a million Rails and Python apps out there, so basically this "opinion" of yours is not bound to reality.
There are a million trivial CRUD apps that don't do much of worth and whose death the world would not really mind. DHH, Rails's author, didn't mind restarting his Basecamp servers 400 times a day because of a memory leak. These people are not software engineers. They're cave men using glue other people made to tie together rocks to build stone walls. Which will then fall as soon as the weather stops being nice. Security, reliability, performance - what do they know about any of these things? But hey, you can do cute things like 3.days.from_now, what a great framework!