For most problems:
There will be a ~80 line C solution.
There will be a ~120 line Java solution.
There will be a ~50 line JavaScript solution.
There will be a ~40 line Visual Basic solution.
There will be a ~30 line PHP solution.
There will be a ~25 line raw Python solution.
There will be a ~20 line typical Lisp solution.
There will be a ~15 line Python solution using some obscure library.
There will be a ~10 line Lisp solution using some clever hack.
There will be a 1 line J solution.
(other languages omitted, and line estimates are pulled out of my ass)
Whether due to unfamiliarity or genuine opacity, some of the shorter solutions will be incomprehensible to some (most?) people. So while the shorter solutions are technically more expressive, what good is that if you can't read them, let alone write them?
But yeah -- typical line count proxies (inversely) for expressiveness.
But the difference in line count between Java + a bunch of libraries and Python + a bunch of libraries is a lot. And the J one-liners rarely involve libraries.
How many people can read advanced math formulas? Mostly none. So what good is it that they are used?
The answer is that it doesn't really matter if outsiders can't read it. Yes, it raises the hurdle to enter the field, but in the long term the benefits drastically outweigh the initial learning cost.
Same is true (even more so, I would say) for programming languages.
Do they? I feel like a "citation needed" is in order.
There's a target for math formulas, and it's mathematicians. Those can read math formulas, even advanced ones. If outsiders (that is, laymen) can't read them, it's ok: they aren't supposed to be working with math most of the time.
The target for programming languages is programmers. But unlike math formulas, the outsiders who can't read a language like J aren't laymen; they're still programmers. The problem is not that non-programmer Joe Sixpack can't read J. The problem is that programmers can't read it, or find it unwieldy.
You could argue, 'that's ok, the target for J is people who comprehend and are productive with something like J'. Sure, but that's precisely what we're calling out here: some languages, though technically expressive, put off not just outsiders to programming, but also programmers.
I definitely agree that "can outsiders, or even less experienced coders, understand it?" is orthogonal to "expressiveness." I went a little harder than I intended listing that as a drawback to expressiveness, although it's certainly a complementary feature: given two languages of equal expressiveness, the one that encapsulates that range in more comprehensible terms is "better" (there are many other factors to consider, hence the mitigation quotes).
I love the fact that in J:
mean =: +/%#
Meaning that you can express, from primitives, calculating the mean average with just four characters. And even with my very weak J, that's clear. But then I look at something I wrote over ten years ago: candidates =:,/((_4]\4#"3 (_1 + 2 * #:i.2^9) (* &. |:)" 1 _ list) * (>rowSigns))
and I have zero clue how to interpret that. It's almost certainly not idiomatic J, so it's possible (likely) that it looks like garbage to a J expert as well. Certainly J's terseness doesn't lend itself to commenting. :-)

For one thing, there is no real way to use this definition to compare different languages, only the same language with one extra feature.
For another, even for individual features, it is too coarse. The very fact that it doesn't count syntactic sugar as expressive seems wrong: most people would agree that a language with more syntactic sugar is more expressive, since it lets you avoid writing boilerplate. By this definition, list comprehensions and switch expressions, for example, add no expressive power to a language.
Even if we ignore syntactic sugar, the comments point out another counterintuitive property of this definition: any language that includes a facility to convert expressions to ASTs is maximally expressive, since that facility can be used to distinguish any two expressions. This includes things like Lisp's `quote' special form or C's `#` stringizing operator, but also built-in code parsers. So C++, for example, added no expressive power to C by this definition, since C was already maximally expressive:
#define quote(x) #x
if (strcmp("3 + 3", quote(3 + 3))) { return 1; }

Rather, it seems expressiveness as used in common parlance is closer to ease of comprehension. We like programs that are easy to "think of": "natural" to write in that they are succinct, and "natural" to read in that they mimic what you are conceptually trying to accomplish. This has very little to do with programming language design and more to do with the experience of the language user - aesthetics, really.
I'm sure you probably know this, but for non-C programmers it's not obvious: strcmp() returns 0 when the strings are equal, a negative value if the first string is lexicographically less than the second, and a positive value otherwise.
Since the two strings handed to strcmp() in your snippet are equal, it will return 0 and the if will not execute.
But then you are perhaps back to square one, since the motivation for the question was to get away from the Turing Complete categorization which says nothing about whether it is practical to solve certains problems in a given language.
Shriram’s talk on the same topic is mentioned too. Shriram is a great guy and I’ve enjoyed all my interactions with him. He’s working on improving CS education. There’s a neat program called “Bootstrap” that he’s helped found that aims at teaching algebra through programming. The idea is that functional programming helps students grok important algebraic concepts. (Sorry if I slaughter the description.)
Anyway, I appreciate these people and their work. Cool stuff.
(Disclaimer: my advisor has had both Shriram and Matthias as advisors. I thought they were cool before starting my PhD though, so it’s genuine. :)
>> We can get closer to a formal statement by saying: a feature does not "add expressive power" if we can implement it as a macro that turns it into something in the original language
If adding features that could be implemented as macros doesn't make the language more "expressive", then the inverse should be true: Removing features that could be implemented as macros would not make a language less "expressive".
This seems to favor C++, in which basically anything imaginable can be done with macros, and any other language can be implemented. There's no missing global-level feature that can't be cleverly built at an inline level. Yet all of C++'s granularity in memory management requires a huge amount of verbosity, which is to say it isn't as expressive (to me) as something that runs garbage collection out of the box, makes assumptions about weak vs. strong references, and lets you tinker with them if or when you like.
"Expressive" to me implies being able to get a lot of meaning across in a few choice words, whilst having a broad vocabulary to choose from.
Does adding .reduce() add no expressiveness to JavaScript, just because it's easily implemented with a macro? I'd argue it adds a lot, because it adds a new color to the palette: it gives rise to styles of programming that let one distinguish intentionally between plain loops and functional recursion, and choose, i.e. express, the music according to their own intent.
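To make the point concrete, here's a minimal sketch: a user-land reduce (the helper name myReduce is mine, not a standard API) is trivially definable with a plain loop, so by the macro criterion it adds no expressive power, and yet the two spellings of the same sum read very differently.

```typescript
// A user-land reduce: by the "implementable as a macro" criterion it adds
// no expressive power, yet it enables a style the bare loop doesn't suggest.
function myReduce<T, A>(xs: T[], f: (acc: A, x: T) => A, init: A): A {
  let acc = init;
  for (const x of xs) {
    acc = f(acc, x);
  }
  return acc;
}

// The same computation, two "colors on the palette":
const xs = [1, 2, 3, 4];
let sumLoop = 0;
for (const x of xs) {
  sumLoop += x;
}
const sumReduce = myReduce(xs, (a, b) => a + b, 0);

console.log(sumLoop, sumReduce); // both 10
```

Both programs denote the same value; the difference is entirely in which intent the reader is invited to reconstruct.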
Pfft. C++ macros can't even run a different compiler:
https://github.com/m-ou-se/nightly-crimes/blob/main/yolo-rus...
But a macro is just an integrated language that lets us emulate more expressive features. To me that's no different from implementing a more expressive language in a simpler one, so this claim seems equivalent to saying all Turing-complete languages have the same expressive power because you can emulate them in each other.
In a language where any program halts by construction, the compiler can actually check such properties.
The link mentions exceptions and operator overloading.
I won’t embarrass myself by speculating here :)
A sum type is basically an enum type you can associate arbitrary (but still strongly typed) data with (like associating two strings and an integer for one case, and nothing at all for the second case).
Pattern matching is basically like a switch statement on these sum types, destructuring the data and branching on the different cases. It comes with compile-time checking making sure you've covered all the cases if your business requirements change which is helpful too.
They are useful for implementing state machines which go from one discrete state to another. They've also been used to implement data structures like balanced binary trees, but I think that's a less "everyday" use for them.
I hope I explained them at least sort-of clearly. I think their functionality can be replicated in other ways (like defining an abstract Animal class and implementing Cat and Dog classes, instead of defining an Animal "enum" and a function that branches on each case), but this feels more natural and expressive to me personally.
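A minimal sketch of the idea in TypeScript (the Animal/Cat/Dog names and the cat's payload are just the illustration from above): a discriminated union plays the role of the sum type, an exhaustive switch plays the role of pattern matching, and the `never` trick makes the compiler complain if a new case is added but not handled.

```typescript
// A sum type as a discriminated union: each case carries its own strongly
// typed payload -- two strings and a number for "cat", nothing for "dog".
type Animal =
  | { kind: "cat"; name: string; color: string; lives: number }
  | { kind: "dog" };

// "Pattern matching" as an exhaustive switch on the discriminant,
// destructuring the data and branching on each case.
function describe(a: Animal): string {
  switch (a.kind) {
    case "cat":
      return `${a.color} cat ${a.name} with ${a.lives} lives`;
    case "dog":
      return "a dog";
    default: {
      // If a case is added to Animal but not handled above, this
      // assignment no longer type-checks: the compile-time
      // exhaustiveness check mentioned in the comment.
      const unreachable: never = a;
      return unreachable;
    }
  }
}

console.log(describe({ kind: "cat", name: "Mia", color: "black", lives: 9 }));
// "black cat Mia with 9 lives"
```

This is the "Animal enum plus a function that branches on each case" shape from the comment, just with the case analysis checked by the compiler rather than by convention.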
I guess you could exploit memory layout, different number of variables or timing attacks to distinguish them, but that's cheating IMHO. By that same method you could distinguish anything that compiles to different machine code.
Find two non-trivial algorithms so different that their equivalence is difficult to establish but which will always produce the same results.
Artificial code obfuscation techniques are not allowed. The algorithms can’t just be differently obfuscated versions of each other.
I.e. both algorithms should be in their simplest form given their respective approaches.
Expressive power to me is something akin to "how easy is it to express a problem as close as possible to its natural state in a PL".
Are neural networks more or less expressive than other forms of computation like register machines? What makes them so? And what can we do to make them more expressive while still being trainable?
If both languages are Turing-complete, you can always implement language X in Y, and Y in X. That's basically the definition of Turing-completeness.
Any feature can be implemented in the destination language, even if it is by something as silly as running the source language compiler and executing the resulting bytecode in a x86 VM in your destination language.
Every person i've met who uses turing equivalence as a real argument about PLs has been a deficiency clown
What is that?