It's interesting that they've called this paradigm Articulate Programming, because articulation of the domain is where the problem both starts and ends.
How many times have you worked at a company where staff start off exasperated at how complex IT makes solving a business problem, only to be surprised at just how many details their day-to-day processes actually contain once you've spent time covering all the edge cases and writing tests around the exceptions.
Code becomes complicated because the domain it models is complicated. That's why a good engineer's most important skill is gaining an understanding of the real-world problem domain and expressing it as code. It's also why I'm not worried about AI taking my job any time soon.
Also, they don't do natural language processing, but they do allow you to write method names like "the square-root of _x^2 + _x + _", where the underscores are arguments.
Hence their "efficient" compiler: they parallelize the parser to find an unambiguous parse.
I don't know. This seems like it would be super fun to write and to play with, and there are probably some really cool new things being discovered here that, once matured and properly integrated, could be worked into a usable programming language.
I came across this years ago when trying to get to grips with the then new fangled Ruby language. I kept having to go back to the documentation to remember the best way to convert a string to all uppercase... was it:
str.upper
str.uppercase
str.ucase
str.upcase
str.capitalize (<- Don't even get me started on the regional differences of 'ise' vs 'ize' between US and UK English variants)
???
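For the record, core Ruby's answer is .upcase; .capitalize also exists but does something different, and the other guesses aren't defined at all. A quick check in plain core Ruby (nothing else assumed):

```ruby
s = "hello, avail"

puts s.upcase      # "HELLO, AVAIL"  -- the method Ruby actually defines
puts s.capitalize  # "Hello, avail"  -- first character only, not full uppercase

# None of the other guesses exist on String:
puts s.respond_to?(:upper)      # false
puts s.respond_to?(:uppercase)  # false
puts s.respond_to?(:ucase)      # false
```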
Even here, I would probably start using Avail, then in a few weeks I would be scratching my head and asking, was it:

    Print 1 to 10, as a comma-separated list

or

    Display 1-10, in CSV format

Any programming language that tries to do "natural language" should at least reference the AppleScript HOPL paper [1] and say how it is doing things differently to address the (now) obvious problems.
(Oh, and there is a LOT of good stuff in the paper, definitely worth a read).
1. Why would you want to remember this? It's in the docs. If you use it often, you'll learn it eventually. If you don't, you have no need to remember this. Don't burden your memory unnecessarily.
2. Well-written, searchable documentation adds very little overhead anyway, on the order of a few seconds. Yes, writing from memory is faster, but not by much, and that advantage disappears almost completely if you have good auto-completion and docs support in your editor.
3. There are hundreds of languages out there, even if you learn the library of one language, it doesn't help with other languages at all. Learning to quickly search reference docs, along with learning some basic concepts/assumptions of each language, is much more sustainable if you're thinking of becoming polyglot.
EDIT: please, don't turn HN into Reddit, write a comment if you disagree, instead of downvoting.
However, as a starting point for me, it would be easier if languages all used a common naming convention for things like .upper() etc. A common mistake for me is to try .upper(), then .uppercase(), and then have to leave my code and refer to the docs after repeated runtime errors. If I can make a reasonably intelligent guess within two or three tries, it bodes well for me. Otherwise it is a productivity hit.
To extrapolate: the corollary to .upcase() is, to my mind, .lowercase(), but it is in fact .downcase(). That makes sense logically ('down' is the opposite of 'up'), but syntax-wise I never say to anyone "You need to write your username in down case...". If the naming conventions for functions followed English usage, my hit/miss ratio would improve, and so would my productivity.
NB: I didn't downvote you (I can't anyway as I don't have the necessary karma to downvote immediate child answers to my posts). I thought you raised a valid point worthy of discussion.
But look at the FAQ: http://www.availlang.org/about-avail/documentation/faq.html
Commenter dcw303 in this thread puts it better than I can.
I'll dispute that notion and suggest that 'plain English' type languages are an incredibly foolish diversion into a world where we pretend formal languages don't exist, or are unimportant, or that English is one, or that it can somehow be laboriously contorted into a useful approximation of one.
Wouldn't you have the same issue with just any programming language?
Take a more esoteric example - to capitalise just the first letter of a string. By definition, this is called 'proper case', and most other languages I know use .proper() to achieve this. Except Ruby, which decided to use .titleize() (and there is that 'ize' again just to confound me further).
If a language is going to be 'English-y', then sticking to actual English words for things such as uppercase, proper case etc. would be handy. Going outside of those constraints just increases the guessing game workload for the programmer.
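As it happens, the proper-case example cuts the other way too: .titleize isn't core Ruby at all, it comes from Rails' ActiveSupport. Plain Ruby only offers .capitalize, which uppercases just the first character, and that only adds to the guessing game. A quick check in plain Ruby (no Rails loaded):

```ruby
s = "guess the number"

puts s.capitalize               # "Guess the number"  -- core Ruby, first char only
puts s.respond_to?(:titleize)   # false -- .titleize needs ActiveSupport (Rails)
puts s.respond_to?(:proper)     # false -- .proper isn't core Ruby either
```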
This suggests they have a solution to the ambiguity problem.
Millions of dollars are wasted each year on mistyped ==/= operators, but this is some next level evil.
Not impressed tbh.
    Module "Hello World"
    Uses
        "Avail"
    Extends
        "Avail" =
        (
            "keyword lexer"
        )
    Entries "Greet"
    Body
    Method "Greet" is [ Print: "Hello, world!\n"; ];

I'm pasting a link because I added on-hover popups explaining parts of the snippet, but here is the sample itself:
    Module "Hello World"
    Uses
        "Avail"
    Entries
        "Greet Router"
    Body
    Method "Greet Router" is [
        socket ::= a client socket;
        target ::= a socket address from <192, 168, 1, 1> and 80;
        http_request ::= "GET / HTTP/1.1\n\n"→code points;
        Connect socket to target;
        Write http_request to socket;
        resp_bytes ::= read at most 440 bytes from socket;
        Print: "Router says: " ++ resp_bytes→string ++ "\n";
    ];

[0]: http://www.availlang.org/about-avail/documentation/faq.html#...
http://www.availlang.org/index.html
I tried to link to a more interesting page though. I stumbled upon someone mentioning the language, but don't really get it. It seems to promote building DSLs, but lots of languages like Lisp & Rebol have been doing that for ages.
I understand your intent, but I think it unfortunately ended up confusing matters for fellow readers.
Add to this the fact that the website itself is not very clear already...
OK, I consider myself decent at type theory, but I'm still lost as to what this claim actually means. And if it is what I think it is (that all values have types), I wonder how it doesn't run afoul of the decidability problems of fancy dependent type systems (perhaps 1 has a type, and 2 has a type, but 1 + 2's type isn't 3?).
Formal languages, at root, have exact reference. In a programming language, a symbol ultimately refers to a block of memory, or an operation. The problems of writing a formal language are ones of trying to express a given concept when the relation between symbols and references is known, but the relationship between concept and symbol is not.
In natural language, a symbol ultimately refers to nothing. Its meaning is derived from context, convention, intention. As such, the relationship between concept and symbol is basically known - we know we are talking about red things when we use the word red. The relationship between concept and reference is absolutely unknown - we can never know for sure whether our concept 'red' is adequate to real red objects.
As such, natural languages are a poor model for formal ones. The problems are essentially different. In one, you know how the symbol 'red' relates to operations and memory. In another, you know how the symbol 'red' relates to intention and meaning. Each has different challenges associated.
For the former, we have the whole notion of semantics, the development of tools like valgrind, tests, etc.
Is there anything you can recommend to read? I'm pretty familiar with how computers work on a mechanical level, but I'm pretty ignorant about the theoretical intuitions behind all the more functional stuff.
In e.g. Scala, you can do that:
print( 1 to 10 mkString "," )
It's not 100% human language/grammar, but close (and you have auto-completion using IDE). Why would you need another DSL?
(Not trying to bash Avail, nor promote Scala, just curious for its usecases)
    import Data.List (intercalate)

    putStrLn (intercalate ", " (map show [1..10]))
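And for comparison, the same one-liner in Ruby, again with nothing beyond the core library (just an illustration of how common this pattern is across languages):

```ruby
# Build the range, stringify each element, and join with a separator.
puts (1..10).to_a.join(", ")  # 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
```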
I don't buy natural-language-ish programming languages. The grammar becomes far too complicated very quickly. A simple but flexible grammar, a la most functional languages, is superior.
To be honest, I'm not sure how much clearer this is to read than for example Python.
http://www.availlang.org/_examples/guess-the-number/Guess%20...
Edit: (I just googled "algebraic type lattice" and, while YMMV, I don't recommend it unless you're well versed in scary black magic.)
I didn't get too in-depth with reading the docs, but any language that goes for non-ASCII symbols a la APL is going to be fighting an uphill battle right from the get-go.
Maybe it was a bit easier even for APL, because its interfaces were more immersive than what we have now for non-ASCII input, especially when mixed with regular ASCII.
Type, type, type... oh wait, backslash, dropdown, there's my symbol, enter, type, type, type. That's not very fun. It's even less fun when you're dividing your cognition between what you're actually trying to accomplish and what you have to type.
Just my two cents, no ill will
Languages like Java (and many more) fail to even have a top type, which leads to kludgey add-ons to the type system — boxed, @Nullable, erased generics, annotations, dependency injection, purity annotations, etc., all of which should have been part of a single type system.
I first encountered them when doing work-related reading on dataflow analysis, and I'm very glad I did. Interesting stuff.
(edit) Summary: the topic might look scary at first blush, but it's actually not.
                o
               / \
              o   o
             / \ / \
            o   o   o
           / \ / \ / \
          o   o   o   o
         / \ / \ / \ / \
        o   o   o   o   o
        ...     ...     ...

I like some well-structured separation in my coding languages. It's not a downside for me at all.
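For anyone put off by the word "lattice": the kind used in dataflow analysis really is as small as a diagram like the one above suggests. Here is a minimal sketch in Ruby of the flat lattice used in constant propagation; the :top/:bottom names and the join function are my own illustration, not anything from Avail:

```ruby
TOP    = :top     # "could be any value"
BOTTOM = :bottom  # "no information yet"

# join = least upper bound: combine what two control-flow paths
# tell us about a variable's value.
def join(a, b)
  return b if a == BOTTOM
  return a if b == BOTTOM
  a == b ? a : TOP   # agreeing constants survive; disagreeing ones go to top
end

p join(BOTTOM, 3)  # 3     -- one path knows nothing, keep the constant
p join(3, 3)       # 3     -- both paths agree
p join(3, 4)       # :top  -- paths disagree, so the value is unknown
```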
From 2015: https://news.ycombinator.com/item?id=9043561
https://osmosianplainenglishprogramming.blog/
And we're about to release Español Llano, the Spanish version of Plain English. The new compiler compiles Spanish and English, or both, even in the same sentence. Kind of like a bilingual human.
No, I would not. Don't make assumptions on behalf of others.
But even then, I'm not convinced that "many" programmers would rather write the latter.
Cheesy, closed languages like C forgot that exponentiation was even a thing, or complex numbers. If you shift over to using C++'s "clevernesses", you still don't get exponentiation, because the traditional caret symbol is already taken for exclusive-or, and exponentiation has no other sensible ASCII punctuation.
As for someone’s distant comment that Lisp has been used to create languages for years... sure, if the language you wanted was parenthesized lists with keywords inside the left parenthesis. Which is just Lisp with a few more operations and macros. Yuck.
Rather than trying to get programming languages to look like human language, we need to get human language closer to computer language.
By this I mean that every argument I've ever been in has turned out to either be an intrinsic disagreement about definitions (fixable, and usually we agree) or an intrinsic argument about god (probably not fixable, we will probably not agree).
If the average person understood the beauty of a solid (and unambiguous) definition, I dunno, world peace and rainbows and butterflies? Probably not, but I'd definitely not have to rage-quit socializing so often.
Still, with that said, from a purely intellectual curiosity standpoint this is neat. I hope that the general saltiness of the internet doesn't discourage the devs from working on this some more.