First, the article claims: "the sudoku solver above does a brute force search", but that is specifically not the case. In fact, quite the opposite holds: The Prolog Sudoku formulation shown in the article uses a powerful technique known as Constraint Logic Programming over Finite Domains, which automatically performs pruning before and also during the search for solutions. This effectively eliminates large portions of the search space in practice, and degenerates to brute force only in those cases where almost nothing can be deduced from initial constraints. In the particular case of Sudoku, the pruning is especially effective and can in many cases eliminate the search entirely, since the initial constraints (hints) basically determine the whole solution.
Second, yes, it is easy to write an O(n!) search algorithm in Prolog. However, it is almost as easy to implement much more efficient search algorithms in Prolog. For example, here is Quicksort in Prolog:
    quicksort([])     --> [].
    quicksort([L|Ls]) -->
        { partition(Ls, L, Smalls, Bigs) },
        quicksort(Smalls), [L], quicksort(Bigs).
Note how natural the declarative description of "first the smaller elements, then the pivot element, then the bigger elements" is in Prolog. This only requires a suitable implementation of partition/4, which is easy to write in at most 7 lines of Prolog code.
https://news.ycombinator.com/item?id=13375108
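For comparison, the same declarative reading translates almost word for word into other languages; here is a sketch in Python (illustrative only, not a production sort):

```python
def quicksort(xs):
    """First the smaller elements, then the pivot, then the bigger elements."""
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    smalls = [x for x in rest if x <= pivot]  # the partition step
    bigs = [x for x in rest if x > pivot]
    return quicksort(smalls) + [pivot] + quicksort(bigs)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```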
See "The Genuine Sieve of Eratosthenes" paper, and also:
http://augustss.blogspot.com/2007/08/quicksort-in-haskell-qu...
In Prolog, an arguably more natural sorting method is merge sort, and it can (also) be implemented with asymptotically optimal efficiency in Prolog as follows:
    mergesort(Ls0, Ls) :-
        length(Ls0, L),
        zcompare(C, L, 1),
        halving(C, L, Ls0, Ls).

    halving(<, _, Ls, Ls).
    halving(=, _, Ls, Ls).
    halving(>, L, Ls0, Ls) :-
        Half #= L // 2,
        length(Lefts0, Half),
        append(Lefts0, Rights0, Ls0),
        mergesort(Lefts0, Lefts),
        mergesort(Rights0, Rights),
        merge(Lefts, Rights, Ls).
merge/3 is a built-in in several Prolog systems, and if your system does not provide it, you can easily implement it in less than 10 lines of Prolog code.

Both the quicksort and merge sort Prolog implementations are (even in their respective worst cases) vastly faster than the sorting algorithm given in the article (in both its worst and average cases), and are at most marginally more complex to implement. For this reason, I think the article could be improved by simply showing one of these algorithms instead.
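For readers who want to see the merge step spelled out, here is a sketch of the same halving-and-merging structure in Python (illustrative, mirroring mergesort/2 and merge/3 above):

```python
def mergesort(xs):
    """Split in half, sort each half, then merge -- mirrors the halving/merge
    structure of the Prolog version."""
    if len(xs) <= 1:
        return list(xs)
    half = len(xs) // 2
    return merge(mergesort(xs[:half]), mergesort(xs[half:]))

def merge(ls, rs):
    """Merge two sorted lists into one sorted list, as merge/3 does."""
    out, i, j = [], 0, 0
    while i < len(ls) and j < len(rs):
        if ls[i] <= rs[j]:
            out.append(ls[i]); i += 1
        else:
            out.append(rs[j]); j += 1
    return out + ls[i:] + rs[j:]

print(mergesort([5, 3, 8, 1, 2]))  # → [1, 2, 3, 5, 8]
```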
We cannot consider it a drawback of Prolog that computing all permutations takes O(n!) time in this language. It's the same for C, for example. If someone wants faster sorting, we can implement it also in a declarative language!
Somewhat strangely, the fact that you can easily write a Prolog program to generate all N! permutations of a list with N elements is more likely to be counted as a disadvantage than as an advantage, in my experience. If people looked at C and Prolog a bit more fairly in such overview documents, they could as well count it as a disadvantage of C that such a program is harder to write in C than in Prolog. Yet, I have never seen a C program implementing permutation sort in such articles, and never seen a comparable argument or suggestion about C as I routinely see it about Prolog.
Permutation sort is slow in C! Well, that's a well-known shortcoming of an imperative language such as C.
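For the record, permutation sort is just as easy to write in an imperative language; here is a sketch in Python, every bit as O(n!) as the Prolog version:

```python
from itertools import permutations

def permutation_sort(xs):
    """Try permutations until one happens to be ordered -- O(n!) worst case."""
    for p in permutations(xs):
        if all(a <= b for a, b in zip(p, p[1:])):
            return list(p)
```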
I think two of the OP's notes are covered in Mozart/Oz - concurrent programming using dataflow variables (roughly equivalent to promises) and logic programming, including finite domain constraints.
For finite domain constraints, such guarantees are a major attraction in the Prolog community: If setting up the constraints terminates, then setting up the constraints plus search also terminates. That's a rather valuable and strong guarantee (since it is the search that can take infeasibly long more often than setting up the constraints), and it breaks down if the search itself is more generally programmable.
Now, to your point, this is really the only way you can solve sudoku. There is an odd belief that you can make an algorithm that "never guesses." And folks think that that would be the definition of a non-brute force algorithm. In reality, you either have a solver that is ready to backtrack, or you have one that can't solve all puzzles.
Of course that does not hold in general, but in the special case of Sudokus and problem modelling by a domain-consistent all-different propagator, no search is required for a solution.
[1] A filtering algorithm for constraints of difference in CSPs, J-C. Régin, 1994
    | ?- sudoku(X, Y).

This is called the most general query, since all arguments are fresh variables. Declaratively, we are asking Prolog: "Are there any solutions whatsoever?" In this case, the system answers with:

    X = [_#3(1..4),_#24(1..4),...etc.]
    Y = [_#3(1..4),_#24(1..4),...etc.]
This shows that the Prolog program did not perform any search at all: no concrete value is instantiated, and the system does not ask for alternatives. That's right! No search whatsoever, and no backtracking at all, is performed in this program. No matter which definition of brute force search you apply, this definitely does not fall under "search" at all!

In the article, a more concrete query is also shown, as well as its solution. This solution is found via constraint propagation alone. This means that the system has deduced the unique solution purely by reasoning about the available constraints, which it does automatically every time a constraint is posted. This is one example of an algorithm that does not guess.
This also shows that even supposing you cannot make an algorithm that "never guesses", you can definitely make an algorithm that has to do much less guessing than searching over the whole tree. In Prolog, such algorithms are implemented and encapsulated in powerful combinatorial constraints like all_distinct/1, which are available in many Prolog systems and which you can use in your applications. Internally, the associated constraint propagators prune the search tree by applying various algorithms for you behind the scenes. The ability to use such constraints and their strong pruning are major attractions of using Prolog for combinatorial tasks.
You are right that there may be cases where such propagation, albeit quite strong, does not suffice to fully solve a concrete combinatorial task. In such cases, you have to fall back to a concrete enumeration of the remaining variables. In Constraint Logic Programming over integers, this search is called labeling and is provided by predicates like fd_labeling/2 or similar, depending on your Prolog system. The article does not use them, though, and even if it did, the search could be guided by various heuristics simply by supplying a few options, which together with the pruning applied by the constraints distinguish such a search from more uninformed search strategies.
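To make the propagation idea concrete, here is a toy sketch in Python of the weakest form of all-different pruning (all names here are made up, and real propagators such as Régin's matching-based filtering prune far more than this):

```python
def propagate_all_different(domains, groups):
    """Naive all_different pruning: once a variable's domain is a single value,
    remove that value from the other variables in its group; repeat to a fixed
    point. No guessing, no backtracking -- pure deduction."""
    changed = True
    while changed:
        changed = False
        for group in groups:
            for v in group:
                if len(domains[v]) == 1:
                    (val,) = domains[v]
                    for w in group:
                        if w != v and val in domains[w]:
                            domains[w].discard(val)
                            changed = True
    return domains

# Hypothetical 1x4 "mini row": three hints fully determine the fourth cell.
doms = {"a": {1}, "b": {3}, "c": {4}, "d": {1, 2, 3, 4}}
propagate_all_different(doms, [["a", "b", "c", "d"]])
print(doms["d"])  # → {2}
```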
Note also that the posted solution is quite hard to generalize elegantly. A shorter and equivalent Prolog program that describes 4x4 Sudoku puzzles is:
    sudoku(Rows) :-
        Rows = [Row1,Row2,Row3,Row4],
        maplist(same_length(Rows), Rows),
        blocks(Row1, Row2), blocks(Row3, Row4),
        append(Rows, Vs),
        fd_domain(Vs, 1, 4),
        maplist(fd_all_different, Rows),
        transpose(Rows, Cols),
        maplist(fd_all_different, Cols).

    blocks([], []).
    blocks([A1,A2|As], [B1,B2|Bs]) :-
        fd_all_different([A1,A2,B1,B2]),
        blocks(As, Bs).
This assumes the availability of append/2 and transpose/2, which are likely already provided by your Prolog system, and easy to implement if they aren't.

Isn't this why sudoku is still considered NP-hard? (unless there has been a recent breakthrough)
Might try making a toy language out of it eventually.
# synchronous
sleep 1 ;
sleep 2 ;
# parallel
sleep 1 &
sleep 2 &
wait
wait
The results have to be files... in shell you think of the file system as your "variables", so everything is somewhat global.

Would you build something like this from scratch, or base it on another tech stack like the JVM?
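For comparison, the same synchronous-vs-parallel distinction can be sketched with Python's asyncio (just an illustration, not a suggestion for the implementation):

```python
import asyncio
import time

async def main():
    # Parallel: total time is the max of the two sleeps (~2s), like
    # `sleep 1 & sleep 2 & wait`. Run sequentially it would be ~3s.
    start = time.monotonic()
    await asyncio.gather(asyncio.sleep(1), asyncio.sleep(2))
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"elapsed: {elapsed:.1f}s")  # roughly 2 seconds, not 3
```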
Forth is known by name and reputation, but very few people have ever tried to write anything in it - let alone delve into its strange bootstrapped nature.
Incidentally, Aurora, mentioned in the article, evolved into Eve, which is inspired heavily by Datalog.
As someone who loves formal methods and believes most mainstream software systems today are mostly interactive and/or synchronous, I think this paradigm has a lot of untapped potential, and I'm glad to see it slowly move out of safety-critical systems into the mainstream, in languages like Eve.
[1]: https://en.wikipedia.org/wiki/Synchronous_programming_langua...
[2]: https://en.wikipedia.org/wiki/Esterel
[4]: http://witheve.com/
[5]: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-...
Hardware is also concurrent by default. Coding some hardware logic will also change the way you approach coding. Anyone who's interested should get an FPGA demo board and write some Verilog or VHDL - I highly recommend it.
1. https://en.wikipedia.org/wiki/Hardware_description_language
Of course, always remember that "can be compiled" != "can be synthesized".
square = code(dup, op("*"))
Add a bit of syntactic sugar and you'll get a "concatenative" - or better, stack combinator-oriented - language.
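Here is a minimal Python sketch of that snippet's idea - `code`, `dup`, and `op` are hypothetical names echoing the line above:

```python
def dup(stack):
    """Duplicate the top of the stack."""
    stack.append(stack[-1])

def op(symbol):
    """Return a word that pops two values and pushes the result."""
    def apply(stack):
        b, a = stack.pop(), stack.pop()
        stack.append({"*": a * b, "+": a + b}[symbol])
    return apply

def code(*words):
    """A program is just the concatenation of its words."""
    def run(stack):
        for word in words:
            word(stack)
        return stack
    return run

square = code(dup, op("*"))
print(square([7]))  # → [49]
```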
As for Prolog, I really appreciate that the author is calling it "declarative" and not a "logic" language. It's more important to learn about backtracking and unification (a powerful variant of pattern matching) than about something like Horn clauses. For anyone who wants to learn Prolog better (and it's worth it, Prolog is one of the most beautiful PLs around!) I recommend this article: http://www.amzi.com/articles/prolog_under_the_hood.htm
Declarative is a vague term, but to me it mostly means a way to compactly describe your problem in domain-specific terms. There are some languages in which you can produce compact code, like APL or J. But declarative for me also means readable. And there are cases where in Prolog one can write more compact and readable code than in Haskell. In Prolog you can describe just the essence of the problem. Not always, of course.
Another interesting thing about Prolog is that it's a small language. It has only a few internal parts that make it tick (the complexity of many Prolog implementations is a result of fighting for performance). I really like small languages (like Oberon or Forth, for example), because it's possible to learn how they work internally. And knowledge of this inner working helps to understand the language better. There is nothing "logical" inside Prolog, just a few powerful imperative constructs.
The author of "Prolog Under the Hood An Honest Look" says:
"Prolog, billed as "logic programming", is not really. You may be disappointed if that's what you expected to find. On the other hand, having backtracking, unification, and recursion inside one computer language leads to something very powerful and special."
And I, personally, like Prolog terms very much!
[1] https://arxiv.org/abs/1501.00720 [2] https://www.researchgate.net/profile/Alexandr_Savinov
Disclaimer: I am the author of concept-oriented programming and data model
What I see as similar in the object-capability model and COP is that they both treat object access as a highly important part of a system's behavior. In COP, I used to write that what happens during access is more important than what happens in the object itself. Hence, being able to make these intermediate actions an integral part of the program (and to develop a good PL for that purpose) is very important.
https://youtu.be/VZQoAKJPbh8 is a very good talk about the background of Eve. When he finally talks about Eve, you might want to switch to a more recent talk about it.
There's going to be a lot of really exciting stuff coming over the next few months. We've gathered a set of ideas, evidence, and implementations that have certainly blown us away - we hope others will find it valuable too.
EDIT: I realized I didn't address your initial question. Fwiw, we just recently crossed a really big milestone in terms of usage - more than 40,000 people have now played around with Eve on http://play.witheve.com and we've learned a ton from that experience. Part of the website revamp will be making that workflow simpler and nicer. We have a surprisingly high conversion rate (> 30%), so hopefully we can help smooth out that experience and begin to grow the community more and more.
[1]: http://witheve.com
Forth is a great concatenative language, since it's the pioneer in that area (I think), but Factor is definitely worth mentioning too as a "modern" take on the paradigm. It essentially tries to be a concatenative Lisp.
ANI was dead even in 2014 when this article was written (which the author acknowledges: "the language seems dead, but the concepts are pretty interesting"). It has some really interesting ideas, but since it never got implemented, I'm not sure how much use there is in discussing it here amongst real languages. It would be useful in a discussion of possible future languages for sure, but it's currently still just a concept, so I'm not sure what practical thing you can learn from it right now.
If every language has its own specific dark patterns and bottlenecks, LabVIEW's is definitely the "brightly-colored spaghetti" structural breakdown of an advanced beginner's code :-)
Incidentally, why do we, as programmers, tend to focus on a language's bottlenecks so much, in such an emotionally charged way? Any psychology-of-programming people out here? You might consider LabVIEW an excellent case study in getting on people's nerves...
ATS is aimed at system programming, and if you think Idris has a steep learning curve, you'll need belaying for ATS. And, the language is horrible. But it's really mind expanding to see it in action.
Dafny is a really basic language with support for Hoare/Dijkstra verification. It's completely unlike the type-system-based approaches.
Stateflow or QP/QM, all other systems suck.
And what are the drawbacks? Why isn't everybody using it?
Drawbacks: it's very un-agile; you really have to think the system through completely. (The magic being that if you do this, it is very likely correct by design.) It's not really feasible to specify a part of the system now and leave other parts open for later refinement. The other drawback being that no good non-commercial options exist.
The section on "symbolic programming" has me pondering still about potential implications. It makes me imagine something like a "visual" WYSIWYG editor, but a "conceptual" editor.. Looking forward to digging deeper via provided references.
There are some production-ready schedulers for GPP, like Intel's TBB[1] (C++), but learning to be effective with this requires a major shift in thinking about code - essentially thinking in graphs.
[1] - https://www.threadingbuildingblocks.org/tutorial-intel-tbb-f...
[1] https://github.com/janusassetallocation/loman [2] http://loman.readthedocs.io/en/latest/user/intro.html
[1] - https://software.intel.com/en-us/blogs/2011/01/10/using-the-...
[2] - https://software.intel.com/en-us/blogs/2011/09/13/using-inte...
https://youtu.be/275FQ9koAw8?t=7647
Very interesting view, especially considering how long ago that was and how relevant they still are.
[1] They were also the creators of JGL, the Java Generics Library, which was like a Java version of the C++ STL, and done before Java got generics natively.
> Dependent types
>
> Example languages: Idris, Agda, Coq
>
> You’re probably used to type systems in languages like C and Java,
> where the compiler can check that a variable is an integer, list, or string.
> But what if your compiler could check that a variable is “a positive integer”,
> “a list of length 2”, or “a string that is a palindrome”?
This is what I love about SQL. You can even define your own types, like "email", at least in PostgreSQL:

    create table contacts (
        name text not null,
        age int check (age >= 0),
        email email
    );
It already has a few of these special types built in, like for IP and MAC addresses (https://www.postgresql.org/docs/current/static/datatype.html).

Thus, a dependently typed database would allow one column's type to depend on the value of another column.
It would be like saying:

    create table dep (
        name text,
        fieldType int not null,
        fieldValue (if fieldType = 1 then int else varchar)
    );
Notice the 'if' in the type of fieldValue.

To help offset the irony and celebrate Turing completeness, here is a PostScript interpreter implemented in PostScript, and an explanation of why:
http://www.donhopkins.com/home/archive/cyber/litecyber/ps.ps...
It reminds me of a statement I read about managing application state, to treat the state (in this case a Redux store) as an "in-memory database". Add a layer to load/persist automatically - via LocalStorage, WebSocket, etc. - and it would be persistent by default. I suppose you wouldn't want everything persistent though, just a relevant slice of state.
Here's an article about "persistent languages", which includes discussion on related features. http://wiki.c2.com/?PersistentLanguage
First, declarative programming is a generic name which includes a broad range of paradigms - from functional to logic programming. Logic programming is something that deserves a special mention and discussion, because there are a number of interesting and unique concepts that deserve a more in-depth explanation.
Second, "dependent types" is better understood as a feature of a language (or better yet, of a type system) than a paradigm by itself.
Some of the other "paradigms" also seem more like characteristics of languages, and not really something that structures the way solutions are expressed/understood.
Edit: to put a slightly finer point on it, this irreducible foundation mostly has to do with how the language "computes." Imperative languages compute via statements that modify program state. Functional languages compute via proof reduction to normal form. Logic languages compute via proof search.
https://esolangs.org/wiki/Rail https://esolangs.org/wiki/Billiard_ball_machine
[1] https://github.com/imatix/gsl [2] https://news.ycombinator.com/item?id=11558007
In Python, PyContracts supports runtime type-checking and value constraints/assertions (as @contract decorators, annotations, and docstrings).
https://andreacensi.github.io/contracts/
Unfortunately, there's as yet no unifying syntax between PyContracts and the newer Python type annotations, which MyPy checks at compile time.
https://github.com/python/typeshed
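As a rough illustration of the runtime-contract idea (a toy decorator, not PyContracts' actual API):

```python
import inspect

def contract(**checks):
    """Toy contract decorator: run a predicate on each named argument at
    call time, raising if the contract is violated."""
    def wrap(fn):
        sig = inspect.signature(fn)
        def checked(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, pred in checks.items():
                if not pred(bound.arguments[name]):
                    raise ValueError(f"contract violated for {name!r}")
            return fn(*args, **kwargs)
        return checked
    return wrap

@contract(n=lambda n: isinstance(n, int) and n > 0)
def countdown(n):
    return list(range(n, 0, -1))

print(countdown(3))  # → [3, 2, 1]
```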
What does it mean for types to be "a first class member of" a programming language?
A parallel pair of AND gates is physically concurrent, let alone anything with a clock.
[1]: http://witheve.com
Robust-First Computing: Distributed City Generation https://www.youtube.com/watch?v=XkSXERxucPc
A Movable Feast Machine [1] is a "Robust First" asynchronous distributed fault tolerant cellular-automata-like computer architecture.
The video "Distributed City Generation" [2] demonstrates how you can program a set of Movable Feast Machine rules that build a self-healing city that fills all available space with urban sprawl, and even repairs itself after disasters!
The paper "Local Routing in a new Indefinitely Scalable Architecture" [3] by Trent Small explains how those rules work, how the city streets adaptively learn to route the cars to nearby buildings they desire to find, and illustrates the advantages of "Robust First" computing:
Abstract: Local routing is a problem which most of us face on a daily basis as we move around the cities we live in. This study proposes several routing methods based on road signs in a procedurally generated city which does not assume knowledge of global city structure and shows its overall efficiency in a variety of dense city environments. We show that techniques such as Intersection-Canalization allow for this method to be feasible for routing information arbitrarily on an architecture with limited resources.
This talk "Robust-first computing: Demon Horde Sort" [4] by Dave Ackley describes an inherently robust sorting machine, like "sorting gas", implemented with the open source Movable Feast Machine simulator, available on github [5].
A Movable Feast Machine is similar in many ways to traditional cellular automata, except for a few important differences that are necessary for infinitely scalable, robust first computing.
First, the rules are applied to cells in random order, instead of all at once sequentially (which requires double buffering). Many rule application events may execute in parallel, as long as their "light cones" or cells visible to the executing rules do not overlap.
Second, the "light cone" of a rule, aka the "neighborhood" in cellular automata terms, is larger than in typical cellular automata, so the rule can see other cells several steps away.
Third, the rules have write access to all of the cells in the light cone, not just the one in the center like cellular automata rules. So they can swap cells around to enable mobile machines, which is quite difficult in cellular automata rules like John von Neumann's classic 29 state CA. [6] [7]
Fourth, diffusion is built in. A rule may move a particle to another empty cell, or swap it with another particle in a different cell. And most rules automatically move the particle into a randomly chosen adjacent cell by default. So the particles behave like a gas moving with Brownian motion, unless biased by "smart" rules like Maxwell's Demon - for example, the "sorting gas" described in the Demon Horde Sort video.
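Here is a toy Python sketch of that built-in diffusion (random, asynchronous events swapping particles with neighbors on a wrapped grid; an illustration only, nothing like the real MFM event model):

```python
import random

def diffuse(grid, steps, rng):
    """Each event picks a random cell; if it holds a particle, swap it with a
    randomly chosen neighbor -- Brownian motion by default."""
    h, w = len(grid), len(grid[0])
    for _ in range(steps):
        r, c = rng.randrange(h), rng.randrange(w)
        if grid[r][c] is not None:
            dr, dc = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            nr, nc = (r + dr) % h, (c + dc) % w  # wrap around the edges
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
    return grid

rng = random.Random(0)
grid = [["A", None, None], [None, None, None], [None, None, "B"]]
diffuse(grid, 100, rng)  # the two particles wander randomly, never lost
```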
In this video "Robust-first computing: Announcing ULAM at ECAL 2015" [8], David Ackley explains why "Robust First" computing and computing architectures like Movable Feast Machines are so incredibly important for scaling up incredibly parallel hardware.
I think this is incredibly important stuff in the long term, because we've hit the wall with determinism, and the demos are so mind blowing and visually breathtaking, that I want to try programming some of my own Movable Feast Machine systems!
[1] http://movablefeastmachine.org/
[2] https://www.youtube.com/watch?v=XkSXERxucPc
[3] http://www.cs.unm.edu/~ackley/papers/paper_tsmall1_11_24.pdf
[4] https://www.youtube.com/watch?v=helScS3coAE
[5] https://github.com/DaveAckley/MFM
[6] https://en.wikipedia.org/wiki/Von_Neumann_cellular_automaton
[7] https://en.wikipedia.org/wiki/Von_Neumann_universal_construc...
The definition I like to use of concatenative language is:
If "X Y" is a legal program, then "X" is a list of tokens and a legal program and "Y" is a list of tokens and a legal program, and semantically, executing "X Y" is equivalent to executing "X" and then executing "Y".
Practical implementations often deviate a little from this ideal.
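That composition law is easy to demonstrate with a toy stack language in Python (a hypothetical sketch, not any particular concatenative language): running "X Y" gives the same result as running X and then running Y on the resulting stack.

```python
def run(program, stack=()):
    """Execute a whitespace-separated program: numbers push themselves,
    '+' and '*' pop two values and push the result."""
    stack = list(stack)
    for token in program.split():
        if token == "+":
            stack.append(stack.pop() + stack.pop())
        elif token == "*":
            stack.append(stack.pop() * stack.pop())
        else:
            stack.append(int(token))
    return stack

X, Y = "2 3 +", "4 *"
assert run(X + " " + Y) == run(Y, run(X))  # "X Y" == X, then Y
print(run(X + " " + Y))  # → [20]
```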
Some of these languages are definitely suitable for "serious" usage. And I'm not sure I'd count SQL (or Prolog, for that matter) as esoteric at all!
Nonsense. The article's premise is explicitly that the listed paradigms will change how you think about coding. The author may or may not be correct about that, but the original title was not random hyperbole to entice a click - it was an accurate description of the article's premise and content.