Certainly, people manage to build poorly performing things even so, but at the base level, your primitives may be stupid, but they are generally fast.
The logic programming world works in a default space of O(n) operations that stack together more freely than in the imperative world, and that gives easy access to O(2^n). Since exponential time is essentially intractable, a great deal of work goes into trying to get that back down, but you're always intrinsically starting behind the eight ball. For a normal programmer, it is easier to work up from O(1) operations than to write an exponential or super-exponential algorithm and then try to trim it back down to a decent complexity.
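The classic toy illustration of this is permutation sort (a Python sketch with hypothetical function names, not anyone's real code): stacking two innocuous relations — "P is a permutation of Xs" and "P is ordered" — yields a declaratively correct sort whose default search is factorial, while the imperative default is n log n.

```python
from itertools import permutations

def sort_generate_and_test(xs):
    # "P is a permutation of xs, and P is ordered" -- declaratively correct,
    # but the default search visits up to n! candidates.
    return next(p for p in permutations(xs)
                if all(a <= b for a, b in zip(p, p[1:])))

def sort_direct(xs):
    # The imperative/functional default: a direct O(n log n) algorithm.
    return tuple(sorted(xs))
```

Both agree on the answer; only the work done to find it differs.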
I think this is the root cause as to why logic programming is generally not something we see a lot of. It's like a wild stallion; it may be powerful but the amount of effort you pour into just keeping it under control may exceed any benefit you could get.
It isn't useless, of course. Amazing work has been done in the field of SAT solvers, and there's certainly a niche for it. The problems that are intrinsically higher-order polynomials or (technically) exponential, well, they are what they are and if you're going to be stuck in that world, logic programming may offer you a much better toolset than conventional programming on its own. But there was a hope a long time ago, in the Prolog era, that it could become part of the normal toolkit of general purpose programming, and I don't think that will ever happen, because of this line of logic.
This is a bit tangential to the article, it's just what I happened to read that finally crystallized this in my mind.
Logic programming had numerous setbacks. The Japanese thought they could parallelize Prolog programs for their Fifth Generation Computer Systems project in the 1980s but quickly found out that you couldn't. They made a language called KL1 which was parallelizable, but it wasn't as good as Prolog in other respects.
The ability to implement simple parsers in Prolog using its evaluation process impressed me a lot when I didn't know much about parsers; now that I know how to implement parsers, it doesn't impress me much.
You can write mixed logical/imperative problems in Prolog but boy is it awkward.
When I got interested in RDF circa 2010 I was interested in Datalog (a pure logical language) but found nobody else seemed interested in it and it was hard to find literature on it. That situation has changed a lot as Datalog is a pretty clear way to make database query code composable.
Production rules engines (say Drools) are another "old A.I." technology that is largely forgotten even though they are heavily used in banks and a few other corners of the business world. These implement "forward chaining" inference, distinct from the "backward chaining" inference implemented in Prolog. Between RETE engines and effective indexing structures it is easy to handle 1000x more rules in your knowledge base than in the 1980s, but production rules never got standardized like C, FORTRAN, or COBOL, and no really general answers have come up for questions like controlling the order of execution when that matters.
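As a rough sketch of what forward chaining means (naive fixpoint iteration in Python, without the RETE-style indexing that makes real engines fast): start from known facts and fire any rule whose premises hold, repeating until nothing new is derivable.

```python
def forward_chain(facts, rules):
    # rules: list of (premises, conclusion) pairs; fire any rule whose
    # premises are all established facts, and repeat to a fixpoint.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# From fact "a", two rules chain forward; the rule needing "x" never fires.
derived = forward_chain({"a"}, [({"a"}, "b"), ({"b"}, "c"), ({"x"}, "y")])
```

Backward chaining (Prolog) instead starts from the goal and works back toward facts.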
One of the lessons I'm still trying to absorb is how convinced we all were (and many still are) that there must be a ton of implicit parallelism in the world, but once we went looking for it, it turned out there was hardly any.
I've been chewing on this for years, trying to figure out whether this is something true about the world, true about the problems we try to solve, or because of the pervasive use of the imperative paradigm which is highly ordered and makes it very easy to impose ordering constraints. Now, clearly, the latter is a non-trivial component... but I still struggle with whether it is a full answer, because when people sat down with clean sheets of paper and tried to solve problems with extensive parallelism, they've largely failed to get more than low single-digit speedup factors. Non-imperative paradigms like logic programming have been tried, especially in these projects.
It isn't exactly news that our intuitive beliefs about our code and its real characteristics are quite out of whack. This underlies the pervasive advice to just go grab a profiler whenever you're trying to accelerate some code: even experts in the field with decades of experience are routinely completely wrong about what is slow in some bit of code. This is one particular aspect I'm still struggling with. It still feels like there should be so much more parallelism available...
Exhaustively searching an exponential space with no heuristics for reducing the search space is obviously gonna be way easier in a familiar language than in a new language.
The one use case I do see for prolog and similar tools in the modern day is for Constraint Satisfaction programming. Obviously, exhaustively searching an exponential solution space won’t be better for that, but typically you can use concepts like Edge and Arc consistency (which are core to CSP solvers) to greatly reduce the actual search space. Implementing those algorithms from scratch in an imperative language is annoyingly hard, so reaching for optimized, generalized implementations can be a good choice.
For something like a Sudoku solver, edge/arc consistency can greatly improve compute times. If you’re good at sudoku, you are already implementing these techniques when solving puzzles, just like good chess players implement minimax search without having to be told what minimax search is.
https://www.sciencedirect.com/topics/computer-science/arc-co...
Working on open source projects with other Prolog enthusiasts is a bit different because that crowd self-selects for being good with Prolog.
The problem with logic programming is the 'logic' part.
Modern imperative and functional programming languages are constructive.
Logic programming is not, and the expressive power varies with the exact logic being used.
Elementary logic gives you the following intuition: propositional logic -- easy. First-order logic -- easy (with caveats). And (some kinds of) second-order logic -- only easy if you get lucky with the problem you are trying to solve.
For logic based programming languages and systems, both the language implementer and the programmer have to be careful about supporting and using language constructs which boil down to tractable computation.
This is much more difficult than it seems.
For example, when using SMT solvers you quickly learn that multiplication and division by constants is very fast, while the same operations with variables can be intractable. I.e., reasoning about x * C is easy, while x * y is often (but not always) going to hang.
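A rough way to feel the asymmetry in plain arithmetic (an analogy, not how an SMT solver actually works): with a known constant, the equation inverts directly; with two unknowns, you're searching a space of factorizations.

```python
def solve_const(C, k):
    # x * C == k with C a known constant: invert directly, O(1).
    return k // C if k % C == 0 else None

def solve_vars(k):
    # x * y == k with both unknown: enumerate factor pairs, O(k).
    return [(x, k // x) for x in range(1, k + 1) if k % x == 0]
```

Linear terms like x * C stay inside decidable linear arithmetic; x * y makes the problem nonlinear, which is where solvers start to hang.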
Having these tools close at hand, the same way we have regexes and (more recently) PEGs for grammars, definitely makes it more likely that people will reach for them when appropriate, and exposure to and familiarity with the practices can grow.
I suspect part of the problem is just that miniKanren is primarily a learning tool and doesn't prioritize practical ergonomics. I haven't used core.logic, but I've used miniKanren in this embedded way, and it's difficult to do cleanly in ways that have nothing to do with logic programming per se. I may be off, but I think this is something that a high-quality, professional-strength implementation in the stdlib could actually help with.
Huh? This isn't remotely true, especially today, for the type of programming most people do, where they work with functions and classes from libraries, with each line doing all kinds of O(?) stuff under the hood, applying those to collections of objects, and so on.
If that's what core.logic is, then core.logic is not logic programming.
Logic programming refers to one of two things: either Prolog-like languages where programs are sets of definite clauses executed by SLD-Refutation of a goal, and evaluation returns all bindings of the variables in the goal that make the goal true; or Answer Set Programming (ASP), where programs are similar to definite programs, but the semantics are stable model semantics and evaluation finds all the stable models of a program.
Both of these approaches incorporate search, but neither is "just brute force search", in any way, shape or form.
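One way to see why it isn't just brute force: unification (sketched minimally below in Python; variables are strings starting with "?", and the encoding is my own) binds variables deterministically the moment two terms are matched, so whole subtrees of candidate bindings are never enumerated at all.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    # Follow variable bindings in substitution s to their final value.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Return an extended substitution making a and b equal, or None on failure.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# Matching two partially known terms solves for both unknowns in one step:
bindings = unify(("house", "red", "?who"), ("house", "?color", "english"), {})
```

SLD resolution is this matching step driving clause selection; search only happens across alternative clauses, not across alternative bindings.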
For example, see the solution to the Zebra Puzzle here: https://www.metalevel.at/prolog/puzzles which uses CLPZ[^1].
zebra(Houses) :-
    houses(Houses),
    member(house(red, english, _, _, _), Houses),
    member(house(_, spanish, dog, _, _), Houses),
    member(house(green, _, _, coffee, _), Houses),
    member(house(_, ukrainian, _, tea, _), Houses),
    right_of(house(green, _, _, _, _), house(ivory, _, _, _, _), Houses),
    member(house(_, _, snails, _, winstons), Houses),
    member(house(yellow, _, _, _, kools), Houses),
    Houses = [_, _, house(_, _, _, milk, _), _, _],
    Houses = [house(_, norwegian, _, _, _)|_],
    next_to(house(_, _, _, _, chesterfields), house(_, _, fox, _, _), Houses),
    next_to(house(_, _, _, _, kools), house(_, _, horse, _, _), Houses),
    member(house(_, _, _, orange_juice, lucky_strikes), Houses),
    member(house(_, japanese, _, _, parliaments), Houses),
    next_to(house(_, norwegian, _, _, _), house(blue, _, _, _, _), Houses),
    member(house(_, _, zebra, _, _), Houses),
    member(house(_, _, _, water, _), Houses).

houses([
    house(_, _, _, _, _),
    house(_, _, _, _, _),
    house(_, _, _, _, _),
    house(_, _, _, _, _),
    house(_, _, _, _, _)]).

right_of(A, B, [B, A | _]).
right_of(A, B, [_ | Y]) :- right_of(A, B, Y).

next_to(A, B, [A, B | _]).
next_to(A, B, [B, A | _]).
next_to(A, B, [_ | Y]) :- next_to(A, B, Y).

member(X, [X|_]).
member(X, [_|Y]) :- member(X, Y).

?- zebra(Houses).
To check it out yourself, copy/paste this into https://quantumprolog.sgml.io/browser-demo/browser-demo.html and execute in your browser.

solution(Pairs, Water, Zebra, Vs) :-
    Table = [Houses, Nations, Drinks, Smokes, Animals],
    Houses = [Red, Green, Yellow, Blue, Ivory],
    Nations = [England, Spain, Ukraine, Norway, Japan],
    Names = [england, spain, ukraine, norway, japan],
    Drinks = [Coffee, Milk, OrangeJuice, Tea, Water],
    Smokes = [OldGold, Kools, Chesterfield, LuckyStrike, Parliaments],
    Animals = [Dog, Snails, Horse, Fox, Zebra],
    pairs_keys_values(Pairs, Nations, Names),
    maplist(all_distinct, Table),
    append(Table, Vs),
    Vs ins 1..5,
    England #= Red,              % hint 1
    Spain #= Dog,                % hint 2
    Coffee #= Green,             % hint 3
    Ukraine #= Tea,              % hint 4
    Green #= Ivory + 1,          % hint 5
    OldGold #= Snails,           % hint 6
    Kools #= Yellow,             % hint 7
    Milk #= 3,                   % hint 8
    Norway #= 1,                 % hint 9
    next_to(Chesterfield, Fox),  % hint 10
    next_to(Kools, Horse),       % hint 11
    LuckyStrike #= OrangeJuice,  % hint 12
    Japan #= Parliaments,        % hint 13
    next_to(Norway, Blue).       % hint 14

next_to(H, N) :- abs(H - N) #= 1.
It's subjective, but the above is far clearer about the rules to me. Compare `next_to` in both:

next_to(Kools, Horse)

next_to(house(_, _, _, _, kools),
        house(_, _, horse, _, _), Houses)

The former is much more straightforward as a representation of the hint. You don't need to know which entry in a `house` corresponds to cigarette brand and animal; you can just directly compare the two and say Kools is next to Horse. :P
You can override/customize Prolog's default search strategy as desired, Prolog being Turing-complete. Prolog syntax and resolution in this case just provide a straightforward starting point, which is much needed as you explore the complexities and challenges of your problem domain to kick off a project (you know, as opposed to prematurely optimizing a program for transforming your DSL into a SAT/SMT formulation). I think Prolog works very well for this.
Note Prolog also has libraries for offloading to SAT solvers, and e.g. Z3 supports a Datalog subset as an alternative to the SMT-LIB/Lisp-like syntax.
Prolog is a similar (even stronger) example of a high-level declarative language with an optimizing runtime, but (I guess) isn't as optimized. A Prolog runtime could incorporate a SAT or SMT solver internally, without changing the language, right?
Code:
j24([X], R, H) :- !, X =:= R, H = X.
j24(L, R, H) :-
    select(X1, L, L1),
    x2(X1, Op, X2, R),
    j24(L1, X2, H1),
    H =.. [Op, X1, H1],
    tj24(Op, X1, H1).

tj24(Op, X1, H1) :-
    member(Op, ['+', '*']),
    H1 =.. [Op, Y1, _],
    integer(X1), integer(Y1),
    !, X1 =< Y1.
tj24(_, _, _).

x2(X1, '+', X2, R) :- X2 is R - X1, X1 =< X2.
x2(X1, '-', X2, R) :- X2 is X1 - R.
x2(X1, '*', X2, R) :- X1 =\= 0, X2 is R / X1, X1 =< X2.
x2(X1, '/', X2, R) :- R =\= 0, X2 is X1 / R.
Example:
?- j24([2,5,7,11,20], 280, L).
L = 5+11*(7+(20-2)) ;
L = 5+11*(7-(2-20)) ;
L = 5+11*(20-(2-7)) ;
L = 5*(20+2*(7+11)) ;
L = 7-(2-11*(5+20))

and so on.
A little more complex:
?- j24([2,3,4,7,11,13,17,23,27,31],7239,Solution).
Solution = 2+(3+(11+31*(7-(17-27/(4/(13+23))))))
When L = [2,3,4,7,11,13,17,23,27,31,37,43,47] and Result is 1117239, one solution is
2+(3+(4+(27+(47+23*(43+13*(37+7*(11*(17+31))))))))
Edited: I made a small change to prune solutions:

x2(X1, '+', X2, R) :- X2 is R - X1, X1 =< X2.
x2(X1, '-', X2, R) :- X2 is X1 - R, X2 > 0.
x2(X1, '*', X2, R) :- X1 =\= 0, X2 is R / X1, X1 =< X2.
x2(X1, '/', X2, R) :- integer(R), R =\= 0, 0 is X1 mod R, X2 is X1 / R.
The program does not compute all the solutions. When the parameter Result is relatively small it is easier to obtain a problem with many solutions, so the algorithm is fast. The Prolog program computes a solution in one or two seconds; what would a similar Python program that solves this problem in fewer than 10 lines look like?
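For comparison, here is a rough Python sketch of the same generate-and-test search (my own translation, not a known reference solution; the operator/subgoal mapping mirrors the x2/4 clauses above, and Fraction keeps division exact). It takes well over 10 lines and does none of the tj24-style symmetry pruning:

```python
from fractions import Fraction

def j24(nums, target):
    """Yield right-nested expressions over nums (each used once) equal to target."""
    def go(remaining, goal):
        if len(remaining) == 1:
            if Fraction(remaining[0]) == goal:
                yield str(remaining[0])
            return
        for i, x in enumerate(remaining):
            rest = remaining[:i] + remaining[i + 1:]
            fx = Fraction(x)
            # Same mapping as x2/4: e.g. x + e = goal  =>  solve e = goal - x.
            subgoals = [('+', goal - fx), ('-', fx - goal)]
            if fx != 0:
                subgoals.append(('*', goal / fx))
            if goal != 0:
                subgoals.append(('/', fx / goal))
            for op, sub in subgoals:
                for e in go(rest, sub):
                    yield f"{x}{op}({e})"
    yield from go(list(nums), Fraction(target))

solutions = set(j24([2, 5, 7, 11, 20], 280))   # includes 5+(11*(7+(20-(2))))
```

The search shape is identical; Prolog just gets the nondeterminism, backtracking, and expression construction (=..) for free from the language.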
Once you have a way of encoding the clues into a machine-readable syntax, it’s trivial to solve. But, the tricky part is creating a good puzzle that can be solved by a human without guessing. That’s an art form. A brute-force solver won’t suffice to analyze your randomly generated sets of clues; instead, you need a solver that is only capable of making the types of deductions a human could make. Very subjective.
If you have a suitably interested child, this could provide a fun bridge to constraint based programming.
This is just not correct (or at least phrased VERY poorly). My understanding is that core.logic uses miniKanren under the hood, which is guaranteed to find an answer if one exists regardless of goal ordering. Unlike Prolog, miniKanren (and presumably core.logic) does not search depth-first (which is why Prolog can run into infinite loops very easily). Instead, it roughly does interleaving BFS. It can search inefficiently, but that's true of any combinatorial method.
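A minimal sketch of the difference using Python generators as answer streams (hypothetical goals, not real miniKanren internals): depth-first disjunction starves the second stream whenever the first is infinite, while interleaving stays fair.

```python
from itertools import count, islice

def endless():            # a goal producing an infinite stream of answers
    yield from count(0)

def one_hit():            # a goal with a single answer
    yield "hit"

def dfs(a, b):
    # Prolog-style disjunction: exhaust a before trying b.
    # If a is infinite, "hit" is never reached.
    yield from a
    yield from b

def interleave(a, b):
    # miniKanren-style fair disjunction: alternate between the streams,
    # dropping a stream once it is exhausted.
    streams = [a, b]
    while streams:
        for g in list(streams):
            try:
                yield next(g)
            except StopIteration:
                streams.remove(g)

fair = list(islice(interleave(endless(), one_hit()), 4))   # contains "hit"
unfair = list(islice(dfs(endless(), one_hit()), 4))        # never will
```

The real implementation interleaves at every conjunction and disjunction, but the fairness property is the same.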
As for practical use cases, I really like how easy it is to write analysis code for parse trees and type systems. The C# T-SQL parser (used in various tools) is very tedious to actually use for quick jobs because of how many different node types it has. Instead of handling that monstrosity, I wrote a much smaller app that just went over the whole tree and converted it to JSON (including the types and type hierarchy) via reflection. It was then way easier to load it into Prolog and query the tree.
List comprehensions support a bundle of features including producing multiple values, and filtering them by testing and potentially failing.
Logic programming is that plus some more flexibility (more compositional forms of multiple value production) and some new features (such as using unification to solve for unknowns).
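For instance, a comprehension already covers "multiple values plus filtering by a test that can fail"; what it cannot do is run with either side unknown, which is what unification adds:

```python
# Multiple results, filtered by a test that rejects (fails) candidates:
sums_to_six = [(x, y) for x in range(1, 6) for y in range(1, 6) if x + y == 6]
# A logic language states x + y = 6 as one relation and can also be asked
# "which x makes (x, 4) a solution?" without writing a second loop.
```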
We should expect mainstream languages to evolve to support more logic features, since they provide a more general and more compositional way to do many things. Pattern matching expressions in most languages are a limited special case of logic programming that would benefit from using the more general case. Same with casting constructs, null propagation operators, boolean and/or/not, and the internal compiler logic supporting features like Haskell typeclasses and Rust traits. If we can replace lots of special-case language features with a few general constructs, programming will be simpler, as Niklaus Wirth advocated.
It feels more like it's based around describing a graph, where some vertices have side effects, and then releasing DFS on the graph. I'd call it DFS programming, but I'm also a noob at it, so maybe more experienced people would disagree.