> Pattern matching and polymorphism is not allowed
Using dynamic dispatch on different functions of type Int -> Int is, precisely, polymorphism. Doesn't look much like the OO polymorphism that some people are used to, but FP folks use this kind of functional polymorphism all the time.
That being said, the main premise of the article is quite valuable: yes, you can program with no conditionals. And I think it's quite useful to know that polymorphism can be used like that without suffering too unfriendly a syntax (see lmm's comment on Smalltalk: http://news.ycombinator.com/item?id=5198419).
Finally, although pattern matching can be seen as an overpowered if-statement, the relation can also be turned around: the if-statement is a special case of pattern matching. I won't say that pattern matching isn't conditional, but I think it's also quite valuable to notice that you can use pattern matching as a programming primitive in place of the if-statement. For example, the if-statement implemented in Haskell:
if' :: Bool -> a -> a -> a    -- 'if' is a reserved word in Haskell, hence if'
if' True  x _ = x
if' False _ y = y
Relying only on pattern matching instead of conditional if-statements can make quite the difference in how one reasons about programming problems (e.g. Prolog and declarative programming in general).

Where I disagree with the author is in pointing out that there is no real difference between declaring "fib(1) -> 1" in a pattern matching function definition and inserting a pointer to a function that returns 1 into an array of function pointers. Pattern matching and subtype polymorphism can even still get all the gains the author is looking for--namely, being able to debug branching by inspecting the call stack. For that matter, so would pushing a dummy stack frame every time you come to a branch, or even building a separate branch stack.
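For concreteness, the function-pointer version being compared here might be sketched like this in C (a sketch; the helper names `fib_base` and `fib_rec` are mine, not from the article):

```c
#include <assert.h>

/* fib dispatched through an array of function pointers; the only
   "branch" left is the index expression n < 2. */
static int fib(int n);

static int fib_base(int n) { return n; }                      /* fib(0)=0, fib(1)=1 */
static int fib_rec(int n)  { return fib(n - 1) + fib(n - 2); }

static int fib(int n) {
    static int (*const fn[2])(int) = { fib_rec, fib_base };   /* index 1 when n < 2 */
    return (*fn[n < 2])(n);
}
```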
My goal was to show that you can do conditional statements and polymorphism without having those constructs by implementing them on top of other constructs (and possibly vice versa). And by looking at them from a different angle, you may end up with some ideas which may or may not be of value, such as the debug-thing. Building a separate branch stack would be the same thing, the question at hand is how you find such an idea/solution to a problem.
Sorry, but "n < 2" is a conditional, whether you're using an if statement, the "?" operator, or a reference into an array of function pointers (at least from this old-school C/assembler programmer's perspective). Whereas that Erlang code looks elegantly conditional-free. Clearly branching will appear behind the scenes, but the Erlang code has no conditionals.
Whether that's actually faster is debatable though. If you're writing in C, you're probably concerned about performance, and the function pointer invocation is going to be a lot slower than a branch mispredict. A sufficiently complex array dereference will slow it down too, but I wouldn't be surprised if the compiler can turn simple cases like this into an equivalent switch/jumptable.
Thus conditional-free programming (as presented here, in C) is mostly academic. Debug tracing is an interesting application, but readability and performance both suffer.
If a language could internalize conditional-free constructs natively, however, it could be an interesting study.
set A, 0
ifg B, 2
set A, 1
It conditionally evaluates a machine instruction, which is, indeed, branching.

Do you think this is true because C code has traditionally relied so heavily on branching?
Put another way, perhaps branch-free programming has greater speed potential, but the tools we use have been heavily optimized for branch-laden programming. Maybe putting sufficient resources into optimizing code for branch-free programming (and stripping out optimizations for branching) would yield faster execution.
I don't know the answer, just musing. I am curious, though.
int res = (*fn[(n < 2)])(n);
As stated by others, "n < 2" is a conditional or comparison or whatever (let's just say it's cheating :P), so let's call it "m".

So we choose our function pointer with:

int res = (*fn[m])(n);

where

uint32_t m = n < 2 ? 1 : 0;

and, assuming n is non-negative, m can also be computed without the comparison:

uint32_t top_bits = n & 0xFFFFFFFEu;
uint32_t p = 0;
for (int i = 0; i < 32; i++) {
    p |= (top_bits & (1u << i)) >> i;  /* p ends up 1 iff any bit above bit 0 is set */
}
uint32_t m = 1 - p;
Edit: You can do the for loop's check of whether any bits are set in logarithmic time. For general comparisons see http://stackoverflow.com/questions/10096599/bitwise-operatio...
Oh, and the "i < 32" doesn't count as a conditional, as you could just unroll the loop...
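The logarithmic-time version alluded to might look like this (a sketch; `any_bit_set` and `m_for` are names I made up):

```c
#include <stdint.h>
#include <assert.h>

/* OR-fold the word onto itself in 5 shifts instead of a 32-step loop;
   afterwards bit 0 is the OR of all 32 bits. */
static uint32_t any_bit_set(uint32_t x) {
    x |= x >> 16;
    x |= x >> 8;
    x |= x >> 4;
    x |= x >> 2;
    x |= x >> 1;
    return x & 1u;   /* 1 if any bit of x was set, else 0 */
}

/* m = 1 exactly when n < 2, i.e. no bit above bit 0 is set */
static uint32_t m_for(uint32_t n) {
    return 1u - any_bit_set(n & 0xFFFFFFFEu);
}
```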
if(a > b) {...code...}
you do something like (a > b).ifTrue({...code...})
I think it's an approach more languages should use - it means there are fewer special cases in the syntax, and you can compose conditionals or use them in higher-order functions more easily.

https://en.wikipedia.org/wiki/SKI_combinator_calculus#Boolea...
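A rough C approximation of that ifTrue: style, where the condition selects a callback instead of guarding a block (a sketch; every name here is mine):

```c
#include <assert.h>

typedef void (*thunk)(void *ctx);

static void noop(void *ctx) { (void)ctx; }

/* the call site has no if statement: the condition indexes a table */
static void if_true(int cond, thunk body, void *ctx) {
    thunk branches[2] = { noop, body };
    branches[!!cond](ctx);   /* !! normalises cond to 0 or 1 */
}

static void increment(void *ctx) { ++*(int *)ctx; }

/* usage: count how many of the two conditions held */
static int demo_count(void) {
    int n = 0;
    if_true(3 > 2, increment, &n);   /* runs increment */
    if_true(1 > 2, increment, &n);   /* runs noop */
    return n;
}
```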
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

void prettyprint(int a, int b)
{
    /* all ones when a < b, all zeros otherwise (relies on the usual
       implementation-defined sign-propagating right shift) */
    intptr_t lt_mask = (intptr_t)((a - b) >> (sizeof(int) * 8 - 1));
    intptr_t lt = (intptr_t)"%d is < %d\n";
    intptr_t ge = (intptr_t)"%d is >= %d\n";
    char *text = (char *)((lt & lt_mask) + (ge & ~lt_mask));
    printf(text, a, b);
}

int main(int argc, char **argv)
{
    prettyprint(atoi(argv[1]), atoi(argv[2]));
    return 0;
}
The point is that the CPU doesn't have to guess what to execute next, so https://en.wikipedia.org/wiki/Branch_misprediction cannot happen.

And if what you're interested in is using the call stack to keep track of executed code, a more effective approach would be to convert the program to CPS, see https://en.wikipedia.org/wiki/Continuation-passing_style and http://matt.might.net/articles/cps-conversion/.
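As a toy illustration of the CPS idea (my construction, not code from the linked articles), the taken branch becomes a call frame:

```c
#include <assert.h>

/* Instead of returning a value, the comparison hands its result to a
   continuation, so the chosen branch appears on the call stack. */
typedef int (*cont)(int);

static int on_lt(int a) { (void)a; return -1; }  /* continuation for a < b  */
static int on_ge(int a) { (void)a; return  1; }  /* continuation for a >= b */

static int compare_cps(int a, int b, cont lt_k, cont ge_k) {
    cont ks[2] = { ge_k, lt_k };   /* index 1 when a < b */
    return ks[a < b](a);
}
```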
This is perhaps more useful in lisps or Haskell but it's still cool to think about. And, hell, maybe someone implementing a language in C could do something like this for the debugging properties.
And while yes you could take this to the extreme and implement everything on top of SKI combinators, perhaps one wants to start a touch higher up than that. Anyway, maybe that's a useful answer?
I guess I was just puzzled as to why the author felt that avoiding conditionals was a useful exercise in the first place. I realise that I'm probably wedded to a conditional-heavy way of thinking; what threw me was that the author seemed to assume that eliminating conditionals stands as an aim on its own, regardless of minimising use of language features. At the same time, he has a test in his array dereference that seems to me to be effectively a conditional without using the explicit if(){} else{} construction.
TBH I would have thought verifying the implicit conditional and function-pointer jump was no easier than the explicit...
Let True  = (λ x y => x)
Let False = (λ x y => y)

Then to do an if, you just apply the conditional:

((λ x y => x) 1 2) ---> 1
((λ x y => y) 1 2) ---> 2
It's funny, really. Pure lambda calculus can feel a lot like an object-oriented message-passing style.
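Those Church booleans translate almost directly into C function pointers (a sketch; all names are mine):

```c
#include <assert.h>

/* True selects its first argument, False its second. */
typedef int (*church_bool)(int, int);

static int church_true(int x, int y)  { (void)y; return x; }  /* λ x y => x */
static int church_false(int x, int y) { (void)x; return y; }  /* λ x y => y */

/* "if c then t else e" is just application: c(t, e) */
static int church_if(church_bool c, int t, int e) { return c(t, e); }
```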
No, they are not. Calling functions instead of evaluating conditional statements doesn't change the state of the [non-stale part of the] stack at the time you reach the breakpoint.
To achieve this, one would need to execute the code following the branch (and containing the breakpoint) as a callback of each of the "conditional" functions, so the branch followed would indeed appear in the stack. But that would be impractical, as you would overflow the stack extremely quickly - not to mention the performance hit.
def prettyprint(a, b):
    fmt_string = {True: '%d is less than %d', False: '%d is higher than %d'}[a < b]
    return fmt_string % (a, b)

Or even (and this is cheating by using the int constructor):

def prettyprint(a, b):
    fmt_string = '%d ' + ['is higher than', 'is less than'][int(a < b)] + ' %d'
    return fmt_string % (a, b)
I think a good way to get acquainted with FP is to replace conditionals with lookup tables and lambda functions. But it is easy to overdo it; sometimes plain old if-statements are the most readable alternative.

>>> (1 is True) or (True is 1)
False
>>> (0 is False) or (False is 0)
False
>>> isinstance(0, bool) or isinstance(1, bool)
False
>>> isinstance(False, bool) and isinstance(True, bool)
True
Indexing with boolean expressions is bad style to begin with, but if you are going to do it, then a bool cast shows explicitly what you are trying to do.

It uses recursion, while a simple loop would suffice. Your fib(10000000000) will blow your RAM limit with a growing stack (and will be slow), while an implementation with a loop would consume essentially zero RAM and run several times faster.
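The loop version the comment has in mind could be sketched as follows (my sketch; uint64_t overflows past fib(93), so truly huge arguments would need bignums):

```c
#include <stdint.h>
#include <assert.h>

/* Iterative fib: constant stack space, O(n) time. */
static uint64_t fib_iter(uint64_t n) {
    uint64_t a = 0, b = 1;              /* fib(0), fib(1) */
    for (uint64_t i = 0; i < n; i++) {
        uint64_t next = a + b;
        a = b;
        b = next;
    }
    return a;
}
```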
"And as if by magic, there's no need for conditionals": also incorrect. You have a conditional in int res = (*fn[(n < 2)])(n); specifically the (n < 2) part. All you have done, after the hard work, is move the conditional from one line to another.
"But we're still not sure of which branch we took." - So why don't you put the breakpoint *before* the condition occurs, and step through it in the debugger?
They're coverage tools rather than debuggers, but they can give you an html-browseable version of your source code coloured like that to show you which lines have been hit and how many times.
After reading it, there's a chance I will use some of the insight I've gained in my Conception project.