The straw man in the post - talking about a case-sensitive matcher that selectively called one of two different functions based on a boolean - is indeed trivially converted into calling a single function passed as an argument, but it's hard to say that it's an improvement. Now the knowledge of how the comparison is done is inlined at every call point, and if you want to change the mechanism of comparison (perhaps introduce locale sensitive comparison), you need to change a lot more code.
That's one of the downsides of over-abstraction and over-generalization: instead of a tool, a library gives you a box of kit components and you have to assemble the tool yourself. Sure, it might be more flexible, but sometimes you want just the tool, without needing to understand how it's put together. And a good tool for a single purpose is usually surprisingly better than a multi-tool gizmo. If you have a lot of need for different tools that have similar substructure, then compromises make more sense.
This is just another case of the tradeoff between abstraction and concreteness, and as usual, context, taste and the experience of the maintainers (i.e. go with what other people are most likely to be familiar with) matters more than any absolute dictum.
It seems like every time someone writes an article on how to write better code, there are responses about how it doesn't make sense when taken to some logical extreme, or in some special case, as if that invalidates the argument. (FP techniques in particular seem to provoke this.) But code design is like other design disciplines: good techniques aren't always absolutes.
Do you really think that because the given example doesn't apply to every situation it's a 'straw man'? It is a little tiring to hear all code design advice dismissed this way.
To anyone out there who clicked through to these comments and is thinking it's not worth reading the article, please go ahead and read it. It's short enough. You may or may not use fewer if-statements in the future, but it might give you a better sense of why you choose to do things one way over another.
I've seen junior devs take this kind of stuff literally and over-apply it, like it's a religious ritual that they get a pious buzz from adhering to. I'd prefer people to think first before regurgitating what they most recently learned.
I notice this form of dismissal in virtually all internet arguments. It's like most people aren't aware of the difference between a strong argument and a sound argument.
This is trivially true; any datatype can be encoded as a function. The post is not saying that we can pass any type of lambda whatsoever, but that we should pass lambdas that implement the required functionality.
> The straw man in the post - talking about a case-sensitive matcher that selectively called one of two different functions based on a boolean - is indeed trivially converted into calling a single function passed as an argument, but it's hard to say that it's an improvement. Now the knowledge of how the comparison is done is inlined at every call point
If call sites shouldn't choose which lambda (or boolean) to pass, simply define a new function that always passes the same lambda to the original function, and use it everywhere. (This could also be a good case for partial application.)
To elaborate: this is called the church encoding of the data type. Particularly interesting for recursive data types.
The most common example is probably 'foldr' (or 'reduce' in Lisp-parlance) for linked lists.
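A quick sketch of that idea in Python (the names `nil` and `cons` are made up for this illustration): a Church-encoded list is represented by its own right fold, so 'foldr' falls out for free.

```python
# Church-encoded linked lists: a list is represented by its fold.

def nil(f, z):
    # Folding the empty list just returns the accumulator.
    return z

def cons(head, tail):
    # Folding a non-empty list combines the head with the folded tail.
    return lambda f, z: f(head, tail(f, z))

# The list [1, 2, 3], encoded as a function.
xs = cons(1, cons(2, cons(3, nil)))

# foldr-style reduction is just application:
total = xs(lambda h, acc: h + acc, 0)       # 1 + (2 + (3 + 0))
as_list = xs(lambda h, acc: [h] + acc, [])  # back to a native list
```

This is only a sketch of the encoding, not how you'd store lists in practice, but it makes the "any datatype can be a function" point concrete.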
...and a framework is likely to give you a box of components to build a tool-making factory factory factory...
Let's take the following function invocation, which can be expressed with Boolean literals or Church encoded booleans, I don't care:
match true false
If you want to determine the significance of the boolean values passed to this function, it does not suffice to go to the definition of 'true' or the definition of 'false'. Now take something like this:
match caseInsensitive contains
Even though I have used descriptive names here, it's almost beside the point; I could just as easily have used nonsense names: match foobar quux
If you want to know what 'foobar' means, you can go to its definition, and see how it preprocesses a string and a pattern. You don't have to guess about the meaning of a bit. As a result, the semantics of 'match' and its parameters are all communicated more clearly, with less room for error, and much more generality.
There need not be any syntactic overhead: it is merely the replacement of some flag with a lambda which cleanly encapsulates the effect that would otherwise be encoded in the flag. The way you invoke the function is the same, but instead of twiddling bits to get what you want, you pass functions whose meaning does not require (as much) subjective and possibly error-prone interpretation.
Note this also objectively simplifies the functions themselves, because they formerly contained conditional logic, but once you rip that out and give them no choice (invert the control!), they have less room to err, which makes them easier to get right, easier to maintain, and easier to test.
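A minimal sketch of the two styles in Python (the names `match_flag`, `match`, and `case_insensitive` are made up for illustration):

```python
# Flag version: the callee must decode the bit back into behavior.
def match_flag(pattern, target, ignore_case):
    if ignore_case:
        return pattern.lower() in target.lower()
    return pattern in target

# Lambda version: the caller passes the behavior itself, so the
# callee has no conditional left to get wrong.
def match(preprocess, pattern, target):
    return preprocess(pattern) in preprocess(target)

# Descriptive names for the behaviors, so call sites stay readable.
case_insensitive = str.lower
def case_sensitive(s):
    return s

hit = match(case_insensitive, "Hello", "hello world")
```

Note how the lambda version's body is a single expression with no branch: the control has been inverted out of the function entirely.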
There is also another way to view the issue: with booleans, we first encode our intentions into a data structure (at the caller site), and then we decode the data structure into intentions (at the callee site).
Well, why are we packing and unpacking our intentions into data structures? Why not just pass them through?
Indeed, we do that by pulling out the code and propagating it to the caller site (possibly with names so you don't need significantly different syntax and can benefit from reuse). Then our code more directly reflects our intentions, because we're not serializing them into and out of bits.
I think the general principle applies to more than booleans, but it's easiest to see with booleans.
match caseInsensitive contains
it takes a bit of thought to match the regex-like concept of "case insensitive match flag" to "case insensitivity can be achieved by a transformation of the pattern and target so that case doesn't matter". Perhaps the right way to relieve this burden is to provide some simple functions that can be used for the common cases (caseInsensitive, caseSensitive) and a sensible default (caseSensitive).

    ctx.arc(10, 20, 30, 0, 6.28, false);

There's nothing special about booleans. How do you encode all of those arguments above into types in FP so that it's impossible to get them wrong and so they're self-documenting? I hope you're not suggesting there be a horizontalFloat type and a verticalFloat type, or are you?

I couldn't agree more, and this is why I think most FP programs are about as intellectually stimulating as `std::min_element`
Basically, if you structure the control flow in object-oriented style (or Church encoding...) then it's easy to extend your program with new "classes", but if you want to add new "methods" then you must go back and rewrite all your classes. On the other hand, if you use if-statements (or switch, or pattern matching...) then it's hard to add new "classes" but very easy to add new "methods".
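A tiny Python sketch of the two axes (the Shape names are hypothetical):

```python
# Axis 1: dispatch via classes. Adding a new "class" is a local change;
# adding a new "method" means touching every existing class.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, s):
        self.s = s
    def area(self):
        return self.s ** 2

# Axis 2: dispatch via conditionals. Adding a new "method" is a local
# change; adding a new "class" means touching every such function.
def perimeter(shape):
    if isinstance(shape, Circle):
        return 2 * 3.14159 * shape.r
    elif isinstance(shape, Square):
        return 4 * shape.s
    raise TypeError("unknown shape")
```

Neither layout wins in general; which change you expect more often decides which axis you want cheap.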
I'm a bit disappointed that this isn't totally common knowledge by now. I think it's because until recently pattern matching and algebraic data types (a more robust alternative to switch statements) were a niche functional programming feature, and because "expression problem" is not a very catchy name.
and because "expression problem" is not a very catchy name.
It's not particularly descriptive either, but the page mentions that it's a form of "cross-cutting concern", to which the table-oriented approach basically says "do not explicitly separate the concerns."
(More discussion and an article on that approach here: https://news.ycombinator.com/item?id=9406815 )
As a bit of a fun fact, doing table-oriented stuff in C is one of the few actual uses for a triple-indirection. :-)
> I think its because until recently pattern matching and
> algebraic data types (a more robust alternative to
> switch statements) [...]
Could you elaborate a bit on what this accomplishes, eg. pattern matching vs a "case" statement? As I've programmed in Haskell for the past year or two, I've observed exactly this change in my style of writing - that I've started to get rid of "case" statements inside function definitions, and have moved them into the pattern-matching part instead ("outside" the function definition). But I have to admit, I'm not entirely sure why I do this. It just feels more robust to me in some way.
The case-expression vs function-definition difference you mentioned from Haskell is just syntactic sugar. In both situations you are doing exactly the same pattern matching under the hood.
* [at least some] compilers will check for exhaustiveness
* "Pattern matching isn't just conditional matching. It's also binding, and even some common operations. "
What kind of work has there been on creating programming paradigms that make it easy to both add new types and new methods? Is it a CAP-theorem-type problem where every solution is a trade-off, or is there a way to have your cake and eat it too?
That said, there is a complexity and readability trade-off (that is hard to quantify) because these more flexible programming patterns that can solve the expression problem are more complicated than plain method dispatching or switch statements.
Programs that are full of function indirection aren't necessarily easier to understand than ones which are full of boolean conditions and ifs.
The call graph is harder to trace. What does this call? Oh, it calls something passed in as an argument. Now you have to know what calls here if you want to know what is called from here.
A few days ago, there was this HN submission: https://news.ycombinator.com/item?id=12092107 "The Power of Ten – Rules for Developing Safety Critical Code"
One of the rules is: no function pointers. Rationale: "Function pointers, similarly, can seriously restrict the types of checks that can be performed by static analyzers and should only be used if there is a strong justification for their use, and ideally alternate means are provided to assist tool-based checkers determine flow of control and function call hierarchies. For instance, if function pointers are used, it can become impossible for a tool to prove absence of recursion, so alternate guarantees would have to be provided to make up for this loss in analytical capabilities."
Some type systems are strong enough to put that kind of analysis / constraints directly into the language. (Haskell might already be strong enough with GADTs and other language extensions enabled.)
In any case, the Addendum at the end of the blog post provide a different perspective on the problem you mentioned.
If I were in charge of developing a safety critical system, and someone came to me with a proposal to write it in Haskell, I'd be very skeptical.
If you're going to do this sort of thing with much success, you really need to have a language with a fairly powerful type system. If function pointers are your only option for higher-order programming, I wouldn't even try. First class functions or interface polymorphism help, but I'd also want to have a language that makes it relatively easy to create (and enforce) types so that your extension points don't end up being overly generic.
Here's the deal: if is a flow control primitive. Just like goto and while. If (heh) that primitive isn't high-level enough to handle the problem you are facing, it is incumbent upon you as a programmer to use another, higher level construct. That construct may be pattern matching, it may be polymorphism (or any other form of type-based dynamic dispatch). It may be a function that wraps a complex chain of repeated logic, and is handed lambdas to execute based upon the result. It may, as in the article given here, be a function that is handed lambdas which apply or do not apply the transformation described.
The point is, there are many branch constructs, or features that can be used as branch constructs, in most modern programming languages. Use the one that fits your situation. And if that situation isn't all that complex, that construct may be if.
Fizzbuzz using guards is the most clean and modifiable fizzbuzz that I've seen in Haskell.
Although now that I think about it, if you provide a function with a list of numbers...
Eg Haskell and Scheme get by without 'while' and 'goto'.
Haskell would do just fine without a built-in 'if': you can define 'if' as a function via pattern matching.
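The same trick works in any language with first-class functions. A Python sketch (`if_` is a made-up name): branches are thunks, and selection is a lookup on the two Bool "constructors" rather than any built-in branching.

```python
def if_(cond, then_branch, else_branch):
    # Branches are zero-argument functions (thunks), so only the
    # selected one is evaluated, mimicking a lazy 'if'.
    # Selection is a table lookup on True/False, not a native branch.
    return {True: then_branch, False: else_branch}[bool(cond)]()

result = if_(3 > 2, lambda: "yes", lambda: "no")
```

This mirrors the Haskell definition by pattern matching on True and False, just with a dict standing in for the two equations.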
Given that perspective, the article would be a call to use more expressive types than Booleans to match on---and in lots of cases not to match at all, but provide what would be the result of the match as an argument to the function.
But yes, using more expressive match types or parameters is a good idea. As for providing the result as an argument, that can be a good pattern, but isn't always practical. Note what I said in my original comment about using your own discretion.
"Bad IFs" are a code smell, and they're being scapegoated when the real problems are management demanding that simple hackish prototypes & tests be deployed into production, management that doesn't allow time for refactoring, and poor programmers who think that "bad IFs" are good code.
But the main site also doesn't do any reasonable job of defining what a "Bad IF" even is.
The crux of the matter is that programmers need time to craft the details of a project to avoid or correct technical debt. These sort of reactions just point out one tiny portion of technical debt itself and doesn't solve any fundamental problems at all.
(and yeah, I know I'm ranting against the Anti-IF campaign, not the particular take on the linked site. But this article just seems to parameterize the exact same parameters that are branched on anyway.)
That said, most places I've worked manage it poorly. Few people really understand that, just like financial debt, it's something that needs to be taken on and managed in a mindful and deliberate manner.
argv.nth(1)
.ok_or("Please give at least one argument".to_owned())
.and_then(|arg| arg.parse::<i32>().map_err(|err| err.to_string()))
.map(|n| 2 * n)
I'm waiting for date.if_weekday(|arg| ...)
Reading this kind of thing is hard. All those subexpressions are nameless, and usually comment-less. This isn't pure functional programming, either; those expressions can have side effects. This has not "taken over" Rust. Result is another type that does this, but this makes sense for the same reasons.
That has nothing to do with primitive control flow nor is that an indication of if_weekday appearing anytime soon.
That being said having primitive control flow implemented as methods also has precedent with languages like Smalltalk or Self. That may be unusual but I don't think that's necessarily bad. I would be interested in reading about why this is bad design though.
This article mentions 'if' and 'Boolean'. Loops and lists are another example. (And for the same reason that most languages make such extensive use of loops, Haskell programs can often have a lot of lists.)
    listOf(1, 2, 3, 4).filter { it % 2 == 0 }

.map_err(ToString::to_string) as well. Works just fine with methods. Why add a feature for this (worse syntax) if you can just use an anonymous function?
Rust is already not the simplest of languages, adding further syntax and features of questionable benefit won't make the language any simpler or easier to understand.
With "?" and "try!()", Rust is sort of emulating exceptions in a weird way.
There is a place for it - like when you're trying to express a set of logic that will be guarded by the same condition, but always at the cost of some complexity.
A set of conditionals is probably the most obvious way to express branching.
That's what I recommend too[0] - with the added caveat that you shouldn't be afraid to "knock it" if it turns out to be honestly bad.
Sometimes the idea turns out bad, sometimes it turns out great - but you'll never know it if you don't try; just be honest with yourself during that trial.
Besides object polymorphism and sets of conditionals, there's also generalized predicate dispatch, but that's probably an overkill for many things.
An excerpt:
> The problem is computing the bit in the first place. Having done so, you have blinded yourself by reducing the information you have at hand to a bit, and then trying to recover that information later by remembering the provenance of that bit.
The "destroy all IFs" campaign reminds me of "GOTO considered harmful" from the 70's. There are other ways to fix the problem.
Multiple return values feel elegant also because the compiler can optimize them away when you're not using them, which is the most common case.
(Apart from that languages with multiple return values tend to have some special syntax for binding only the first few members of the returned tuple?)
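Python, for instance, is one such language: tuple unpacking does the binding, and `_` conventionally discards the members you don't need (`divmod_pair` is a made-up example).

```python
def divmod_pair(a, b):
    # Return multiple values as a tuple.
    return a // b, a % b

q, r = divmod_pair(17, 5)       # bind both members
q_only, _ = divmod_pair(17, 5)  # bind only the first, discard the rest
```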
The more I write code the more I realize that the entire purpose of the code is to have some effect on reality, and the more reliably it can do this, the better the code. I find I code a lot better without design principles, because trying to remember which patterns are "good" and "bad" just obscures the attention I would have used to look at the code and sense whether something would work in this particular situation.
Very nicely put. The only "principles" I keep in mind when I write code are simplicity, correctness, and efficiency, and those tend to all be correlated.
/snark, but articles like this really do fall into that trap...
Perhaps. But there's the effect you get from running the code on a computer. And the effect reading the code has on humans.
My gut thinks the solutions will be a little more boring than our inner magpies will want to admit.
The one the whole team understands and can agree upon.
(Though the biggest impact of the refactoring was to remove two home-grown abstractions and a whole bunch of ad hoc transformations and replace them with the appropriate use of the very powerful, and well-understood Applicative.)
I like using functional as much as anyone, and removing branching often does make the code clearer and remove the potential for mistakes.
But I admit I have a hard time with suggesting people prefer a lambda to an IF, or never use an IF at all. A lambda is, both complexity-wise and performance-wise, much heavier than an IF. And isn't it just as bad to abstract conditionals before any abstractions are actually called for?
I have a similar problem, in that every time I try to understand the perspective of functional-programming advocates, I find that the authors always seem to illustrate their points with examples like this:
match :: String -> Bool -> Bool -> String -> Bool
match pattern ignoreCase globalMatch target = ...
If I'm already literate in Haskell or Clojure or Brainfuck or whatever godawful language that is, then chances are I'm already familiar with the strengths of the functional approach, and I'm consequently not part of the audience that the author is supposedly trying to reach. So: are there any good pages or articles that argue for functional programming where the examples can be followed by a traditional C/C++ programmer, or by someone who otherwise hasn't already drunk the functional Kool-Aid?
Really understanding where FP is coming from requires an introduction to programming language semantics[1]. Interesting stuff, but not immediately useful to a working C programmer.
[0] https://existentialtype.wordpress.com/2011/03/15/boolean-bli...
    bool match(char *pattern, bool ignoreCase, bool globalMatch, char *target) { ...

How many layers above that you want to hide that fact is entirely dependent on you and the requirements of the problem you're solving.
I have a problem with people assuming 'their' way is the only way, and generally being oblivious to the vast variety of problems the rest of us encounter.
Hi John,
Are you familiar with Jackson Structured Programming?
https://en.wikipedia.org/wiki/Jackson_structured_programming
Notice how the focus is on using control flow that is derived from the structure of the data being processed and the data being produced. Notice how the JSP-derived solution in the Wikipedia example lacks if-statements.
Pattern matching allows ones to map control flow to the structure of data. What are your thoughts on that? I think inversion of control has other benefits but I don't think it has much to do with elimination of `if` conditionals, the pattern matching does that.
Also, I noticed one thing:
In the article you mention `doX :: State -> IO ()` as being called for its value and suggest that if you ignore the value the function call has no effect. Isn't it the case that a function of that type usually denotes that one is calling the function for its effect and not for any return value? Its value is just an unevaluated `IO ()`.
For instance, at some point there will be a decision made whether the string matching must be case sensitive or not. If the program can do both at runtime, the IF will be, perhaps, in the main (or equiv.).
Good writing has one clear imperative: communicate meaningfully the intent of the author to the reader. Good code is no different; it is merely expressive writing in a different language, with, perhaps, greater constraint on its intent.
Some people make up rules like "don't use adverbs", or "don't split infinitives", in an effort to write better. But this doesn't necessarily produce good writing; sometimes an adverb is just what you need.
The same is true of code. These are useful things to think about, but "destroy all ifs" is akin to "never use a conjunction".
I realize this is one of those irritating "actually," replies, but what can I say, I'm sensitive about this topic. =)
Article: "In functional programming, the use of lambdas allows us to propagate not merely a serialized version of our intentions, but our actual intentions!"
Counterpoint: The use of structured objects instead of black box lambdas allows us to do more than just evaluate them. For example, Redux gets a lot of power by separating JSON-like action objects from the reducer that carries out the action.
But let's take instead the article's example of case-insensitive string matching. One tricky case is that normalization can change the length of the string: we might want the german "ß" to match "SS". Sure, the lambda approach can handle that. But now suppose that we want a new function that gives the location of the first match. It should support the same case-sensitivity options (because why not?). But now there is no way to get the pre-normalization location, because we encoded our normalization as a black box function. Case-by-case code would have handled this easily.
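Python's `str.casefold` shows the length problem concretely:

```python
s = "straße"
folded = s.casefold()  # the German sharp s folds to "ss"

# The lengths differ, so an index found in the folded string does not
# line up with an index in the original. Once matching happens behind
# a black-box normalization lambda, that position information is gone.
pos_in_folded = folded.find("ss")
```

This is exactly the "first match location" trap: the caller gets an offset into a string it never sees.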
Second: the enum-based refactor is actually valuable and fine IMO. If you need string functions, stop there.
Now, shipping control flow as a library is a cool feature of Haskell. But, if those arguments are turned into functions, the match function itself isn't needed! It just applies the first argument to arguments 3 and 4, then passes them to the second argument.
    match :: (a -> b) -> (b -> b -> Bool) -> a -> a -> Bool
    match case' sub needle haystack = sub (case' needle) (case' haystack)
Does that even need to be a function? Perhaps. But if so, it's typed in a and b and functions thereof, and no longer a "string" function at all. And, honestly, why are you writing that function?
Typing it out where you need it typically has less mental impact, because I don't need to worry about the implementation of a fifth symbol named "match."
sub (case' needle) (case' haystack)
So, yes, replacing booleans with a callback is sometimes a good idea. But in other situations, replacing a callback with a simple booleans might also be a good idea.
Also, advice like this is often language-specific. In languages whose functions support named parameters, boolean flags are easy to use and easy to read. If you only have positional parameters, it's more error-prone, so you might want to pass arguments using enums or inside a struct instead.
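For example, in Python, keyword-only parameters make a boolean flag read clearly at the call site (`matches` is a made-up example):

```python
# A bare positional flag is easy to misread at the call site:
#   matches("foo", "FOOBAR", True)   <- which flag is this?
# Forcing the flag to be keyword-only makes it self-documenting.
def matches(pattern, target, *, ignore_case=False):
    if ignore_case:
        pattern, target = pattern.lower(), target.lower()
    return pattern in target

ok = matches("foo", "FOOBAR", ignore_case=True)
```

The `*` in the signature rejects positional use of the flag, so every caller is forced to name it.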
In OOP, you encapsulate data into objects and then pass those around. The data themselves are invisible, they only have interface of methods that you can apply on them. So methods receive data as package on which they can call methods.
In FP, in contrast, the data are naked. But instead of sending them out to functions and getting them back, the reference frame is sort of changed; now the data stays at the function but what is passed around is the type of processing (another functions) you want to do with them.
For example, when doing sort; in OOP, we encapsulate the sortable things into objects that have compare interface, and let the sort method act on those objects. So at the time sort method is called, the data are prepared to be compared. In FP, the sort function takes both comparison function as an argument, together with the data of proper type; thus you can also look at it as that the generic sort function gets passed back into the caller. In other words, in FP, the data types are the interfaces.
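A small Python sketch of the two framings (`Person` and `people` are made-up examples):

```python
# OOP style: the data carry their own comparison behind an interface.
class Person:
    def __init__(self, name, age):
        self.name, self.age = name, age
    def __lt__(self, other):
        # The object itself knows how it compares.
        return self.age < other.age

# FP style: the data are plain tuples, and the processing
# (here, the key function) is passed to the generic sort.
people = [("alice", 33), ("bob", 25)]
by_age = sorted(people, key=lambda p: p[1])
```

Same sort, opposite direction of travel: in one case the data come wrapped in behavior, in the other the behavior is shipped to the data.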
So it is somewhat dual, like a different reference frame in physics.
The FP approach reminds me of Unix pipes, which are very composable. It stands on the principle that the data are the interface surface (inputs and outputs from small programs are well defined, or rather easy to understand), and these naked data are operated on by different functions (Unix commands). (Also the duality is kind of similar to MapReduce idea, to pass around functions on data in the distributed system rather than data itself, which probably explains why MapReduce is so amenable to FP rather than OOP.)
It also seems to me that utilizing this "inversion of control" one could convert any OOP pattern into FP pattern - just instead of passing objects, pass the function (method which takes the object as an argument) in the opposite direction.
I am not 100% convinced that FP approach is superior to OOP, but there are two reasons why it could be:
1. The "nakedness" of the data in FP approach makes composition much easier. In OOP, data are deliberately hidden from plain sight, which destroys some opportunities.
2. In OOP, what often happens is that you have methods that do nothing but pass the data around (encapsulate them differently). In the FP approach, this would become very easy to spot, because the function passed in the other direction would be the identity. So in FP, it's trivial to cut through those layers.
type Case = String -> String
-- ...
type Announcer = String -> IO String
I would argue that these are actually much worse than not having type synonyms at all. (String -> String) functions could do anything to your query parameter and text; the type is too coarse, and the inhabitants too opaque, for us to reason about them easily. Naming the type suggests the problem is solved without actually having solved it. It is like finding a hole in the ground and covering it with leaves, so you don't have to look at it anymore. You are literally making a trap for the next person to come this way.
In an ideal world you would be able to use refinements to say that you want any (f :: String -> String) such that `toUpper . f = toUpper` but without such facilities, I think I may just settle for:
newtype Case = CaseSensitive Bool
Sometimes, your type really does only have two inhabitants:

    data Case = CaseSensitive | CaseInsensative

This is just as efficient as the newtype, and leads to clearer code when matching on the value. Also, sometimes types you thought only had two inhabitants get a third one added later, which this facilitates.
The difference between:

    CaseSensitive
    CaseInsensitive

is harder to spot (for me) than between:

    CaseSensitive True
    CaseSensitive False
This is because the bit that is the same is all on one side, and the bit that is different is all on the other side. Case in point, your data definition has a typo: `CaseInsensative`, which occurs after the `In` shifts it away from the bit it should be the same as in `CaseSensitive`. Every little bit helps. What's more, while you may be right that at the surface the two representations are equally performant, what the newtype has that the data declaration does not is the Prelude's definitions of all the boolean operators. If you wish to perform any more complicated logic with your data declarations, treating them as booleans, you must either cast them to booleans (which comes at a runtime cost), or you must replicate the functionality of the Prelude for your custom type (which comes at a development cost).
Your branching logic (which, let us suspend disbelief and say is "not so bad", just for now) may require the combination of multiple such booleans, which in your encoding scheme would each get a different type due to their semantics, then we can't even viably define our custom boolean operators, so are forced to cast everything to booleans.
The point I'm making here is that outwardly, you want the type to reflect the semantics of how its values are used, but inwardly, you want access to its representation in a way that makes it easy to combine (or put another way, depending on who's looking, the semantics of a value changes).
Also, there is nothing stopping you from changing code later to meet changing needs. Using a newtype now doesn't preclude you from ever using a data declaration in the future. Certainly, you will have to change the patterns and constructors used in a couple of places, but that is a matter of minutes: Time you have already spent weighing the future implications of this decision in your mind right now, so this sensation of time saved is a fallacy.
It’s no wonder that conditionals (and with them, booleans) are so widely despised!
They are? Even standard algorithms like quicksort[1] use conditionals.
And, while I can see how massive switch statements suck, normal conditionals are common in everyday life: "If they don't have a dark roast coffee, get me a medium roast."
All of which is to say, I really don't understand what he's getting at. The last example he gave seemed to make things even more complicated, and it basically renamed "true" and "false" to more descriptive things (forRealOptions, dryRunOptions), which seems to my untrained eye to boil down to the moral equivalent of a C enum.
"They had dark roast so I got you nothing as requested."
IOW, this program is either incomplete or wrong. Cf. "Get me the darkest roast they have." - ifless, concise, robust.
    publish :: Bool -> IO ()
    publish isDryRun =
      if isDryRun
        then do
          _ <- unsafePreparePackage dryRunOptions
          putStrLn "Dry run completed, no errors."
        else do
          pkg <- unsafePreparePackage defaultPublishOptions
          putStrLn (A.encode pkg)
This would be nicer if you could write multiple function clauses with pattern matching. In Elixir this would be:

@spec publish(boolean) :: any
def publish(true = _isDryRun) do
_ = unsafePreparePackage dryRunOptions
IO.puts "Dry run completed, no errors."
end
def publish(false = _isDryRun) do
pkg = unsafePreparePackage defaultPublishOptions
IO.puts (A.encode pkg)
end
Pattern matching is pretty powerful, even going as far as to give a dynamic, non-statically typed language like Elixir the ability to 'destroy all ifs' too.

    publish :: Bool -> IO ()
    publish True =
      unsafePreparePackage dryRunOptions >>
      putStrLn "Dry run completed, no errors."
    publish False = do
      pkg <- unsafePreparePackage defaultPublishOptions
      putStrLn (A.encode pkg)

The only upside to pattern matching that I can see is that you are forced by the compiler to match all possible inputs and check for nulls in some languages, which can help you avoid null pointer exceptions and such. But you haven't encapsulated anything, or saved yourself any thinking or typing, by using pattern matching. You've basically turned every function into a switch statement. It's vastly overrated.
Suppose you wish to add a new branch case. Under the traditional if/else (or switch) model, you'd need to modify the function containing the if statements. With pattern matching, you simply introduce a new function; it decentralizes the change and acts as a sort of simple, intuitive polymorphism.
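Python's `functools.singledispatch` has a similar flavor: a new "branch" is a new registered function, not an edit to an existing if-chain (`describe` is a made-up example).

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # Fallback case, like the final else of an if-chain.
    return "something else"

@describe.register(int)
def _(value):
    # Adding this "branch" required no edit to describe itself.
    return f"int: {value}"

@describe.register(str)
def _(value):
    return f"str: {value}"
```

Each registration is a decentralized addition, which is exactly the property claimed for pattern-matched function clauses.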
    publishLive
    publishDryRun

Which, of course, is not the point of the article either.

https://bitbucket.org/iopq/fizzbuzz-in-rust/overview
I still had to bottom out at https://bitbucket.org/iopq/fizzbuzz-in-rust/src/9e5fcaabbd5f...
https://github.com/pblasucci/DeepDive_ActivePatterns
This feature allows one to encapsulate conditional matching on arbitrary input and dispatching.
For those who know ML, it is making the concept of pattern matching extensible to any construct.
https://www.reddit.com/r/functionalprogramming/comments/4t91...
> The problem is fundamentally a protocol problem: booleans (and other types) are often used to encode program semantics.
> Therefore, in a sense, a boolean is a serialization protocol for communicating intention from the caller site to the callee site.
It is recommended that programmers use abstractions whenever suitable in order to avoid duplication and associated errors.
I see a lot of loops, though: a summation is a loop, so a double integral is a loop within a loop. I can't think of a code analogue for a derivative.
FTA, I take it that an 'if' in a function body makes for ugly code.
The other replier already gave you some common examples, but let me add another one: signum(x), which returns whether a number is negative, positive, or zero.
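A Python sketch, for what it's worth: signum can even be folded into arithmetic on booleans, with no explicit if at all.

```python
def signum(x):
    # (x > 0) and (x < 0) are booleans, which Python treats as 1/0
    # in arithmetic; subtracting them yields 1, 0, or -1.
    return (x > 0) - (x < 0)
```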