Anyone who likes this article on my blog should check out this presentation on applying this philosophy to build tools
https://youtu.be/j6uThGxx-18?si=ZF8yOEkd4wxlq84X
While the blog post is very abstract, the video demonstrates how to take the abstract philosophy and use it to help solve a very concrete, very common problem
I'm always surprised reading comments on these topics: people saying they don't grasp FP. But don't we use higher-order functions, closures, and combinators all the time in most mainstream languages? How hard can it be to learn OCaml or Clojure for someone who uses closures all over the place in JS?
Monads have a steeper learning curve, but outside Haskell they aren't that pervasive. And there are constructs with a similar flavor in mainstream languages too (result types in Rust...)
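The Rust-style Result flavor mentioned above can be sketched in Python too (the `Ok`/`Err` names mirror Rust's; the implementation here is only an illustration, not a library API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ok:
    value: object

@dataclass(frozen=True)
class Err:
    error: str

def parse_int(s):
    """Return Ok(...) or Err(...) instead of raising, Rust-style."""
    try:
        return Ok(int(s))
    except ValueError:
        return Err(f"not an integer: {s!r}")

assert parse_int("42") == Ok(42)
assert parse_int("x") == Err("not an integer: 'x'")
```

The point isn't the two classes, it's that failure becomes an ordinary value the caller has to inspect, rather than an exception flying past them.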
I recently gave a talk to some colleagues about that, which was divided into two parts: Practical functional programming patterns we can use today in our codebases (we use mainly C++, Python) and a more abstract part about pure functional programming.
The first part basically boils down to using functions as first-class "objects", while the point of the second part was that there can't be implicit state in a pure functional language. The consequence is that there is no strict order of execution, in direct contrast to imperative programming, which is all about a list of statements executed one after another.
I presented small code examples in Haskell and showed the corresponding execution graphs to emphasise that the compiler can easily optimise the execution order.
I like that POV, because it clearly distinguishes imperative from functional programming. Starting from there, it's also easy to understand the motivation behind monads or elaborate on architectural patterns like "functional core, imperative shell".
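That "functional core, imperative shell" pattern can be sketched in a few lines (the function names here are invented for illustration):

```python
# Functional core: pure decision logic, trivially testable on its own.
def apply_discount(total, is_member):
    return round(total * 0.9, 2) if is_member else total

# Imperative shell: performs I/O at the edge, delegates decisions to the core.
def checkout(total, is_member):
    final = apply_discount(total, is_member)  # pure call
    print(f"charged: {final}")                # the only side effect
    return final
```

All the interesting logic lives in `apply_discount`, which can be tested with plain values; the shell is kept so thin that it barely needs testing at all.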
$ echo "hello world"
hello world
I can do: $ "hello world"
hello world
Is this an indication that it is expression oriented? (Just checking that I've understood the phrase.)

For example, while I can do FP in Rust, I would not really call Rust an FP language. It does have some features that make doing FP possible, but a lot of the code I see (and write) is a mix of imperative and FP.
But if I'm writing Scala, I'm going to be mostly writing FP code, and the language helps me there. Recent versions of Java make it easier to write FP code if you so choose, though you'll still see a lot of people write more imperative code in Java.
(I think if Rust had Scala's for-comprehensions, I'd write more FP-ish Rust. I know there are crates that emulate them, but they're not the most ergonomic, and I don't want to write what ends up being unidiomatic Rust code.)
I think if Rust had Scala's for-comprehensions, it would be a strictly worse language than it is now. FP languages' do- and for-comprehensions are highly unergonomic, not least because of all the ways they can be overloaded. Haskell has tons of extensions; Scala's version basically desugars to flatMap calls. That gets abused a lot in the wild, and its dependence on implicits causes practical problems when you write that code.
Scala's new wave of direct syntax, and its general direction away from monads toward algebraic effects, is saner in how it presents compiler messages, and opens the way to better performance.
We just don’t call them monads.
We don’t really use them in mainstream languages. They exist as idioms for shallow application, but one doesn’t go full-on combinators in JavaScript, for performance reasons if nothing else. What we do in the mainstream is as FP as walking up stairs is compared to rope climbing.
That said I think deep FP makes no sense and only entertains bored intellectuals by being a subspecies of code golf.
- understanding paradigms of state handling enables you to make conscious choices about what to aim for, how to design your data model, etc
- learning the benefits of having all data as values, except at the edges of the system (instead of object refs encompassing mutable state and/or IO handles)
- eg the resulting ability to reason about your system in a principled way (this can still be hard to mentally reach, even given all the ingredients, if you're very used to the "poking the mystery ball" way of relating to your app)
- how it comes together so you can just test a subset of your code at your REPL, being able to paste the input data structure there (because it's just values, not active objects) without having your app framework up and running in a stateful context in a browser or server environment.
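A tiny illustration of that last point, with invented names: because the input is plain data, the function can be exercised at any REPL by pasting a literal, with nothing else running:

```python
# The input is plain data (a list of dicts), so this can be tested at a
# REPL without a browser, server, or framework in a stateful context.
def total_by_user(events):
    totals = {}
    for e in events:
        totals[e["user"]] = totals.get(e["user"], 0) + e["amount"]
    return totals

events = [
    {"user": "ann", "amount": 3},
    {"user": "bob", "amount": 5},
    {"user": "ann", "amount": 2},
]
assert total_by_user(events) == {"ann": 5, "bob": 5}
```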
Another strength of FP is parallel programming. The bottleneck in parallelizing code with multiple threads is concurrency bugs, which come from mutable data. Immutable data and referential transparency are close to a silver bullet for this. (But JS programmers are less likely to appreciate it, since JS is nearly always single-threaded.)
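A quick sketch of why this matters, using only the standard library: a pure function over immutable tuples can be mapped across threads with no locks, because no thread can disturb another's input:

```python
from concurrent.futures import ThreadPoolExecutor

# Pure function over an immutable tuple: any interleaving of threads is safe.
def norm_sq(point):
    return sum(x * x for x in point)

points = ((1, 2), (3, 4), (0, 5))

with ThreadPoolExecutor() as pool:
    results = list(pool.map(norm_sq, points))

assert results == [5, 25, 25]
```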
If you go to https://en.wikipedia.org/wiki/Functional_programming right now you will see "functional programming is a programming paradigm where programs are constructed by applying and composing functions" which is a bit of a ... it is quite hard to program without doing that. Even SQL is basically applying and composing functions. There are languages that are branded together because they have similar communities or techniques, but the brand isn't describing something beyond people thinking that things belong in a basket together.
It may be that different people mean different things when they talk about FP. I come from the Haskell corner and from my point of view that's a completely insane claim, at least when it comes to pure FP.
[ ] Are you writing a procedure that just processes its arguments into a return value?
[ ] Does the procedure also operate on a non-static context?
[ ] Does the procedure generate additional side effects?
If you answer yes|no|no, you are programming purely functionally.
The formal litmus test is the question of whether your procedure actually fulfills the definition of a mathematical function with regard to the relationship between the arguments and the return value. If it also does not generate any side effects, it is a purely functional procedure!
Then you can apply the principles of mathematical composition of functions to the construction of programs. Purely functional languages force you to program this way by default and to use special constructs for everything that has to do with side effects: IO monad, algebraic effects, The Elm Architecture.
How are you supposed to program like this? This is where (in addition to the concepts mentioned above) the higher-order functions that everyone knows come into play: map, fold, filter, flatMap, ... If you only know these and their use in impure languages, as if they were nothing formally thought through but just a programming style for hipsters, then I can understand how the impression arises that there are no precise definitions here.
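For the record, the classic trio looks like this in plain Python, with `reduce` playing the role of fold (the order data is my own example):

```python
from functools import reduce

orders = [12, 80, 7, 44]

# filter: keep the large orders; map: double them; reduce (fold): sum them up.
large = filter(lambda x: x >= 10, orders)
doubled = map(lambda x: x * 2, large)
total = reduce(lambda acc, x: acc + x, doubled, 0)

assert total == (12 + 80 + 44) * 2  # == 272
```

Each stage is a pure transformation of its input, so the pipeline can be read (and tested) one step at a time.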
In practice, the concepts are often very distorted or not understood at all. For many programmers, OOP mainly seems to mean that you get the drop-down list with completions in Rider when you press “.”.
I have mentioned downthread [0] a few characteristics that distinguish FP from other paradigms - namely, first-class functions and higher-order functions (the ability to pass functions to functions); referential transparency, "pure functions" without side effect, and equational reasoning (the ability to replace expressions with their value and keep the behaviour of the program identical); and a tendency to prefer parametric and ad-hoc polymorphism (generics and trait/typeclass bounds) to subtype polymorphism, the "inheritance" boogeyman of OOP.
Again as mentioned in [0] the "expression problem" [2] is a good model for discussing the ergonomics and tradeoffs of FP vs. OOP.
[0] https://news.ycombinator.com/item?id=41450851
> "functional programming is a programming paradigm where programs are constructed by applying and composing functions"
This only seems like a useless truism if you take 'function' to mean 'method' or 'procedure'. If you nail down 'function' to same input -> same output, then it starts to make more sense.
I don't think OOP ever had nearly as strong of a mathematical/scholarly foundation as FP has
Consider whether the same input to your code will result in the same output (aka functions), that's a good place to draw the line.
You can stick 'FP' fancy types and closures into a method; that doesn't turn it into a function.
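To make the distinction concrete (example mine): the first procedure below is a function in the mathematical sense, the second is not, no matter how FP-flavored its surroundings are:

```python
import random

# A function: the output depends only on the arguments.
def slope(p, q):
    return (q[1] - p[1]) / (q[0] - p[0])

# A procedure that is NOT a function: a hidden input sneaks in.
def noisy_slope(p, q):
    return slope(p, q) + random.random()

assert slope((0, 0), (2, 4)) == slope((0, 0), (2, 4))  # always holds
# noisy_slope can give a different answer on every call, fancy types or not.
```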
It's also important to teach the techniques together because functional programming tools are typically (deliberately) less powerful, which makes them easier to analyze, but means you need more tools.
FP never resonated with me and never fit the mental model I have for programming. I tried learning Prolog and Haskell a few times, and I never felt like I could reason about the code. This line from the article:
"[..] With functional programming, whether in a typed language or not, it tends to be much more clear when you've made a trivial, dumb error [..]"
wasn't my experience at all. In my experience, what made it clear when I made a trivial/dumb error was either having good typing present, or clear error messages.
I do always try to use the aspects from it that I find useful and apply them when writing code. Using immutable data and avoiding side effects when possible being the big ones.
I'm glad FP works for the blog author - I've met a few people that say how FP makes it easier for them to reason about code, which is great.
I find FP concepts easy enough to grasp (provided they’re not demonstrated in e.g. Haskell) and even adjacent stuff like typeclasses or monads or what have you aren't a stumbling block, and I'm plenty comfortable with stuff like recursion.
… but I'm firmly on the language-is-more-natural-for-me side of things, and find non-algorithmically-oriented math writing incredibly difficult to follow—I have to translate it to something more algorithmic and step-oriented to make any headway, in fact. I find languages like Haskell nearly illegible, and tend to hate reading code written by dedicated FP fans in just about any language.
While Monads and co. are interesting constructs, I think the main thing with FP (pure or not) is immutability by default.
That alone makes code so much easier to think about, in my experience.
One can do FP in languages not commonly associated with FP by just not (re)assigning variables. FP languages just make it increasingly hard to do so.
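A minimal sketch of that discipline, nothing language-specific about it (names are mine):

```python
# Mutating style: the caller's list is edited in place underneath them.
def double_mut(prices):
    for i in range(len(prices)):
        prices[i] *= 2
    return prices

# Value style: inputs are never reassigned or touched; new values are built.
def double(prices):
    return tuple(p * 2 for p in prices)

base = (10, 20)
assert double(base) == (20, 40)
assert base == (10, 20)  # the original is untouched and safely shareable
```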
Ability to reason about expressions means everything can be run and have a result without side effects (unless you allow, say, lisp effectful builtins). The fact that everything is built around function means you can always unplug or compose them. All this with a very reduced set of ideas and syntax (at least for the core)
On the other hand, most imperative languages require you to pay attention to state, which quickly becomes a mental dead end, with a lot more ceremony and syntax. At least that was true before the 2010s; nowadays everybody has expression-oriented traits and lambdas.
After learning ML/Haskell and doing interesting things with a kind of clarity, I tried going back to C/Python, and suddenly a lot of errors, issues, and roadblocks started to appear.
Then you have the parallel case.
Over time, I have slowly become better at functional programming. But I'm now worse with proofs and essays due to the lack of practice.
It's an annoying trade off, to be sure. With a language that makes writing FP code ergonomic and idiomatic, though, I'm usually going to choose to write that way.
But even in languages where writing in an FP style is a chore (if it's even possible at all), you can take some lessons. The C I write today is more "functional" than 25 years ago, when I'd never even heard of functional programming. I try to avoid global state and mutation and side effects, and write more pure functions when I can. I think about my programs as transformations of data, not as a series of instructions, when I can. My C code today is not very FP at all when you'd compare it to something written in Haskell or Scala or whatever, but it's, IMO, "better" code than what I used to write.
In 2022 I went back to a C-based open source project that I used to work on heavily in the mid-'00s. Reading a lot of that old code makes me cringe sometimes, because the details at the micro level make it really hard to reason about behavior at the macro level. As I've been working on it and fixing things and adding features again, I'm slowly morphing it into something easier to reason about, and it turns out that using functional concepts and idioms -- where possible in C without having to go into macro hell -- is essentially what I'm doing.
You're probably thinking about Haskell. It's like this because the type checking, combined with the FP, makes Haskell especially robust.
That being said, FP is programming nirvana. The FP function is the most modular unit of computation in CS. By writing an FP program composed of functions, you have broken down all sections of your program into the smallest form, by definition.
It is not. FP programs measurably have at least as many defects as non-FP programs.
There is literally no reason to subject yourself and, worse, your users, to the garbage of FP.
That would also explain why Prolog has none of those benefits. If you don't use the extra information, it can't do any good.
But if it is really just that, it can be replicated over imperative languages. Anyway, Rust is evidence that there is something to that idea.
Which is unfortunate, as I like the principles behind it. I wonder if someone will ever write a Rust-like language that has a syntax closer to Java or Haxe.
However, FP's benefits can be overstated, especially for complex real-world systems. These systems frequently have non-unidirectional dependencies that create challenges in FP. For example, when component A depends on B, but B also depends on a previous state of A, their interrelationship must be hoisted to the edges of the program. This approach reduces races and nondeterministic behavior, but it can make local code harder to understand. As a result, FP's emphasis on granular transformations can increase cognitive load, particularly when dealing with these intricate dependencies.
Effective codebases often blend functional and imperative styles. Imperative code can model circular dependencies and stateful interactions more intuitively. Thus, selectively applying FP techniques within an imperative framework may be more practical than wholesale FP adoption, especially for systems with complex interdependencies.
Does it increase cognitive load, or does it just make the cognitive load more apparent? Sure, it's easier to write multithreaded code if you assume race conditions can't happen, but that's not actually accurate to what would happen.
Perhaps FP just makes explicit in the typing/coding portion, what would otherwise be uncovered hours/days/weeks later in a bug?
These aren’t strictly necessary for effectively using functional techniques in an imperative environment, but they can go a long way toward providing useful guardrails you don’t have to figure out yourself.
However, circular dependencies can be represented with lazy (aka non-strict) references and deferred function calls (aka thunks / call-by-name), and are IMO easier to reason about than mutable imperative techniques for representing such relationships. They also have the advantage of being totally thread-safe.
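A rough sketch of the thunk idea in Python (names invented): two values that refer to each other, with each cross-reference deferred behind a zero-argument function:

```python
# Two records that refer to each other; the cycle is broken by wrapping
# each cross-reference in a zero-argument function (a thunk).
def make_pair():
    a = {"name": "a", "peer": lambda: b}  # b is looked up only when called
    b = {"name": "b", "peer": lambda: a}
    return a, b

a, b = make_pair()
assert a["peer"]()["name"] == "b"
assert b["peer"]()["peer"]() is b  # following the cycle round-trips safely
```

Neither record is ever mutated after construction, yet the circular relationship is fully representable.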
The OP (Lihaoyi) is the author of an important set of Scala libraries, and Scala is an example of what you are describing. Scala is a hybrid OO / FP language that is not dogmatic about purity. You can be totally imperative if you want.
It is common in the Scala ecosystem to implement performance-critical library code using local mutability and null references. Internally the function is imperative; to the caller, it is functionally pure.
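The same trick is easy to sketch in Python (example mine): the implementation mutates local variables freely, but no mutation is observable from outside, so callers can treat it as pure:

```python
# Imperative inside, pure outside: the caller cannot observe the mutation.
def running_max(xs):
    out, best = [], None
    for x in xs:                 # local mutable loop, for speed and clarity
        best = x if best is None or x > best else best
        out.append(best)
    return tuple(out)            # only an immutable value escapes

assert running_max((3, 1, 4, 1, 5)) == (3, 3, 4, 4, 5)
```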
Maintainable code? I dunno. From what I’ve seen, FP is much worse for changing when you have shit deep in your call stack. I wouldn’t call that maintainable.
More readable? Nah. Nearly everyone has a much easier time consuming code when they're not constantly context-switching between function jumps all over the place.
Additionally, FP only makes threading “easier” in one very specific circumstance. Once you need to parallelize the same workload, FP is generally much much harder to thread, whereas this is trivial in other languages.
To me, all of this could be summarized by "no side effect".
Instead of whip(cream), which needs to be under flow control, you have whipped_cream = whip(cream), describing the flow of data. Data flow describes the relationship between non-changing entities, and thus there are no side effects.

While they could have mentioned it, it wouldn't really change the message.
To me (someone who really doesn't like FP as a religion), FP is about function application: everything should just be "pure" function calls taking values (not references) and returning values; immutability is indeed a corollary of that, static typing really isn't (isn't Church's original LC untyped, btw?).
* If you don't have side effects, you can't do anything.
* Haskell can do things? Then it has side-effects, so it's not functional.
* Computer getting hot is a side-effect.
* i++ is not a side-effect, because I intended to do it.
For me, the essence of FP is the minimization of global state
What's Functional Programming All About? - https://news.ycombinator.com/item?id=15138429 - Aug 2017 (139 comments)
What's Functional Programming All About? - https://news.ycombinator.com/item?id=13487366 - Jan 2017 (2 comments)
Eggs.mix()
.whip()
.beat();
There's even a term for it - 'fluent API' - described as:

> Fluent API is a way of implementing object oriented API in a way that it provides readable code. [https://www.progress.com/documentation/sitefinity-cms/for-de...]
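A sketch of what such a fluent API might look like in Python, using the method names from the snippet above (the implementation is invented, and kept immutable so each call returns a fresh value rather than mutating the receiver):

```python
class Eggs:
    """Each step returns a new Eggs value, so the fluent chain is immutable."""
    def __init__(self, steps=()):
        self.steps = steps          # record of the steps applied so far

    def _then(self, step):
        return Eggs(self.steps + (step,))

    def mix(self):
        return self._then("mix")

    def whip(self):
        return self._then("whip")

    def beat(self):
        return self._then("beat")

dish = Eggs().mix().whip().beat()
assert dish.steps == ("mix", "whip", "beat")
```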
Writing functions in FP is essentially all about returning results from a function, which is proof that a computation has occurred. If you don't have that return value, that proof, then obviously the rest of your code can't and shouldn't continue, and FP makes this obvious compared to more traditional imperative approaches.
And this idea extends into so many other things that people consider core, or at least originating from FP. Result/Option types come to mind, making the possible failure of "proof of work" explicit in the type signature, so people are forced to consciously handle it. It also leads into the whole idea of type driven design, one of my favorite articles, "Parse, don’t validate"[1], describes this as well, making types that clearly restrict and set the expectations needed for the "proof of work" to always be returned from a function.
[1] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
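A minimal "parse, don't validate" sketch in Python (the types and names here are mine, not from the article): the `NonEmpty` type is the proof that the check happened, so downstream code never re-checks:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NonEmpty:
    """Carrying a value of this type is proof the emptiness check happened."""
    head: object
    tail: tuple

def parse_non_empty(xs):
    # Parse once at the boundary; fail here instead of checking everywhere.
    if not xs:
        raise ValueError("empty input")
    return NonEmpty(xs[0], tuple(xs[1:]))

def first(ne):
    return ne.head  # no "what if it's empty?" branch needed downstream

assert first(parse_non_empty([3, 1, 2])) == 3
```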
This assumes no contention on a limited number of bowls or having only one kitchen tool for beating or whisking etc. :P
I point that out not to demand that the metaphor be bulletproof, but because I think it may help explore something about state-handling and the FP/imperative split.
How might this tiramisu-tree model change if we were limited to N bowls and 1 beater and 1 whisk, unable to treat them as statelessly-shareable or endlessly-duplicable?
I have yet to see a more practical, elegant solution than Haskell's STM for this: https://www.adit.io/posts/2013-05-15-Locks,-Actors,-And-STM-...
None of those things are exclusive to fp. They are just tricks developed by fp.
Here’s how to get a high level characterization of what fp actually is:
You know how in math and physics they have formulas? Formulas for motion, for area, etc.
Functional programming is about finding the formula for a particular program.
Typically when you code you write an algorithm for your program. In functional programming you are writing a formula. Think about what that means.
The formula has side effect benefits that make it easier to manage complexity and correctness. That’s why people like it. These side effects (pun) are not evident until you programmed with fp a certain amount.
Obviously, though, people naturally reason about things in the form of procedures rather than formulas, so FP tends to be harder than regular programming.
Can be disproved trivially. Anyone can code up a program that generates the nth prime for some n, by iteratively accumulating n primes starting from 2 using trial division. otoh, an actual formula that produces the nth prime would be an earth shaking event.
So I ask you... how does a functional program do the algorithm you ask for above?
Recursion and the ternary operator. All iterative programs can be written in recursion and vice versa.
Another way of thinking about this: rather than formula, think expression. Functional programming is about coding up an expression that can fit on one line.
Anything that breaks the program into multiple lines means you're turning your functional expression into a list of procedures.
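Taking the earlier nth-prime challenge, here is the iterative trial-division loop rewritten as recursion plus conditional expressions (a sketch of the claim above; fine for small n, though Python's recursion limit makes it impractical for large ones):

```python
def is_prime(n, d=2):
    # Trial division as a single recursive expression.
    return True if d * d > n else (False if n % d == 0 else is_prime(n, d + 1))

def nth_prime(n, candidate=2, found=0):
    # The iterative "accumulate n primes starting from 2" loop, as recursion.
    return (candidate
            if found + 1 == n and is_prime(candidate)
            else nth_prime(n, candidate + 1,
                           found + (1 if is_prime(candidate) else 0)))

assert nth_prime(1) == 2
assert nth_prime(10) == 29
```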
It's always somewhat possible to express what are typically considered imperative features in a functional fashion (e.g. monad), or bend imperative languages to behave somewhat like functional ones: I think the differences become clearer once we reach out for the underlying theoretical models.
Erlang & Elixir too.
I “grasp” FP, but that pretty much sums up my experience with it. Half of its promises it delivers in the form of “you got used to reading a multi-faceted, inside-out mess”. I think FP is actually harmful, because it masks the fact that we don’t properly teach regular programming.
Your quote is specifically about the code formatted in such a way, with arrows drawn all over it, to match the box diagram. But if you look at the code as it would actually be written, in a functional style:
def make_tiramisu(eggs, sugar1, wine, cheese, cream, fingers, espresso, sugar2, cocoa):
beat_eggs = beat(eggs)
mixture = beat(beat_eggs, sugar1, wine)
whisked = whisk(mixture)
beat_cheese = beat(cheese)
cheese_mixture = beat(whisked, beat_cheese)
whipped_cream = whip(cream)
folded_mixture = fold(cheese_mixture, whipped_cream)
sweet_espresso = dissolve(sugar2, espresso)
wet_fingers = soak2seconds(fingers, sweet_espresso)
assembled = assemble(folded_mixture, wet_fingers)
complete = sift(assembled, cocoa)
ready_tiramisu = refrigerate(complete)
return ready_tiramisu
...then surely you would agree that that doesn't look like a mess at all, compared to the imperative version a couple of lines below it? And yet, it brings all the benefits described in the following paragraphs!

> Languages like Java encourage patterns where you instantiate a half-baked object and then set the fields later.
Maybe it did before 2010. For many years now everyone has preferred immutable objects (so that an object and its builder have different types, or in the simpler case, no setters, only constructor initialization). You can see it in pretty much all frameworks.
I'm ok with both functional and procedural languages; I just think this article is not about functional programming. Come on, FP is all about monads!
Moreover, the "imperative" code example would be impossible anyway with immutable objects and no side effects. So what I think the article is really about is immutable data types. Everyone agrees that immutables are better; we only resort to mutables when optimizing for CPU/RAM.
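What "no setters, only constructor initialization" looks like with a frozen dataclass (names mine): mutation becomes construction of a new value:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)   # no setters: fields are fixed at construction
class Account:
    owner: str
    balance: int

def deposit(acc, amount):
    # "Mutation" is a new value; the old one stays valid and shareable.
    return replace(acc, balance=acc.balance + amount)

a = Account("ann", 100)
b = deposit(a, 50)
assert (a.balance, b.balance) == (100, 150)
```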
And BTW, concurrency is typically easily achieved if variables are not plain objects but Future/Observable/Flow/Stream -- pick your poison. They have all passed the hype hill already and become "just a tool", I think.
(sorry, bitter personal experience) And yes, that is explicitly "modern" Java code written by a lead engineer and "java champion" in 2023.
In FP the fundamental unit of composition is the function; your solution is expressed as a composition of functions, which are treated as first-class objects (first-class meaning, functions can be manipulated via higher-order functions, or "functions of functions"), a feature not always seen in object-oriented or multi-paradigm languages. Polymorphism is most frequently achieved through parametric polymorphism, or "generics," and ad-hoc polymorphism, or "trait bounds"/"typeclass constraints".
In OOP the fundamental unit of composition is the object; your solution is expressed as a composition of objects, whether through actual composition/aggregation (objects containing and possibly delegating to other objects [1]), or subtype polymorphism, also known as "inheritance." Parametric and ad-hoc polymorphism can often feature in OOP languages as well, but subtype polymorphism is a distinguishing characteristic of OOP.
Functions, particularly pure functions without side effects in the "real world" such as I/O or hardware access, are akin to equations in which an expression can be replaced by its value - the "left-hand side" equals the "right-hand side." Mutable state often does not enter into the picture, especially when programming in this "pure" (side-effect free) style. This makes functional programs easier to reason about equationally, as one can determine the value of an expression simply by inspection of whatever variables are in the function's scope, without having to keep track of the state of the entire program.
Objects are distinguished by their often stateful nature as they bundle together data/internal state, and operations over that internal state. Often such internal state is hidden or "encapsulated" from the client, and the internal state is only modifiable (if at all) via the object's class' set of public methods/operations. Objects with immutable internal state are more akin to closures from functional programming - that is, functions with access to a "parent lexical scope" or "environment."
Between the two extremes exists an entire spectrum of mixed-paradigm languages that incorporate features of both approaches to structuring and modelling a software solution.
[0] https://wiki.c2.com/?ExpressionProblem
[1] https://en.wikipedia.org/wiki/Composition_over_inheritance
I have refactored large enterprise systems, and I always tend to write in an FP style depending on the language. I'm not talking about full-blown Haskell here, but traditional C-like languages.
Just keeping it simple, avoiding mutation, isolating side effects, and having pure functions will take you a LONG way toward better software.
You don't have to go balls deep in category theory for FP, and usually the language you use is not really suited for a monadic way of things; I don't like to shoehorn that stuff in if I'm not working with something like OCaml etc.
That's not right. The difference between data and control flow oriented programming is the difference between laziness and eagerness. The difference between imperative and functional programming is largely one of abstraction, not a feature in itself.
The genuine core feature of FP is the separation of data and methods, that stands in contrast not to imperative programming but object oriented programming, whose core feature is fusion of data and methods. Functional programming tries to address complexity by separation of concerns, pulling data and methods apart, OO tries to address complexity by encapsulation, pulling data and methods into purely local contexts.
This is also where the association between FP and static typing comes from that the post briefly mentions. Pulling data and functionality aside lends itself to programming in a global, sort of pipe based way where types act like a contract between different interacting parts, whereas the information hiding of OO lends itself to late binding and dynamic programming, taken to its most extreme version in say, Smalltalk.
ML, one of the standard-bearing functional programming languages, is at least partially defined by its powerful module system. And an ML module serves a similar sort of encapsulatory purpose as the class does, often binding together a type and functions that operate on it - the internals of the type sealed away such that only those functions can operate on that data.
Yes, because the closure solves a problem in functional programming by injecting a bit of OO. Closures are just stateful function objects with a single call method operator where the local context serves the same purpose as private variables in an object.
It's exactly because FP otherwise lacks the coupling of data and methods that closures are so important, and it's why, the other way around, in languages where functions are literally first class objects, you achieve that through closures.
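The closure/object correspondence is easy to demonstrate (example mine): a closure and a one-method object are two spellings of the same thing:

```python
# A closure: the captured variable plays the role of a private field.
def make_counter():
    count = 0
    def tick():
        nonlocal count
        count += 1
        return count
    return tick

# The equivalent object: private state plus a single call operator.
class Counter:
    def __init__(self):
        self._count = 0
    def __call__(self):
        self._count += 1
        return self._count

tick = make_counter()
obj = Counter()
assert (tick(), tick()) == (obj(), obj()) == (1, 2)
```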
Couldn't disagree more. By this definition, the C language would be the epitome of functional programming... but it is not.
> The genuine core feature of FP is the separation of data and methods, that stands in contrast not to imperative programming but object oriented programming, whose core feature is fusion of data and methods
If you wrote this in a book and gave it to me as a way to learn what FP is, I'd be pretty pissed if I paid for that book is all I'm saying.
Also, what in the actual fuck are you on about? FP is many things, and some of them are less enforced than others depending on the implementation, but I'm pretty sure each FP book I've looked at has mentioned exactly this idea of data flow. So either you are an amazing genius who sees what others can't, or you are just generating nonsense. Either way I don't care; HN is kind of a joke to me anymore.