So what flavor of functional programming, one might ask, since JavaScript is a dynamically typed flavor that is ubiquitous nowadays? The fine article suggests, drum roll... Haskell! The author believes a statically typed and lazily evaluated language is what we all should be using, unlike the various dynamically typed options like Smalltalk, Scheme, or Lisp. Standard ML and OCaml, being statically typed, are eagerly evaluated.
Most popular languages have added a lot of functional features in recent years so it’s clear the world is moving that way, imo.
That was my first thought. I work mostly in Java because that's what they pay me to do, but I've almost never worked with a Java programmer who could actually write Java code using the OO features that the language is based around. When I see their Scala code... it's mostly var, rarely val, because it's easy to think about.
I don't understand this. The language is based around primitive, flawed, simplistic OO features, right? Like "class Dog : Animal"? I never write code like that either, because it's bad practice. But you're saying they can't write code like that? Or that they don't use classes at all? How can you even write any Java that way?
But my code is flush with collection.stream().filter().map().collect() etc. I was initially critical of it in code reviews (coming from C), but have been totally converted.
Anecdotally, the tooling is why I gave up on OCaml (given Rust's ML roots, I was seriously interested) and Haskell. I seriously couldn't figure out the idiomatic OCaml workflow/developer inner loop after more than a day of struggling. As for Haskell, I gave up maybe 20 minutes in, waiting for deps to come down for a Dhall contribution I wanted to make.
Institutionally, it's a hard sell if you need to train the whole team to just compile a project, vs. `make` or `cargo build` or `npm install && npm build`.
Haskell is a poor language to be doing mathematics in.
I’d say the majority of people working on GHC are software developers and CS researchers. They’re a friendly bunch.
What’s holding back tooling is that the developers of the de-facto compiler for Haskell are spread out amongst several organizations and there isn’t tens of millions of dollars funding their efforts. It’s mostly run by volunteers. And not the volume of “volunteers” you get on GCC or the like either.
That makes GHC and Haskell quite impressive in my books.
There are other factors of course but the tooling is improving bit by bit.
The whole “Haskell is for research” meme needs to go into the dustbin.
Do you have any references for the "Rust is heavily influenced by FP" thing? To me it does not feel that much FP. I have (for now) given up writing FP like code in Rust. ML-influence -- Yeah maybe, if I squint a bit.
It does, which is why you need tooling to get out of the way and let you actually learn. Working out obscure tooling commands to build a hello world app then having to grok the error messages absolutely destroys the learning loop.
For rust it took me approximately 3 minutes from scratch to install, bootstrap a project and run the hello world CLI. The rest of the day was spent purely, 100% learning rust. Not Cargo.
20 minutes to install some beginner-level dependencies, presumably with little feedback as to what is going on? Dead.
I'd do the same. It's 2022. There are so many options without this friction, why would you fight your way through it?
If either language had some magic power or library, that'd be one thing, but their only selling point is the FP paradigm, which is only arguably somewhat better than what other languages do. Not only that, but most other languages let you do FP to varying degrees anyway.
The original implementation of Rust was in an ML dialect (I think OCaml?), so from that we know immediately that the original authors were familiar with FP and used it for their own purposes. It seems odd, then, to assume that there would be no influence of their own language.
But if we look at the actual feature set, we find a lot of things that previously belonged almost entirely to the realm of FP. The type system as a whole has a fairly FP feel, plus algebraic data types, exhaustive structural pattern matching, maps and folds, anonymous functions, the use of the unit value for side-effecting work, the functioning of the semicolon operator (which is identical to the OCaml usage)... there's quite a lot, and those were just the examples off the top of my head!
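A small sketch of a few of those features in Rust (the `Shape` type and function names here are made up for illustration):

```rust
// An algebraic data type: a sum of product variants, much like an OCaml variant.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Exhaustive structural pattern matching: the compiler rejects a missing arm.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    // A map and a fold over an iterator, built from a named function.
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { w: 2.0, h: 3.0 },
    ];
    let total: f64 = shapes.iter().map(area).sum();

    // `println!` evaluates to the unit value `()`, just as side-effecting
    // functions do in OCaml; the semicolon discards a value to yield `()`.
    println!("total area = {total}");
}
```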
User experience matters, and developers are ultimately users.
I can absolutely understand abandoning such an effort if one's earliest interactions with the tools and/or ecosystem are very unpleasant.
either get those initial minutes right or lose me
Additionally, most prominent FP projects are old. Both Haskell and OCaml date from a time when UX expectations around language tooling were much lower (think C++). The inertia around those projects never prioritized UX, so now in 2022, when languages like Rust and Go have raised the floor of expectations for PL tooling, Haskell and OCaml struggle to keep up.
Programming is applied mathematics. An assignment is not 'sloppy' as the article posits; it is just another kind of operation that can be done.
A proof is very much like programming, except you are also the parser, the compiler, and the (comparatively slow) computer. Learning to write proofs helps immensely with learning how to program.
We should strive to make our proofs and programs easier to understand, no matter the paradigm.
Visual Studio is great, but if you're not on Windows, your only practical choices are VS Code + Ionide (I was a sponsor for a while; ultimately lost hope), or JetBrains Rider, which is powerful, but heavy.
Comparing my 10+ years focused on C# with ~5 years focused on F#, I was ultimately more productive in F#. But:
1. Tools for refactoring and code navigation were better for C#
2. Testing was more predictable with C#; I often just tested from the CLI with F# (so much love for Expecto, though)
3. Paket and dependency management between projects caused days of pain, especially during onboarding
It’s a great language— maybe my favorite, but the tooling stinks if you’re not using VS. I’m not switching to Windows, so that leaves me in limbo.
Today the Haskell example is just `cabal install --only-dependencies && cabal build`.
I think .NET has got it right. And dotnet-script [https://github.com/dotnet-script/dotnet-script] has been a game-changer for me, with a REPL-like experience for unit testing and writing command-line utilities.
And ATS is pretty hard (unlike C, C++ and Rust). I think it will take a while until linear & dependent type languages will hit mainstream. Rust already succeeded in that regard, so it's a great stepping stone.
When we build something with lambda calculus as its core, you might want to revise that opinion.
Is it really? I agree with the rest of your post, that Rust provides great tooling, but not sure it's "heavily influenced by FP", at least that's not obvious even though I've been mainly writing Rust for the last year or so (together with Clojure).
I mean, go through the "book" again (https://doc.rust-lang.org/book/) and tell me those samples would give you the idea that Rust is a functional language. Even the first "real" example has you mutating a String. Referential transparency would be one of the main points of functional programming in my opinion, and Rust lacks that in most places.
It became less and less ML-like as time went on, but it still has a ton of features it inherited from OCaml and Haskell: variant types, pattern matching, modules, traits (which come directly from type classes), etc.
Mutability is a significant part of Rust, but it's much more sharply curtailed than any non-FP language I've ever seen. To be allowed to mutate something, you have to prove that nothing else holds a reference to it. That means that any code that doesn't use mutability itself can pretend that mutability doesn't exist.
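That rule can be seen in a toy example (names are illustrative); the commented-out line is the kind of mutation the compiler rejects while a shared reference is live:

```rust
// Returns the vector after a shared borrow has ended and a push occurred.
fn demo() -> Vec<i32> {
    let mut scores = vec![10, 20, 30];

    let first = &scores[0]; // shared borrow of `scores`
    // scores.push(40);     // ERROR while `first` is live: cannot borrow
    //                      // `scores` as mutable while also borrowed as immutable
    let _ = *first;         // last use of the shared borrow

    scores.push(40);        // exclusive access is available again, so mutation is fine
    scores
}

fn main() {
    println!("{:?}", demo());
}
```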
Rust is not a "functional language" in that sense, but that was not the claim made, which is that Rust is heavily influenced by FP. This is most clearly seen in the trait system (typeclasses) and iterator patterns.
Influence doesn't mean you're doing exactly the same thing. If I make a rock band influenced by classical music, that doesn't mean I'm doing classical music, but I'm still very obviously influenced by it.
I have gigabit internet and I’m lucky if some package manager can get more than a couple of megabits of throughput.
Most industries would never accept less than 0.5% efficiency, but apparently software developers’ time is just too expensive to ever be “wasted” on frivolous tasks like optimisation.
I kid, I kid. The real problem is that the guy developing the package manager tool has the package host server right next to him. Either the same building or even a dev instance on his own laptop. Zero latency magically makes even crappy serial code run acceptably well.
“I can’t reproduce this issue. Ticket closed, won’t fix.”
Rust is the same; you can even pin a nightly version if you want, so the correct toolchain is run via rustup. It's fantastic, and I can contribute to projects much more easily without worrying about tooling.
that is because there is no such thing.
Even refactoring was easier because types are sometimes left to be inferred and not named everywhere. F#'s weaker type inference even helps with both compile speed and readability, since annotations are needed where they help both the compiler and the reader.
Perhaps on larger projects other things become important, but I got the sense that it's on the devs to name things well, use type annotations where helpful, and otherwise document non-obvious aspects.
Most tooling issues are pretty minor for small apps, it’s once one employs scores of developers that the lack of tooling begins to hurt (and by hurt, I mean cost money).
I tend to agree, but compiling C++ isn't just about typing "make". And it did take me more than one day to figure out the Python/JS workflow.
I think different people have different wants and needs with tooling. I make (and use) binaries with Haskell. I wish more mainstream languages could make binaries.
My rule of thumb is anything that needs state should be in a class, and anything that can be run without state or side effects should be a function. It is even good to have functions that use your classes, as long as there is a way to write them with a reasonable set of arguments. This can let you use those functions to do specific tasks in a functional way, while still internally organizing things reasonably for the problem at hand.
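That rule of thumb might be sketched like this in Rust (all names are hypothetical): state lives in a type, while stateless work is a free function over plain data.

```rust
// State lives in a type...
struct Counter {
    count: u32,
}

impl Counter {
    fn new() -> Self {
        Counter { count: 0 }
    }
    fn bump(&mut self) {
        self.count += 1;
    }
}

// ...while stateless, side-effect-free work is a plain function,
// which can still operate on data produced by the stateful type.
fn describe(count: u32) -> String {
    format!("seen {count} item(s)")
}

fn main() {
    let mut c = Counter::new();
    c.bump();
    c.bump();
    println!("{}", describe(c.count));
}
```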
The minute people start trying to force things to be all functional or all OOP, then you know they've lost the plot.
I have been wanting to learn Lisp for over a decade; I just never get around to it.
But adopting OOP doesn't mean one has to give up on the pure functional paradigm; there's a book that is basically about how you can incorporate more of the functional paradigm into OOP.
One way to look at it is (perhaps trivially) that methods are just functions, and a class is just something with properties. So in a sense a class defines a type, and the methods/functions expect this type.
In Python, I find that properties (and the related cached properties), dataclasses, etc. can really make the above more apparent in construction. To give one example, having a property that returns a pure function acts practically the same as a method (though not in docs, unfortunately, or in autocompletion and the like).
Yet another way of looking at this is to treat a Python class as single dispatch on the first argument only.
I find thinking this way enables me to reap the benefits of both OOP and pure functional paradigm.
The thing about functional programming is that the confidence you get from immutability comes at the cost of increased memory usage thanks to data duplication. It's probably going to create a ceiling in terms of the absolute performance which can be reached.
There are just other ways to solve the problems like NPE's and memory safety, like ADT's and ownership rules, which don't come at the same costs as FP.
If you track ownership in functional languages, you can statically determine if a value is used more than once.
If it’s only used once, you can apply any updates to that value in place without allocating more memory.
This gives the performance benefits of mutability with the safety benefits of immutability, in some common cases.
The main trick is adjusting memory layouts accordingly. You can keep it simple by only applying this optimisation for functions of A -> A, or if you’re replacing it with a different type you can analyze the transformations applied to a value and pre-emptively expand the memory layout.
If a value is likely to be used only once, but might be used multiple times, you can also apply the same approach at runtime by reference counting and updating inplace when there’s only a single reference (for functions of A -> A at least).
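Rust's `Rc::make_mut` is a concrete instance of that runtime version: it mutates in place when the reference count is one, and clones only when the value is actually shared. A small sketch (the function name `demo` is made up):

```rust
use std::rc::Rc;

// Returns snapshots demonstrating clone-on-write behavior.
fn demo() -> (Vec<i32>, Vec<i32>) {
    // Uniquely owned: make_mut updates the Vec in place, no copy is made.
    let mut a = Rc::new(vec![1, 2, 3]);
    Rc::make_mut(&mut a).push(4);

    // Shared (refcount == 2): make_mut clones first, so the other
    // handle keeps its old snapshot untouched.
    let b = Rc::clone(&a);
    let mut c = a;
    Rc::make_mut(&mut c).push(5);

    ((*b).clone(), (*c).clone())
}

fn main() {
    let (b, c) = demo();
    println!("{b:?} {c:?}");
}
```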
I believe the Roc folks are aiming to have aspects of this functionality, and I also believe there’s similar stuff becoming available in Haskell under the guise of linear types.
Finally, if you really need a shared mutable value, that can be achieved with mechanisms like the State type in Haskell.
In short, the pieces are there to create a functional programming language that doesn’t introduce needless memory usage overhead, but I don’t think anyone has put all the pieces together in a convenient and accessible way yet.
As a simplified example: one thread modifies the data, and another thread writes a snapshot of the data.
In the programs I write, it would pretty much never benefit from this optimization.
For example, Dart 2 is null-safe! No one in their right mind would claim Dart is a FP language. Even Java can be null-safe if you use some javac plugin like the Checker Framework.
Also, a language can totally be functional and yet suffer from NPE, like Clojure or Common Lisp, but I suppose the author may be forgiven here because they are talking only about "purely functional programming languages"... (they didn't mention "statically typed" though, but that's clearly implied in the content)...
I believe the author is inadvertently pushing for two things that are pretty much unrelated to FP, even if they are a requirement in most purely-functional languages:
* immutability
* strong, static type systems
I would mostly agree with both (except that local mutability is fine and desired, as anyone trying to implement a proper quicksort will tell you - also, see the Roc language[1], which is purely functional but uses local mutability), but I write my Java/Kotlin/Dart just like that and I wouldn't consider that code purely functional. I don't think whether the code is purely functional actually matters much at all compared to these 2 properties, which can be used in any language regardless of their main paradigm.
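The "local mutability behind a pure interface" point can be sketched in Rust: the function below takes ownership, sorts in place internally, and the caller only ever sees a value in, a value out.

```rust
// Externally a pure function Vec -> Vec; internally it mutates freely.
fn quicksort(mut v: Vec<i32>) -> Vec<i32> {
    // In-place quicksort with a Lomuto partition (pivot = last element).
    fn sort(s: &mut [i32]) {
        if s.len() <= 1 {
            return;
        }
        let pivot = s[s.len() - 1];
        let mut i = 0;
        for j in 0..s.len() - 1 {
            if s[j] <= pivot {
                s.swap(i, j);
                i += 1;
            }
        }
        let last = s.len() - 1;
        s.swap(i, last); // move pivot into its final position
        let (lo, hi) = s.split_at_mut(i);
        sort(lo);
        sort(&mut hi[1..]); // skip the pivot itself
    }
    sort(&mut v);
    v
}

fn main() {
    println!("{:?}", quicksort(vec![3, 1, 4, 1, 5, 9, 2, 6]));
}
```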
The code we work on in the 2020s is much, much more complex than code written 20 years ago. We need better primitives to help our weak and feeble brains deal with this complexity. FP (particularly pure FP) gives us that. It isn't a panacea, but it's a major step in the right direction.
As programmers, our job is not to play with abstractions; it's to move electrons and make hardware do things. We can't afford to abstract away the complexity of the hardware completely. Indeed, the trends in PL popularity over the past 20 years have been to move back closer to the hardware and away from highly abstracted environments like scripting languages and the JVM.
I have noticed a lot more ops complexity and additional library usage, but not complexity in the code I am responsible for.
With strong immutability like in Haskell, you can share values even between threads and can avoid defensive copying. Two versions of the same immutable tree-like data structure can also share part of their representation in memory.
(Haskell has other problems causing increased memory usage, but not related to data duplication in my mind)
I suspect it has something to do with the perceptions around always returning a new thing from a function rather than the mutated input to the function. For example, if you need to mutate the property of an object based on other inputs, the perception is you would clone that object, modify the property appropriately, and then return the cloned object.
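That clone-modify-return shape, sketched in Rust with struct update syntax (the `Config` type and field names are illustrative):

```rust
#[derive(Clone, Debug, PartialEq)]
struct Config {
    retries: u32,
    verbose: bool,
}

// Instead of mutating the input, return a new value derived from it.
fn with_retries(base: &Config, retries: u32) -> Config {
    Config {
        retries,
        ..base.clone() // copy the remaining fields from the original
    }
}

fn main() {
    let base = Config { retries: 3, verbose: false };
    let tuned = with_retries(&base, 10);
    println!("{base:?} -> {tuned:?}"); // original is untouched
}
```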
[Edit: formatting]
Not that there aren’t ways to represent mutability in Haskell, just that the de facto use of immutability causes excess allocation.
Ironically, what you say is true in theory, but not true in the real world.
Source: real world Senior Haskell programmer
It's incredibly frustrating when you work in a functional language, and yet it's the main benefit (no side effects).
I'd like to have a language that is imperative when written and functional when read :)
Depending on the language, monads give you exactly that bridge between imperative and functional.
In your example, you can always choose to have side effects in that deep call.
Personally, I like a type-driven approach. Then, I do not care so much where functions are (they will be grouped logically, but could be anywhere), as long as the type in and the type out matches.
Turning a deep call tree into a flat pipeline is another popular approach and often leads to less complexity and better composability.
Moreover, persistent data structures can be optimized well (see Clojure), so that the performance issues may be relegated to the inner loops, as usual, and the rest of the program may use safe-for-sharing data structures.
The compiler/runtime could optimize such that memory is reused, as long as all other laws are obeyed.
I'm not saying that in-place operations don't have their uses, but to me, these use cases look more and more like niche cases, as opposed to being the default choice as in the past. That is well reflected in new languages like Rust, where a new binding is immutable by default and must specifically be flagged as mutable in order to vary.
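In Rust terms, that default looks like this (the commented-out line shows what the compiler rejects):

```rust
fn demo() -> (i32, i32) {
    let x = 1;     // immutable by default
    // x = 2;      // compile error: cannot assign twice to immutable variable `x`

    let mut y = 1; // mutability must be opted into explicitly
    y += 1;
    (x, y)
}

fn main() {
    println!("{:?}", demo());
}
```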
It means that the benefits of immutability and the performance tradeoffs people are willing to take are evolving. I would assume larger codebases and faster hardware means performance is less valuable, and clarity comes at a premium.
I do agree with you that NPEs and memory safety are unrelated problems, though.
If anything, I would say explicit mutability allows us to decrease data duplication. By being explicit about what's mutable and what's not we don't have to resort to holding "safe copies" to ensure shared memory is not being mutated unexpectedly.
In all my programming years, 20+ years that is, I've met hundreds of programmers, and 95%+ of them handled imperative programming languages just fine, with very few actual bugs coming from each one.
Each time there is such a conversation, I have yet to see some actual concrete proof that functional programming provides a substantial increase in productivity over well-implemented imperative code.
In other words, I am still not convinced about the merits of functional programming over imperative programming. I want some real proof, not anecdotal evidence of the type "we had a mess of code with C++, then we switched to Haskell and our code improved 150%".
Lots and lots of pieces of code that work flawlessly (or almost flawlessly) have been written in plain C, including the operating system that powers most of Earth (I.e. Unix-like operating systems, Windows etc).
So please allow me to be skeptical about the actual amount of advancement functional programming can offer. I just don't see it.
The opinions of programmers who have only used one paradigm are less than worthless, since they are demonstrating a basic lack of curiosity and lack of willingness to invest in their craft.
You can always find excuses not to learn.
My viewpoint, as a functional programmer, is that software is a young field.
We don't know much of anything about it.
We're living in the first 100 years after the Gutenberg printing press.
All I know is NLP code generation will be a dramatic change.
Everything else, we are still figuring out.
The reason why multiple paradigms exist is because here in the real world, the competing issues and constraints are never equal, and never the same.
A big part of engineering is navigating all of the offerings, examining their trade-offs, and figuring out which ones fit best to the system being built in terms of constraints, requirements, interfaces, maintenance, expansion, manpower, etc. You won't get a very optimal solution by sticking to one paradigm at the expense of others.
One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)
> One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion.
This is never really a reason for anything, it's a personal attack on people who advocate the thing that you don't want to do. FP people are not bullying you, or shoving anything down your throat.
1. There's no dichotomy between FP and OOP. Sprinkle it into places where it makes sense to you. Adoption can come by degrees.
2. Just thinking in composable, pure functions on projects buys you serious mileage. You don't have to wrangle monads (or worse, victimize team members with advanced FP ideas). Just KISS.
FP often feels to non-practitioners like a pretentious and byzantine fad. We'll have better luck with widespread adoption if we can be inviting and dispel those negative associations. We'll seem less zealous if we can frame FP as a practice to dip one's toes into, rather than a religion to be submerged & baptized into.
Have you ever worked with an FP evangelist? Every single experience I've had has been with a person who simply won't take "no" for an answer. The impatience, ego, and pettiness are second to none, really.
Aside from that, there are objective reasons to be skeptical: it's inefficient when it comes down to actual implementations that have to work on actual computers and solve actual problems ("but but tail recursion and copy optimizations lolol you moron you just don't understand it"). Also, most FP tends to become much more complex than an equivalent imperative program solving the same problem, usually through a terrible confluence of code density and mazes of type definitions.
FP advocates have a reputation for being zealous and pretentious. The only question is whether they’re right to be.
First, it greatly depends on what "new project" means here. There are myriad new projects within existing codebases for which the existing technologies are the major factor.
When it’s about creating a brand-new product from a blank slate, sure, COBOL won’t be your first idea for a startup. Maybe Ada might come to mind if a high level of reliability is required, as in aeronautics and the like.
It also depends on the team's skills. If you have teammates who are all advanced Haskell practitioners, imposing J2EE for the next big thing might not be the brightest move in terms of crew motivation.
I'm not considering Erlang either.
What's interesting to notice about this thread is how many messages are just oozing with smug superiority and disdain for anyone who doesn't share their knowledge. Yes, some are from genuinely humble and even-handed FP practitioners, but when we look at people who are vocal about FP, this small example shows around 90% of them in the gatekeeper camp. And the punchline is I don't think they can even see what they've become. This is what adds to the insidiousness of it all: they genuinely believe themselves to be helpful and positive.
This is a huge cultural problem, and one that I've seen many times in other communities over the years.
Take Linux, for example. Nowadays, it's trivial to get up and running with Linux, but it wasn't always so. Back in the 90s the installation was tricky, to say the least. The end user tooling was iffy at best and buggy as hell, there were tons of sharp edges, but most frustrating of all were the gatekeepers. People can handle challenges and pitfalls, but nothing quite takes the wind out of their sails like a smug asshole belittling them when they ask for help or express frustration.
Similarly with video codecs. In the 2000s when video codecs were a new thing, they had so many obscure options that almost nobody knew what would produce a good result, and so the gatekeepers came out of the woodwork to hold the sacred knowledge prisoner. They brought us byzantine software like the appropriately named Gordian Knot, and once again the forums were full of smugness, arrogance, and abuse as people despaired over how the hell to rip their DVDs into something viewable.
And it's similar in the Nix community, and countless others I've observed over the years.
In my experience, gatekeeping goes hand-in-hand with poor tooling and educational resources. The worse the UX, the more gatekeepers it attracts (because gatekeeping fulfills a need they have, so they flock to gatekeeping opportunities). Linux used to require an understanding of init scripts and environment variables and bash and crontabs and kernel recompiling and all sorts of esoteric knowledge just to get started. But now with mature tooling, it's easy to get started and then dig deeper at your leisure via the wealth of newbie friendly articles littering the internet.
Elixir is a functional programming language. (It's not pure like Haskell or PureScript, but that's beside the point.)
The Elixir community is absolutely awesome! All are welcome.
There are soooo many resources for people to learn FP; the gatekeeping thing may have been true ten years ago, but I certainly don't think that's the case today.
There's a catch with FP: many of its principles push you to write better code. But going functional isn't mandatory for that, and I always try to explain the importance of those fundamental principles to other people, so they can implement them in whatever style they desire.
But when someone just doesn't want to learn, they always see me with a smug and gatekeeping attitude and I can't help but see them as people that just don't care at all.
That's exactly what worked for OOP—have we collectively forgotten just how hard OOP was pushed everywhere 15–20 years ago? Way more aggressive than any FP advocacy I've seen, and I've seen a lot. When I was learning to program every single book and tutorial pushed OOP as the right way to program; procedural programming was outmoded and functional programming was either not mentioned or confused with procedural programming.
I still have to deal with the fallout from OOP evangelism; I've had colleagues who unironically use "object-oriented" as a synonym for "good programming" and managers who believe design patterns are software engineering.
Eh, I doubt it. I've encountered a few Haskell snobs in my time, but most people that I know who use FP do so completely silently and will discuss the merits and (and challenges) with you freely. I think the real issues with FP are lack of good tooling, package management, and being a major shift in thinking for most developers. It's a common theme for someone to say that learning FP (seriously) was the most impactful thing they've done to improve their software chops.
> The other major impediment is gatekeeping
??? what? I guess you could argue that pure vs non-pure functions are gatekeeping, but there are absolutely legitimate benefits to pure functions that basically everyone can agree on.
And it's not just the statement. It seems to me (from my outside perspective) that category theory is often used in a gatekeeping way.
In contrast, take SQL. How much of the mathematical theory of relations do you need to know to be able to write SQL queries? Yes, it might help, but the SQL gurus don't try to drag you into it every time you get close to a database.
This is a part of it. The other part is that FP is a bit of a mind fuck if you're used to procedural programming. Take a classic example: Haskell. To do anything remotely productive, it's advisable to understand how monads and the various monad design patterns fit together. This difficulty is further compounded by monads having their foundation in category theory, a branch of mathematics. But hey, they're just monoids in the category of endofunctors, it can't be that hard ;) You can rely on `do` syntax a lot, but it really helps to learn how they work.
> I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all.
I get where you're coming from but this is the end state as far as I'm concerned. Pure FP is where we're all headed. I am convinced the more we try to make mutable state, and concurrency safe and correct, the more FP concepts will leech into future languages.
Rust was one of the first steps toward this. The language is procedural, but a lot of the idiomatic ways of doing things are functional at heart. Not to mention most of its syntax constructs return values. You can still mutate state, but it's regulated through the type system, and you can avoid it if you really want to.
Rust's enums, arguably one of its killer features, are algebraic data types, and they're combined the way you would use monads in Haskell. This not only avoids null typing (if you ignore FFI), but also provides a bulletproof framework for handling side effects in programs.
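For instance, Rust's standard `Option` and `Result` enums are ordinary algebraic data types, and chaining them with the `?` operator plays roughly the role monadic bind plays in Haskell (the helper names below are made up):

```rust
// Divide two numbers; every failure path is a value, never a null.
fn safe_div(a: f64, b: f64) -> Option<f64> {
    if b == 0.0 { None } else { Some(a / b) }
}

// Parse two strings and divide them.
fn parse_and_div(a: &str, b: &str) -> Option<f64> {
    let a: f64 = a.parse().ok()?; // `?` short-circuits on None, like bind
    let b: f64 = b.parse().ok()?;
    safe_div(a, b)
}

fn main() {
    println!("{:?}", parse_and_div("6", "3"));
}
```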
I could be totally crazy, but I reckon many years from now, we'll joke about mutating state the same way we joke about de-referencing raw pointers.
> I get where you're coming from but this is the end state as far as I'm concerned. Pure FP is where we're all headed. I am convinced the more we try to make mutable state, and concurrency safe and correct, the more FP concepts will leech into future languages.
Having been around the block to catch a couple of "FP will rule the world" cycles, I can say this is likely untrue. While FP is useful and I personally enjoy it, it is not always the most efficient, nor is it the clearest. This is true in your example of Rust as well. Functional constructs are often slower than their procedural counterparts. For example, having to pass around immutable data structures becomes pretty memory-intensive even with a highly developed GC.
> I could be totally crazy, but I reckon many years from now, we'll joke about mutating state the same way we joke about de-referencing raw pointers.
Dereferencing raw pointers has been joked about as long as I've been in the industry, and was probably joked about in the 70s too. We still dereference pointers today. Similarly, state is a very natural thing to reason about. Sure, you can argue, as many FP fans do, that stateful programs can be rewritten with pure functions. I have yet to see FP code that does this without negatively affecting readability. Even something as simple as a large map/reduce/filter chain can quickly become extremely difficult to debug when compared to a very simple loop.
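For what it's worth, the two styles are mechanically interchangeable; here is the same computation written as a chain and as a loop in Rust (hypothetical function names):

```rust
// Sum of the squares of the even numbers, written two ways.
fn chain(xs: &[i32]) -> i32 {
    xs.iter().filter(|x| *x % 2 == 0).map(|x| x * x).sum()
}

fn looped(xs: &[i32]) -> i32 {
    let mut total = 0;
    for x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    println!("{} {}", chain(&xs), looped(&xs));
}
```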
All this to be said, there's a lot of benefit many languages can take from FP paradigms. Map, reduce, etc. are great examples. Offering immutable state and algebraic data types could also be beneficial, especially in the areas of concurrent and parallel programming. In my professional experience, the problems usually start when you begin talking about pure functions, which in theory are awesome but sometimes don't map well to a problem domain, or become extremely hard for Joe Developer to get used to. Oftentimes I will think in a functional way but rewrite things in a procedural way, because communicating your idea is often just as important as the code you write.
It's not "if you're used to procedural programming" - FP is simply hard for people to reason about. You can often very intuitively reason about imperative programs, while FP always requires putting your abstract thinker cap on. Look at any non-software person describing describing how to do something and see how often that sounds like an imperative program vs an FP one. Hell, most algorithms papers are themselves using imperative pseudo-code, not FP pseudo-code (unless they're specifically describing novel algorithms important for FP of course).
> I am convinced the more we try to make mutable state, and concurrency safe and correct, the more FP concepts will leech into future languages.
Concurrent mutation is fundamentally hard regardless of what paradigm you choose to implement it in. Parallelism can be made safer by FP, but parallelism is actually pretty easy in every paradigm, with even a little bit of care. And concurrent updates of mutable state are just a fundamental requirement of many computer programs - you can try to wrap them in monads to make them more explicit, but you can't eliminate them from the business requirements. The real world is fundamentally stateful*, and much of our software has to interact with that statefulness.
* well, at least it appears so classically - apparently QM is essentially stateless outside its interactions with classical objects (the Born rule), but that's a different discussion.
You can do it but they're much harder than most other languages in the class and not anywhere near representative of the whole class.
My FP journey was: Scheme (different but okay) -> Haskell (wtf) -> Erlang -> Clojure -> Elixir. Nowadays when I reach for an FP it's Clojure or Elixir, mostly based upon the problem at hand.
It's definitely good practice for devs to learn some of the pitfalls that FP prevents and solves, but implementing it on a massive scale front-end application just seems impractical.
Having worked on a large streaming service and considering the author's 3 MONTH struggle after his 40 YEARS experience, I'd estimate that a re-write of our codebase there would have taken our 20 devs over a decade.
What that has done is made a whole generation of developers completely detached from the impact on heap allocation and GC. If web programs are slow and bloated, it's partly because of using nuggets from functional programming just for the sake of it.
I was using Clojure/ClojureScript around the same time. I’ve since worked primarily in TypeScript/JavaScript. I’m sure the FP experience influenced my opinion, but it seems impractical to me not to use FP techniques for a large scale frontend application. The applications I inherited and maintain now, which were certainly not originally implemented with FP techniques, have been gradually becoming much more maintainable as I’m able to eliminate or at least ruthlessly isolate mutations. Not only because the code itself is easier to reason about, but also because it’s easier to reason about dependencies between different parts of the systems—even parts which haven’t changed.
That's true if a team was trying to do so from scratch in js or ts. However React borrows a lot from FP and works at scale. A better example would be Elm.
Today, functional has won so completely that devs don’t even notice. Using classes is almost entirely an antipattern. Factories with object literals and closures reign supreme. Everyone prefers map, filter, reduce, etc. over manual looping. Const and copy-by-default immutability are preferred to mutation. Nobody thinks twice about higher order functions everywhere.
What do you mean?
I think one of the major reasons is because IO doesn't really fit well into the FP paradigm. All the theory and niceties take a second place when you find out that something as simple as "print a line here to console" isn't as simple as you thought it should be.
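One way to see the friction: in a pure setting, "print a line" becomes a value describing the effect, and something impure at the edge has to actually perform it. A toy sketch of that idea (Python, with invented names — not how any particular FP runtime actually works):

```python
# Pure part: computes a *description* of the output, performs nothing.
def greet(name):
    return [("print", f"Hello, {name}!")]

# Impure interpreter, kept at the program's edge: actually runs the effects.
def run(effects):
    for kind, payload in effects:
        if kind == "print":
            print(payload)

run(greet("world"))  # prints: Hello, world!
```

The pure part stays trivially testable (it just returns data); the "isn't as simple as you thought" part is that an extra layer like `run` now has to exist at all.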
And I say that as someone who is very much for “practical FP” — I do think that local mutability is sometimes simply the better fit for a problem.
IMHO, the ideal "future of programming" is like how a lot of game dev has C on the "back-end," and Lua "in-front."
Why one language or paradigm? Mix it up.
The point is to understand that "right tool for the job" and "C++ for everything" are religious positions, too. There might even be a dominant religious position, which likes to style itself as just the disinterested rational viewpoint.
The FP folks might be a somewhat more fervent (crazy mathematical) sect, but there's no getting away from religion, though we can call it by other names (fashion, "best practices", ...).
Pure functions excel at state management and reducing bugs. If there's one reason to make games with a functional language, this is it.
The big issues seem to be deterministic performance and resource usage. Garbage collection, lazy evaluation, etc. all result in bubbles of weird performance. That doesn’t matter for the overwhelming majority of programs, where the human limit on responsiveness is hundreds of milliseconds, but it does matter when things must scale into single-digit milliseconds.
There are functional languages that have this capability, but outside of the partially functional Rust, they are basically unknown.
But yeah, for game development pure FP is probably a bad fit because you need pretty precise control over memory allocation and layout (from what I understand, never actually done any game dev).
Reversible computing.
Ever since I read some stuff Feynman wrote years ago about computing, it seems like quantum computing and/or maximum energy efficiency would be achieved if information was not destroyed.
And I thought functional programming might be a programming paradigm to support this kind of thing.
The amount of time you're afforded for that exercise had better fit neatly inside a very small window of implementation (in other words, you're probably not going to do it, or at least do it justice).
>> figuring out which ones fit best to the system being built in terms of constraints
Constraints will always get ya. It always ends up being whatever tool/paradigm you (and your project manager) are most comfortable with, because you'll most likely use it in lieu of anything else (especially given you are not the only one that has to be convinced -- engineers don't operate in a vacuum, and they love predictability, i.e. what they already know).
>> You won't get a very optimal solution by ...
YAGNI. Premature Optimization. KISS.
I am not saying that ^^^ is true, I'm just introducing you to your new conversation buddies for the foreseeable future. People always bring'em along for the conversation.
>>> trying to talk you into a religion
advocacy among unbelievers is always gonna come off like this, especially when the evangelists have dealt with so many naysayers and the apathetic majority. And this is probably the crux of the entire issue. Students' first languages in school are generally imperative languages. They are taught in their youth a path away from functional programming. Which is funny to me because my school was always promoting that their studies were there to help you grow one's intellect and not necessarily fit into some cog of the industry. But, I don't recall much playtime given to functional languages (not as much as they deserved at least).
My point is that it would be nice if FP was the first tool that was given to students that pour out of universities into developer teams. It would be nice if the easy button for a group was FP, not the other way around. Then it would be much easier to beat the other drums (like the testing drum).
Imho, FP has many tangible weaknesses, just a few off the top of my head:
- Immutability is non-intuitive: If I were to ask someone to make an algorithm that lists all the occurrences of a word on a page, they wouldn't intuitively come up with an algorithm that takes a slice of text and a list of occurrences found so far, then returns a new list with one more occurrence.
- Immutability can cause more problems than solve: If I were to create a graph of friendships between people, chances are that if I were to add a link between A and B, then not only A and B would need to be updated, but everyone who transitively knows A and B (which, since no man is an island would probably mean, everyone). This is highly complex, and probably just as bad as living with mutability.
- FP is not performant: FP code tends to be full of pointers and non-performant data structures like linked lists that have very little memory locality. It also makes optimizations such as updating things in a certain order to avoid recalculation impossible.
- FP has to deal with side effects: The real world has side effects, and your FP code is probably a bit of business logic that responds to an HTTP request and fires off some other request in turn. These things have unavoidable side effects.
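For the word-occurrence example in the first point, the two shapes look roughly like this (a Python sketch with invented names; the "functional" version threads an accumulator instead of mutating a list):

```python
def occurrences_imperative(text, word):
    # The intuitive version: walk the words, mutate a list of hit indices.
    hits = []
    for i, w in enumerate(text.split()):
        if w == word:
            hits.append(i)
    return hits

def occurrences_functional(text, word):
    # The immutable version: recursion threads the list of occurrences
    # found so far, returning a new extended list rather than mutating one.
    def go(words, i, acc):
        if not words:
            return acc
        head, *rest = words
        return go(rest, i + 1, acc + [i] if head == word else acc)
    return go(text.split(), 0, [])

text = "the cat sat on the mat"
assert occurrences_imperative(text, "the") == [0, 4]
assert occurrences_functional(text, "the") == [0, 4]
```

Both compute the same answer; whether the second reads as "non-intuitive" is exactly the disagreement in this thread.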
>>> navigating all of the offerings, examining their trade-offs
>> the amount of time you're afforded for that exercise better fit neatly inside a very small window of implementation (in other words you're probably not going to do it, or at least do it justice)
Every project I have worked on that has done a gap analysis has executed faster and more smoothly than those that didn't. A gap analysis being a more rigorous approach to an analysis of tradeoffs, solution capabilities vs. requirements/existing software, and sometimes scoring/ranking (often with three recommended options depending on the client's choices in tradeoffs).
I love functional programming, but I doubt most companies that sell CRUD apps care about it.
Also: "But many functions have side effects that change the shared global state, giving rise to unexpected consequences. In hardware, that doesn’t happen because the laws of physics curtail what’s possible."
The laws of physics? That's complete waffle. What happens when one device trips a circuit breaker that disables all other devices? What happens when you open the door to let the cat out but the dog gets out as well?
Many compilers warn about potential null problems as well.
> Now, imagine that every time you ran your microwave, your dishwasher’s settings changed from Normal Cycle to Pots and Pans. That, of course, doesn’t happen in the real world, but in software, this kind of thing goes on all the time.
> Let me share an example of how programming is sloppy compared with mathematics. We typically teach new programmers to forget what they learned in math class when they first encounter the statement x = x + 1. In math, this equation has zero solutions. But in most of today’s programming languages, x = x + 1 is not an equation. It is a statement that commands the computer to take the value of x, add one to it, and put it back into a variable called x.
Deja vu! I read the exact same arguments 10 years ago. Maybe if FP did reduce bugs, you'd have some stats and successful projects to back them up.
I worked at a company where FP was heavily used. It didn't magically reduce the number of issues we had to fix. It possibly increased them, because of the number of things we had to build from scratch. The company is default-dead[1] now. Maybe bugs are not a symptom of the paradigm, but of how strongly the systems and teams are architected to prevent them.
> But in most of today’s programming languages, x = x + 1 is not an equation. It is a statement that commands the computer to take the value of x, add one to it, and put it back into a variable called x.
This is on page 1 of every basic programming book when it's explaining how "variable" differs between math class and programming class. I can't for the life of me see what upsets you about it.
It is inherently difficult to develop applications that require state to change in non-deterministic ways in functional languages. In fact, I challenge you to develop a first-person shooter in Haskell (have fun).
There are many types of applications where functional languages are perfect, but there are more where they would be a disaster. Making broad, sweeping claims, as this article does, just encourages unnecessary discourse, and shows the author's ignorance and his limited understanding of the domain of problems that benefit from functional languages, and the larger domain of problems that do not.
As an employer, I don't hire people for their functional programming skills. If they have them, all the better, but we have over 3 million lines of code in C++, and close to a million in Java. We are not starting over, and new projects will leverage existing code.
https://hackage.haskell.org/package/frag
> There are many types of applications where functional languages are perfect, but there are more that it would be a disaster.
Can you give an example or two of an application where a functional language would be a disaster?
> To make broad sweeping claims, such as this article, just encourages unnecessary discourse, and shows the ignorance of the author, and his limited understanding
No, it's an opposing viewpoint to the popular "right tool for the job" and "take the best from functional, best from imperative, and smash them together".
See "The Curse of the Excluded Middle" by Erik Meijer.
> As an employer, I don't hire people for their functional programming skills.
I certainly choose my jobs with language and ecosystem in mind.
Don't know about Haskell but it would be very fun to do it in Lisp.
Performance sold separately.
The immediate value I was able to demonstrate was in testing. Three basic (not too scary) principles can get you a long way:
1. push mutations and side effects as far toward the edges as possible (rather than embedded in every method/procedure)
2. strive for single responsibility functions
3. prefer simple built-in data structures (primarily hashes) over custom objects... at least as much as possible in the inner layers of the system
If these steps are taken, then tests become so much simpler. Most mocking and stubbing needs evaporate, and core logic can be well tested without having to touch the database, the api server, etc. Many of the factories and fixtures go away or at least become much simpler. You get to construct the minimal data structure necessary to feed to a test without caring about all the stuff that you normally would have to populate to satisfy your ORM rules (which should be tested in their own specific tests).
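A minimal sketch of what those three principles buy you (Python, with hypothetical names; the `save`/`notify` callables stand in for the database and API layers that would normally require mocks):

```python
# Pure core: single-responsibility functions over plain dicts.
def apply_discount(order, rate):
    # Plain dict in, new dict out -- no ORM, no shared state touched.
    return {**order, "total": round(order["total"] * (1 - rate), 2)}

def needs_review(order, threshold=1000):
    # Pure predicate: trivially testable with a literal.
    return order["total"] >= threshold

# Thin imperative shell: side effects are injected and kept at the edge.
def process_order(order, save, notify):
    discounted = apply_discount(order, 0.1)
    save(discounted)
    if needs_review(discounted):
        notify(discounted)
    return discounted

# The core logic is testable with a bare literal -- no DB, no mocks:
assert apply_discount({"total": 200.0}, 0.1) == {"total": 180.0}
assert needs_review({"total": 1200.0})

# And the shell needs only trivial stand-ins:
saved = []
process_order({"total": 100.0}, saved.append, lambda o: None)
assert saved == [{"total": 90.0}]
```

The "minimal data structure necessary to feed to a test" from the comment above is literally just `{"total": 200.0}` here.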
Once devs see this, they often warm up quickly to functional programming. Conversely, the quickest way to get an OOPist to double down and reject any FP is to build complex chains of collection operations which build and pass anonymous functions everywhere. Those things can be done where appropriate, but they don't provide as much early bang for the buck... and they likely prevent FP from getting a foot in the door to that team.
The only real downside is that naming things is hard, and good single responsibility practices result in a lot more functions that need good names.
I've found this is actually one of my biggest problems with functional code as currently written: people seem afraid to just declare a struct/record, in lots of cases where it's obviously the right thing.
If everything is a hash, you've just made all arguments optional and now you've invented a bad type system inside your good type system.
If everything is a tuple (more common in my experience, from reading Haskell), now you know (int, int) is actually a pair of ints, but you've thrown away variable names and nominal typing: is it a 2D vector, a pair of indices into an array, or something else entirely?
Defining custom types is the elegant solution to this: you have `struct range { start: int, end: int }` and now your functions can take a range and everything is great.
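In Python terms, the same point sketched out: a bare tuple throws away the meaning, while a tiny named type keeps it at nearly zero cost (names here are invented for illustration).

```python
from typing import NamedTuple

# Anonymous tuple: (int, int) could be a 2D vector, a pair of array
# indices, or anything else -- the reader can't tell.
r_tuple = (3, 9)

# A small nominal type makes the meaning part of the program:
class Range(NamedTuple):
    start: int
    end: int

def length(r: Range) -> int:
    return r.end - r.start

r = Range(start=3, end=9)
assert length(r) == 6
assert r.start == 3  # fields have names, not just positions
```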
I strongly believe that data should live separate from the actions performed on it. I also believe that inheritance is a bad thing as there are other, better means to achieve polymorphism.
I do believe in a data-oriented programming where we waste as few CPU cycles as possible and introduce as few abstractions as possible.
It's pretty refreshing to work this way compared to the design pattern madness you see in enterprise applications - but I guess it's not very safe if multiple people are working on this and some don't know what they are doing.
There has to be some middle ground, I think people have been going way overboard with OOP in the last two decades.
The author seems to have good intentions and covers all the talking points a new convert will discover on their own.
However I'm afraid an article like this will do more harm than good in the end. There are too many network effects in play that go against a new paradigm supplanting the mainstream as it is. And the benefits of functional programming pointed out in this article haven't been convincing over the last... many decades. Without large, industry success stories to back it up I'm afraid any amount of evangelism, however good the intention of the author, is going to fall before skeptical minds.
It doesn't help that of the few empirical studies done none have shown any impressive results that hold up these claims. Granted those studies are few and far between and inconclusive at best but that won't stop skeptics from using them as ammunition.
For me the real power of functional programming is that I can use mathematical reasoning on my programs and get results. It's just the way my brain works. I don't think it's superior or better than procedural, imperative programming. And heck there are some problem domains where I can't get away from thinking in a non-functional programming way.
I think the leap to structured programming was an event that is probably only going to happen once in our industry. Aside from advances in multi-core programming, which we've barely seen in the last couple of decades, I wouldn't hold out for functional programming to be the future of the mainstream. What does seem to be happening is that developments in pure functional programming are making their way to the entrenched, imperative, procedural programming languages of the world.
A good talk, Why Isn't Functional Programming the Norm?
When I was going through Functional Programming classes in Haskell, the teacher tried to separate total programming and functional programming.
For instance, Rust programs rarely use function composition compared to Haskell. He didn't consider Rust a very good functional programming language for that very reason. But at the same time, Rust has good total programming tools like exhaustiveness checking, Option, Result, etc.
Does anyone else try to separate functional programming and total programming?
- All programs are defined for all inputs (exhaustive)
- All programs terminate/coterminate
The former is becoming more common (like your Rust example), but the latter isn't very widespread. For example, most would consider a function like this to be exhaustive, even though it loops forever when both Ints are non-zero:
    foo : (Int, Int) -> Int
    foo (0, y) = y
    foo (x, y) = foo (y, x)
Proving termination is hard, and often undesirable; e.g. we want servers and operating systems to keep running indefinitely. However, co-termination can be quite easy, e.g. if we define a Delay (co)datatype:

    data Delay t where
      Now   : t -> Delay t
      Later : Delay t -> Delay t
Wrapping recursive calls in 'Later' allows infinite processes, at the cost of some boilerplate (Delay is a monad, if you know what that is):

    foo : (Int, Int) -> Delay Int
    foo (0, y) = Now y
    foo (x, y) = Later (foo (y, x))

Pure functional code, where everything is function composition, has more hope of producing a valid mathematical proof for a block of code. CS/math will favor this over the aspects that get grouped into total programming, which often don't help provability.
"Remember a real engineer doesn't want just a religion about how to solve a problem, like object-oriented or functional or imperative or logic programming. This piece of the problem wants to be a functional program, this piece of the program wants to be imperative, this piece wants to be object-oriented, and guess what, this piece wants to be logic feed based and they all want to work together usefully. And not because of the way the problem is structured, whatever it is. I don't want to think that there's any correct answer to any of those questions. It would be awful bad writing a device driver in a functional language. It would be awfully bad writing anything like a symbolic manipulator in a thing with complicated syntax."
Likewise, a language has a limited amount of space for syntax before you end up with a mess. So languages adopt a paradigm, and optimize the language for that paradigm. Functional-style Java is bloated to hell, because Java is built for object-oriented programming.
Frankly, I'm of the opposite mind of the quote: the middle ground is often worse than either extreme. I'll happily write Object Oriented C#, or Functional Haskell, over a tepid mess of C++.
Out of the Tar Pit - http://curtclifton.net/papers/MoseleyMarks06a.pdf
I agree that functional programming is part of the future. I believe that the relational model is the other part. In this space, imperative programming exists primarily to bootstrap a given FRP domain.
We've built a functional-relational programming approach on top of SQLite using this paper as inspiration. Been using this kind of stuff in production for ~3 years now.
Remember - Your user/application defined functions in SQL do not need to be pure. You can expose your entire domain to SQL and build complete applications there, with the domain data & relational model serving as first-class citizens. With special SQL functions like "exec_sql()" and storing your business rule scripts in the very same database, you can build elegant systems that can be 100% encapsulated within a single .db file.
I don't know if that was intentional but I'm upvoting!
Similar to other takes I've seen in this thread. But isn't it flawed to talk about being "capable of object-oriented programming" when object-oriented programming is itself an ill-defined, flawed idea? (I'm talking C++/Java/C#/textbook OO, not Smalltalk.) I spend a majority of my time in C#, and I'm not really sure I'm doing object-oriented programming either. Learning curves are never linear, but if it were, the curve for OO in C# would look something like:
Level 0: God-classes, god-methods. Puts the entire program in the "Main" method.
Level 1: Most of the logic is in "Main" or other static methods, with some working, mutable data stored in classes. No inheritance.
Level 2: Logic and state are starting to get distributed between classes, but lumpily -- some classes are thousands of lines long and others are anemic. Inheritance is used, badly, as a way to avoid copy/pasting code. Short-sighted inheritance based on superficial similarities, like "Dog : Animal". No clear separation of responsibilities, but "private" is starting to make an appearance. If design patterns are here, they're used arbitrarily. Still lots of mutability around. This is "OO Programming" as taught in early textbooks. It's bad.
Level 3: Methods and classes are starting to ask for contracts/abstractions instead of implementations. Inheritance hierarchies are getting smaller, include abstract classes, and are starting to be organized by need and functionality, rather than by superficial similarity; things like "TextNode : Node". Classes are clearly articulating their public surface vs. private details, with logic behind which is which. Generics are used, but mostly just with the built-in libraries (e.g. IEnumerable<T>). Design patterns are used correctly. Mutability is still everywhere. If interfaces are used, it's in that superficial enterprisey way that makes people hate interfaces: "Foo : IFoo", "Bar : IBar", for no discernable reason. This is "OO Programming" as taught in higher-level textbooks.
Level 4: No more "Dog : Animal". If inheritance is used at all, it's 1 layer deep (not counting Object), and the top layer is abstract. Code de-duplication is done via composition, not inheritance. Fluent/LINQ methods like .Select() [map] and .Where() [filter] have mostly replaced explicit loops. A large percentage of the code is "library" code -- new data structures and services for downstream use. Generics are everywhere, and not just with standard-library classes. Interfaces are defined by the needs of their consumers, not by their implementations -- you may not even see an implementation of an interface in the same project it's defined (this is a code-fragrance; a good smell!). Liberal use of Func<> and Action<> has eliminated almost all of the explicit design patterns and superficial inheritance that used to exist. Mutable state is starting to be contained and managed, perhaps via reactive programming or by limiting the sharing of mutable objects. This doesn't look much like OO as taught in textbooks.
Level 5: Almost all code is library ("upstream") code, with a clear, acyclic dependency graph. Inheritance is virtually absent; an abstract class may show up occasionally, but only because it hasn't been replaced with something better yet. Most code is declarative using fluent/functional-style methods on immutable data structures, like .Select() and .Where(). Where Level 4 may have abandoned that style at the limit of the "out of the box" data structures, Level 5 just writes their own immutable fluent/functional data structures when they need to. This means heavy use of interfaces, Func, and generics, including co- and contra-variance. It also means adapting ideas from the functional world, such as Monoids and the "monadic style" (but not an explicit Monad type, both due to the lack of higher-kinded types and due to the fact that Monad is a red-herring abstraction that is not useful on its own). Most code looks like it's written in a mini domain-specific language, whose output is not a result, but a plan (i.e. lots of lazy evaluation, but with sensible domain-specific data structures, not with raw language elements like LISP). Data is largely organized via relational concepts (see: Out of the Tar Pit), regardless of the underlying storage layer. Identity and state are separated. Data and function blend seamlessly. Mutability is almost exclusively relegated to the internals of an algorithm, mostly in said data structures. Virtually no mutable state is shared unless it's intrinsically necessary. If it is necessary, it's tightly controlled via reactive programming or something similar. A few performance-critical loops look almost like C, with their own memory models, bit twiddling, and other optimizations, but these are completely internal, private details, well commented and thoroughly tested. This looks nothing like OO Programming as taught in textbooks. It looks a lot more like functional programming (with some procedural sprinkled in) than OO.
If there's a Level 6, I'm not there yet, nor have I seen it (or known what I was looking at if I did).
So when I see someone say "programmers aren't doing OO programming", I don't know what that means. Only Levels 2 and 3 above look much like "object-oriented programming". If nobody told you C# was supposedly an "object-oriented language", and all you saw was Level 5 code, would you know OO was supposed to be the overriding paradigm?
Are people avoiding OO programming because they can't do it, or because they evolved past it? To someone stuck at Level 3, Level 5 code might look unnecessary, overly complicated, whatever. It might look like code written by someone who doesn't know how to do OO.
This is demonstrably false, as C has always had a goto and its use by custom and in practice is greatly circumscribed.
Why did the GOTO statement fall out of favor with programmers? If you look at Knuth's famous article weighing the importance of GOTO (https://pic.plover.com/knuth-GOTO.pdf) you can see many calculations where the GOTO statement can save you a tiny bit of runtime. Today, these matter far less than all of the other optimizations that your compiler can do (e.g. loop unrolling, inserting SIMD instructions, etc.). Similarly, in some domains the optimizations that functional compilers can do matter more than the memory savings mutation could bring.
Personally, I believe that within the next decades memory usage will matter more, but even then functional programming languages can do well if they can mutate values that only they reference (https://www.microsoft.com/en-us/research/uploads/prod/2020/1...). This does not break the benefits of immutability, as other program parts cannot observe this mutation.
I disagree with the article's premise that it is "hard to learn"... it might be today, but it doesn't have to be. Monads are usually difficult for beginners, but algebraic effects are almost as powerful while being much simpler. They have slowly become mainstream (and might even make it into WASM!). It is an exciting time for functional languages, and many people are working to make them even better!
I don't know why that would be controversial. There's a very clear distinction between (MLs, OCaml, Scala, F#) and (Haskell, Elm, PureScript, etc.).
"Functional programming also requires that data be immutable" Not true
Functional programming is great, but it's far from optimal in many situations. For example, implement a self-balancing binary search tree using a common imperative language with mutability. Then try implementing it again but using pure functional programming. Certainly very possible but also requires a lot more work when you're not allowed mutability.
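For contrast, here is what an immutable ("persistent") BST insert can look like, sketched in Python. It copies only the nodes on the path to the insertion point and shares the rest; a production version would also rebalance (e.g. maintain red-black invariants), which is exactly the extra work this comment alludes to and which this sketch omits.

```python
from typing import NamedTuple, Optional

class Node(NamedTuple):
    key: int
    left: Optional["Node"]
    right: Optional["Node"]

def insert(t: Optional[Node], key: int) -> Node:
    # Returns a NEW tree; the input tree is never modified.
    if t is None:
        return Node(key, None, None)
    if key < t.key:
        return Node(t.key, insert(t.left, key), t.right)   # copy path, share right
    if key > t.key:
        return Node(t.key, t.left, insert(t.right, key))   # copy path, share left
    return t  # key already present: the old tree is reused as-is

t1 = insert(insert(insert(None, 5), 3), 8)
t2 = insert(t1, 4)                 # t1 is completely untouched
assert t2.right is t1.right        # untouched subtree is shared, not copied
assert t1.left.right is None       # the old version still sees the old shape
assert t2.left.right.key == 4      # the new version sees the insertion
```

The path-copying itself is mechanical; keeping it balanced without mutation is where the "lot more work" comes in.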
I'd argue that FP is actually great for working with tree-like data structures. For example, the common implementations for sets and maps are based on balanced trees. IME it's graph-like or array-based stuff where the paradigm struggles.
Without the safety of having that encoded in the type system.
Many other algorithms are much harder, though, especially those requiring fast array indexing (graph search, hash tables, ...)
Interested why this is downvoted... Hashmaps can have O(1) time complexity for lookups and inserts in the imperative world; in pure FP, either lookup or insert can be no better than O(log n).
A number of years ago, we worked with a startup that was based around a new FP language, focused on image processing pipelines[0]. It’s actually quite cool. We came from a C++ background.
Learning the language was difficult, but our team was very capable, and very experienced. We did it.
But it was just too limited, and the advantages never appeared for us. We were doing it for an embedded implementation.
It was a really neat experience, but ended up as a failure. I am sorry for that, as I actually thought they had the right idea, and I think that management failure was as much to blame as technical hurdles. The language had many limitations, but we were still able to work with it. That’s what you get, with a highly capable team. The startup we worked with, had some real rockstars.
These days, I program in Swift, which has many FP features. I enjoy it.
Nonetheless, I think that many of these “new paradigms” are built around the premise that most programmers suck, and need to be forced to write good code, which never seems to work.
Companies seem to be desperate to hire crappy engineers, and get them to write good code, as opposed to hiring decent engineers, in the first place, who can write good code, regardless of the tools.
I think FP came and showed everyone how to design expressive types, how to define flatMap on more than arrays, and that’s it. It turns out you don’t need Haskell, you can incorporate those features in imperative languages like Rust and Swift.
Written about Scala, but applies to any language: https://www.lihaoyi.com/post/StrategicScalaStylePracticalTyp...
In doing this, program organization starts to change in interesting ways. Modeling error states as regular data cleans up a lot of complexity.
Dependent types allow a lot more type safety (e.g. a shader program type parameterized by a description of its uniform variables, getting rid of `INVALID_OPERATION` on a wrong uniform location/type).
> you can incorporate those features in imperative languages like Rust and Swift
Incorporating dependent types into imperative languages with unrestricted effects is hard (impossible?).
The article doesn't even mention OCaml, the second most popular ML-derived language on Github. Makes me suspect the article is not very well researched.
As a fun challenge, look at the definition of pretty much any fractal. It is a mathematical construct that is almost certainly not equivalent to what most "functional programming" environments let you do. Indeed, most definitions are imperative by nature, that I recall, and yet they work remarkably well.
Really, anything from the book Turtle Geometry would have a challenging time in a lot of functional languages. Which is not to say that most functional languages are bad; they just don't usually even try to abstract over the graphical. I hate that folks see how well they abstract over functions and assume that is all programming is.
https://github.com/sergv/turtle-geometry
is an implementation of the book Turtle Geometry in Scheme, a Lisp dialect.
> Which is not to say that most functional languages are bad; they just don't usually even try to abstract over the graphical. I hate that folks see how well they abstract over functions and assume that is all programming is.
There is an entire section of SICP dedicated to graphical abstraction using functions and function composition.
It’s hard to learn
Which is refreshing to see just stated up front: FP is for smart people who have some motivation to learn something hard, even when there's a whole world of alternatives that are not "hard" to learn. In this writer's case, it appears they own a company and have mandated that everything be written in Haskell or PureScript, which will select for employees who are willing/able to do that, etc. As long as humans are employed to write the code directly, "hard to learn" is a non-starter for being the "future".
> "Nearly all modern programming languages have some form of null references, shared global state, and functions with side effects..."
Which is to say, code is organized into discrete classes, instantiated as objects, but those objects only use the functional paradigm with respect to their bound functions, i.e. no side effects, no shared global state. Some sort of input validation and screening can be used with each to sanitize values and avoid null references. Then you have a collection of discrete modular elements which can be reasoned about or debugged independently.
Such classes would be essentially 'stateless' but you could have other classes that stored mutable state and were queried by the functional types, much like the application-database model:
> "The trend has been to keep stateless application logic separate from state management (databases): not putting application logic in the database and not putting persistent state in the application. As people in the functional programming community like to joke, “We believe in the separation of Church and state”"
https://ebrary.net/65011/computer_science/separation_applica...
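A sketch of the pattern described above, with illustrative class names: a 'stateless' class of pure, side-effect-free logic, and a separate class that owns the mutable state and is queried by it, mirroring the application/database split.

```typescript
// 'Stateless' functional-style class: pure logic only.
class CartPricing {
  // Pure: output depends only on inputs; no shared or hidden state.
  total(prices: number[], taxRate: number): number {
    const subtotal = prices.reduce((sum, p) => sum + p, 0);
    return subtotal * (1 + taxRate);
  }
}

// All mutation lives here, behind a narrow interface (the "state" side).
class CartStore {
  private prices: number[] = [];
  add(price: number): void { this.prices.push(price); }
  snapshot(): number[] { return [...this.prices]; } // defensive copy
}

const store = new CartStore();
store.add(10);
store.add(20);
const total = new CartPricing().total(store.snapshot(), 0.1);
// total ≈ 33 (30 * 1.1)
```

Each half can be reasoned about and tested independently: the pricing logic with plain arrays, the store with simple add/snapshot checks.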
It's too complex and loses a lot:
> Contemporary imperative languages could continue the ongoing trend, embrace closures, and try to limit mutation and other side effects. Unfortunately, just as "mostly secure" does not work, "mostly functional" does not work either. Instead, developers should seriously consider a completely fundamentalist option as well: embrace pure lazy functional programming with all effects explicitly surfaced in the type system using monads.
https://m-cacm.acm.org/magazines/2014/6/175179-the-curse-of-...
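To make the quote concrete, here is a toy sketch (not a real library) of what "effects explicitly surfaced in the type system" looks like: an `IO<A>` is a lazy *description* of an effect, and nothing actually happens until it is run at the edge of the program.

```typescript
// Toy IO type: composing values of this type performs no effects.
class IO<A> {
  constructor(private readonly thunk: () => A) {}
  static of<A>(a: A): IO<A> { return new IO(() => a); }
  map<B>(f: (a: A) => B): IO<B> { return new IO(() => f(this.thunk())); }
  flatMap<B>(f: (a: A) => IO<B>): IO<B> {
    return new IO(() => f(this.thunk()).run());
  }
  run(): A { return this.thunk(); } // effects happen only here
}

const log: string[] = [];
const putLine = (s: string): IO<void> => new IO(() => { log.push(s); });

// Building the program is pure: log is still empty here...
const program = putLine("hello").flatMap(() => putLine("world"));
// ...until run() is called.
program.run();
// log is now ["hello", "world"]
```

This is roughly the discipline Haskell enforces everywhere; the TypeScript version is opt-in, which is exactly the "mostly functional" compromise the article argues against.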
Optional/Maybe values are tedious if we don't have functions/methods like map, flatMap, etc. which take functions as arguments. That requires first-class functions, which pushes things further in the direction of FP.
I think of things on a spectrum, e.g. more-functional/less-functional, rather than having a hard cutoff of "this is FP" or "this isn't FP".
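To illustrate the point about first-class functions: without a map-like helper, every step over a nullable value needs an explicit check; with one, the checks collapse into a single reusable function. (`mapNullable` is an illustrative helper, not a standard API.)

```typescript
// Map over a possibly-null value; the null check lives in one place.
function mapNullable<T, U>(v: T | null, f: (t: T) => U): U | null {
  return v === null ? null : f(v);
}

const trimmedLength = (s: string | null): number | null =>
  mapNullable(s, (x) => x.trim().length);

// The tedious version, repeated at every call site:
const trimmedLengthManual = (s: string | null): number | null => {
  if (s === null) return null;
  return s.trim().length;
};
```

The helper only works because functions are values that can be passed around, which is the nudge toward FP described above.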
In my experience from university CS education and later on in industry, a quite large group of students and engineers never grasp the functional way of thinking. It doesn't just take them longer; it doesn't happen at all. For this reason I'm skeptical that FP will ever be able to replace imperative (and object-oriented) programming.
It's funny, I'm calling the cover effect (I forget the exact idiom, but basically once the big media outlets put you up, your uptick is over) on simple FP. I think the mainstream absorbed most of the idioms (map/filter/reduce, decorators, composability, lazy streams, etc.) and there's nothing else in that bag to push.
That said, I do believe that FP as an abstract multi-stage modeling language is still gonna help in the future, because it raises provability, which is something I personally miss every day in most mainstream languages. You're never too sure about anything, and it's tiresome.
- Not allowing null references can be done with any paradigm. For example, there is no reason an object-oriented language couldn't disallow null references.
- Immutability can also be done with every paradigm. The Java String for example is both object-oriented and immutable.
I think the paradigm is actually irrelevant. The real advantage is not gained by using a functional programming language. It is gained by using a language that prevents null-references and makes it easy to write and use immutable data structures.
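To illustrate the Java String point above: immutability is a design choice available in any paradigm. A perfectly ordinary class can be immutable by exposing only `readonly` fields and returning new instances instead of mutating (illustrative example):

```typescript
// An immutable OO class: no setters, fields are readonly.
class Point {
  constructor(public readonly x: number, public readonly y: number) {}
  // "Mutations" return new instances instead of modifying this one.
  translate(dx: number, dy: number): Point {
    return new Point(this.x + dx, this.y + dy);
  }
}

const p = new Point(1, 2);
const q = p.translate(3, 4);
// p is unchanged: (1, 2); q is (4, 6)
```

Nothing here is functional-language-specific; it is the same pattern `java.lang.String` uses.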
I can understand wanting to focus on your preferred FP subdisciplines (statically typed purely functional languages), but it seems like eliding any mention of this will confuse the readership, since IEEE Spectrum is targeted at a general engineering audience.
> To reap the full benefits of pure functional programming languages, you can't compromise. You need to use languages that were designed with these principles from the start.
Yes and no. F# was designed with functional principles, but you can compromise and you don't have to write 100% functional code.
Link: https://stackoverflow.blog/2020/09/02/if-everyone-hates-it-w...
There's nothing stopping you from writing pure functional code in JavaScript or TypeScript.
- Write pure functions
- Postpone side effects until necessary (as in, set up the side effect in a way that the side effect's inputs are testable)
- Return things when possible, in order to increase the expressiveness of your code
It's also important to not fight the language you're working in. If you're constantly breaking idioms and your teammates can't read your code, FP isn't providing any benefit.
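The guidelines above can be sketched in plain TypeScript: a pure core that computes *what* to do and returns it as data, and a thin impure shell that actually does it. (All names here are illustrative.)

```typescript
type Notification = { to: string; body: string };

// Pure function: decides which notifications to send, returned as data.
function overdueNotices(
  accounts: { email: string; daysOverdue: number }[]
): Notification[] {
  return accounts
    .filter((a) => a.daysOverdue > 30)
    .map((a) => ({ to: a.email, body: `Payment is ${a.daysOverdue} days overdue` }));
}

// Impure shell: the side effect is postponed to the edge, where its
// inputs (the Notification list) are easy to inspect and test.
function send(notes: Notification[], transport: (n: Notification) => void): void {
  notes.forEach(transport);
}

const notes = overdueNotices([
  { email: "a@example.com", daysOverdue: 45 },
  { email: "b@example.com", daysOverdue: 5 },
]);
// notes contains one notification, for a@example.com
```

Note this breaks no JavaScript idioms: it is just functions, arrays, and `filter`/`map`, so teammates can still read it.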
https://m-cacm.acm.org/magazines/2014/6/175179-the-curse-of-...
Good OOP is only found in a very small number of libraries.
I don't want "reusable OOP code", I want functions that return data. Side effects are impossible to keep track of.
Of course you can't use FP everywhere, but OOP should not be the default.
A language that can do either is exactly the wrong thing, from the perspective of TFA.
like, really?
Keep your programs nicely structured and easy to follow. No code "architecture" nonsense, please.
Rust supports the best traits of both Functional and Imperative languages.
Ergo: The future is Rust. Rust is the future.
Nope: LISP - 1958.
Stopped reading there.
How do I know? A thought leader told me so!
Nothing against functional programming, which certainly has its uses and advantages, but this article is basically substance-free.
I guess it's really an ad for the author's company and book? Considering the sheer number of things conflated with functional programming here, I'm not sure it would be worth the time.
― Alan W. Watts
For example, there are hundreds of glowing articles written about Lisp, making all kinds of amazing claims about the superiority of the language and the superior intelligence of the people using it. Yet very little commercially successful software is written in Lisp, which makes those claims laughable and not convincing at all.
Less talking more walking!