Is

    let persons = names
        .map(Person.init)
        .filter { $0.isValid }

easier to read than this?

    var persons: [Person] = []
    for name in names {
        let person = Person(name: name)
        if person.isValid {
            persons.append(person)
        }
    }
I understand and appreciate the value of compact code, but I find the first one harder to read. A lot of inferred/token-based coding is harder for me to mentally parse.

With a little experience, functional programming is quite straightforward. Practically everything comes down to map, filter, and fold, or whatever they're called in a given language.
So the first example looks to me like two loops (hopefully the compiler does a better job on that), while the second one is obviously one loop.
I am already trying to guess how a compiler would parallelize that...
So what do you actually have to do to make this actually run in parallel? Or do you truly get it automatically?
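For what it's worth, neither fusion nor parallelism comes for free in general; it depends on the language. A hedged sketch in Python rather than Swift (`make_person` is a made-up stand-in for `Person.init`): chaining through a lazy `map` gives one fused pass rather than two loops, while parallelism is something you have to opt into explicitly.

```python
from concurrent.futures import ThreadPoolExecutor

names = ["ada", "", "grace"]

def make_person(name):
    # Stand-in for Person(name:); a person is "valid" iff the name is non-empty.
    return {"name": name, "valid": bool(name)}

# map() is lazy in Python 3: this is one fused pass over names,
# not two loops with an intermediate array.
persons = [p for p in map(make_person, names) if p["valid"]]

# Parallelism is not automatic; you have to ask for it explicitly.
with ThreadPoolExecutor() as pool:
    persons_parallel = [p for p in pool.map(make_person, names) if p["valid"]]
```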
The second one takes a lot more work. First, I have to read through the code and recognize that this is a loop that accumulates values into a new array. Then I have to pick it apart and see exactly where the accumulation takes place, and how that value is derived. It's not hard by any means, but the first one is far more straightforward.
But the second one is more debuggable than the first, which I think is even more important than readability.
In the first case, you need to rewrite the control structure to even be able to inspect anything:
    let allPersons = names.map(Person.init)
    {log allPersons[0].name}
    {breakpoint}
    let persons = allPersons.filter { $0.isValid }
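Even without special tooling, the same trick works by simply naming the intermediate. A Python sketch of the idea (`make_person` is a stand-in I made up, not the thread's actual code):

```python
names = ["ada", "", "grace"]

def make_person(name):
    # Stand-in for Person(name:); valid iff the name is non-empty.
    return {"name": name, "valid": bool(name)}

# Naming the intermediate makes it inspectable: you can print it here
# or set a breakpoint on the next line.
all_persons = list(map(make_person, names))
print(all_persons[0]["name"])
persons = [p for p in all_persons if p["valid"]]
```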
There are lots of data structures in this style of programming that don't have any names. Who knows what kind of data structures map and filter create in order to do their work. Are we allocating 2 arrays? 1? None? In the procedural style, everything that exists in memory has a name.

It's also totally unclear what the order of operations is. Are Person.init and $0.isValid alternated? Is the `map` run in full before the `filter` starts? No way to know.
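The order question can at least be settled empirically. A Python sketch (instrumented stand-ins of my own, not Swift semantics): an eager pipeline runs every map call before any filter call, while a lazy one alternates them per element.

```python
order = []

def convert(x):
    # Stand-in for Person.init, recording when it runs.
    order.append(("map", x))
    return x

def keep(x):
    # Stand-in for $0.isValid, recording when it runs.
    order.append(("filter", x))
    return True

# Eager: materialize the whole mapped list first, then filter it.
order.clear()
[x for x in [convert(n) for n in [1, 2]] if keep(x)]
eager_order = list(order)

# Lazy: a generator interleaves map and filter per element.
order.clear()
[x for x in (convert(n) for n in [1, 2]) if keep(x)]
lazy_order = list(order)
```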
People talk shit about procedural programming, as if it's antiquated. But the core promise of functional programming—that you can stop thinking about the underlying procedures—never seems to fully pan out. So when you inevitably need to start digging under the hood to figure out what your declarative code actually means in practice, you end up having to think procedurally anyway. Now you're thinking procedurally, but your code is declarative, and the runtime is trying as hard as it can to prevent you from knowing exactly what's happening moment to moment.
I think there are specific cases where a declarative interface is the right abstraction. CSS is a good example. But these are narrowly defined domains with relatively clear semantics that get frequent use, so the time it takes to learn the semantics will pay off.
The idea of littering your entire codebase thickly with declarative APIs, each of which has unique control structures that must be understood in order to read code, is not a good approach in my opinion.
This is what Rails is, and it creates a situation where you are captive to your tools: you can do a lot very easily, but you cannot stray far beyond the declarations that your library author overlords have chosen for you, or you quickly find yourself in a space where, to avoid shooting yourself in the foot, you need to have a huge body of internals in your head.
The first is less likely to require debugging in the first place.
> There are lots of data structures in this style of programming that don't have any names.
So you can only reason about things that have names? Now we know where idiomatic Java comes from.
> Who knows what kind of data structures map and filter create in order to do their work.
In most reasonable implementations, the only data structure being created is the final result (a functorial value in map's case, a sequence in filter's case). For example, in SML:
    fun map _ nil = nil
      | map f (x :: xs) = f x :: map f xs

    fun filter _ nil = nil
      | filter p (x :: xs) =
          if p x then x :: filter p xs
          else filter p xs
> But the core promise of functional programming—that you can stop thinking about the underlying procedures—never seems to fully pan out.

Functional programming doesn't promise freedom from procedures. It promises (and delivers) freedom from physical object identities when you only care about logical values.
---
@banachtarski:
Code that's likely to require debugging (say, because it implements tricky algorithms) should be isolated from the rest anyway, regardless of whether your program is written in a functional style or not. Say, in Haskell:
Bad:

    filter (\x -> tricky_logic_1) $
        map (\x -> tricky_logic_2) $ xs

Good:

    -- Now trickyFunction1 and trickyFunction2 can be
    -- tested in isolation. Or whatever.
    trickyFunction1 x = ...
    trickyFunction2 x = ...

    filter trickyFunction1 (map trickyFunction2 xs)

    fs = [f(x) for x in xs]
The other commenter's points about being cleaner and less prone to debugging are totally legit. If we can make e.g. list/set comprehensions debuggable, we can probably make other FP idioms debuggable and get the best of both worlds.

Kinda reminds me of microservices, actually. Tough to debug, but good in other ways.
Five years ago, I would have found the second form easier to read. These days, not only do I find the first form much clearer but I find the second one a bit smelly because it mutates data.
It's actually surprising how quickly you get used to the first form of code once the language you use supports it.
In this case, as long as append is O(1), I think the imperative version has a big benefit: it avoids building the intermediate list of persons the size of names. If you've got a billion names and 2 valid person objects, the imperative version is a big win. Of course, that predicate I mentioned is the right way to go, though.
I think the right way to do it is with a fold. But that's not built in, so hard to expect of novices.
I see what you're saying about mutation; I guess I have a higher tolerance for mutating stuff that you have the only reference to. I'm not really sure doing a += [validPerson] is much of a win (but I think it would be the right answer in a fold).
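The fold version being discussed, sketched in Python with functools.reduce (`make_person` is a made-up stand-in): the accumulator is extended only for valid persons, in one pass, without mutating any shared state.

```python
from functools import reduce

names = ["ada", "", "grace"]

def make_person(name):
    # Stand-in for Person(name:); valid iff the name is non-empty.
    return {"name": name, "valid": bool(name)}

def step(acc, name):
    person = make_person(name)
    # The `a += [validPerson]` idea, but returning a fresh list each step.
    return acc + [person] if person["valid"] else acc

persons = reduce(step, names, [])
```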
It helps, for me, that I have more of a math background (academically) than CS (dual majors, but strongly preferred the math coursework).
For me, I see three ways to write, as an example, a summation:

     n
     Σ g(i)
    i=1

vs.

    seq(1, n).map(g).sum() // or something similar

vs.

    for (i = 1; i <= n; i++)
        sum += g(i);
In my mind, the first is what I see, and is what shows up on paper. The second is what I want to write (and will in any language that lets me). The third is what I often have to write when a language requires that I be more explicit (like C).

    (1...n).reduce(0) { $0 + g($1) }

    val persons = names
      .map { name => Person(name = name) }
      .filter { person => person.isValid }

    val persons =
      names.map(Person.apply).
        filter(_.isValid)

    let persons = [ p | n <- names
                      , p <- peopleNamed n
                      , isValid p ]

is even better, if your programming language has comprehensions.

    filter isValid (map peopleNamed names)

At the moment I'm working in Objective-C, and I literally just wrote a loop to filter an array and form a new array with the results, like the one in the example - and god I wish Obj-C had a functional way to do it as straightforward and easily understandable as the Swift example.
I miss these 'nice' things - I don't need the fully functional Haskell package, but I like having some of the nice things Swift has, just because I got used to them and can actually write and read them better, and it is definitely more elegant!
    NSArray *filteredArray = [array filteredArrayUsingPredicate:
        [NSPredicate predicateWithBlock:^BOOL(id object, NSDictionary *bindings) {
            return [object test];
        }]];

    a := #(1 2 3 4 5).
    Transcript show: (a printString); cr.
    b := a select: [:x | x \\ 2 == 0].
    Transcript show: (b printString); cr.

Sadly Objective-C isn't really Smalltalk.

For that reason, I really wish Swift had list comprehensions, just because it's the first "slightly functional" exposure most non-developers get if they come from Python.
In the first example, because I know what "map" means, I know that `Person.init` is applied to each name. And then I know that only the valid `Person`s are returned by the `filter` call.
In the second example, I have to understand the unique logic of the loop block to get to the same conclusion.
I've been shepherding a bunch of front end Javascript tests, and recently had to go through fixing a systemic problem with how we were handling multiple promises.
The broken code didn't look too different from your second example, but the same three people made the same mistake repeatedly, leading to tests that generated empty lists and thus didn't verify anything.
Now, I'm not claiming this is a good pattern of testing. Indeed in all of the straightforward cases I removed the array entirely, and with great relish (especially since it also sped up the tests dramatically). And there are obviously some gaps in their theory of testing that they didn't notice the problem until I pointed it out.
But it did illustrate to me again that there are (alarmingly) a lot of people struggling with basic data manipulation, and if your language supports anything like list comprehensions, I think you should probably get used to using them. It keeps those gaps out, and makes people decompose the problem instead of mashing together a block of conditional code that reads like a Choose Your Own Adventure.
It also depends on the language obviously. In this example, the first example has special tokens for the filter. It doesn't have to be so.
    persons = filter isValid (map init names)
to both. Swift just isn't really a functional language in any useful sense.

The other thing is that experience brings the ability to track what's going on. The formulation of the answer here is probably new to you. As you get used to this, or Streams for Java, or threading for Clojure, etc., you'll understand it by default.
When it's optional, well, sadness ensues.
https://h4labs.wordpress.com/2016/09/30/functional-swift-usi...
And in cases where a data-deriving loop has so much going on directly in the loop body that it makes map/filter/reduce hard to read, there's very often some refactoring that would improve either version.
Funnily enough, I find exactly the opposite: for me, the functional style is significantly easier to work with when things get crazy. I think this is mostly because you tend to be composing recognisable patterns, which in turn means the only custom code you’re writing is the “interesting” parts, like deciding exactly which data to select or exactly how to combine each pair of elements. With lots of loops and conditionals and early exits, I also have to work out whether the code is really doing what it looks like or whether there are edge cases that work differently, and even the “what it looks like” part can wind up scattered across several places in the code that are some distance apart.
Some of the projects I work on do a lot of quite intricate manipulations of complicated data structures. Earlier incarnations were written in Python, but even there I found myself using a functional style for most of these situations as the code base grew in size and complexity. More recently, for various reasons including that one, I’ve been writing this sort of code in Haskell, a language designed for that programming style and therefore cleaner in both syntax and semantics. IMHO, it would be hard to overstate how much easier the newer code is both to write originally and to read, fix and extend later. Possibly the most striking thing is how much shorter the code is: the functional style combined with a language and libraries designed to support it really is remarkably expressive for data crunching work compared to the “longhand” form of writing out all of the loops and conditionals manually.
    persons = [person for person in (Person(name) for name in names) if person.isValid()]

This is Python.

    (filter (fn [p] (p :valid?))
            (map (fn [n] (Person n)) names))

And incidentally, this would probably be something like (EDIT: made the example more realistic):

    filter personIsValid (map initPerson names)

in Haskell. Which looks much cleaner than the Lisp to me.

Often people assume the relatively simple syntax for s-expressions is the syntax of the programming language Lisp. It isn't. Lisp syntax is defined on top of s-expressions.
In fact, while I still have issues reasoning about some aspects of functional programming when writing the code, it is orders of magnitude easier for me to read, maintain, modify, or extend the code.
    [person for person in (Person(name) for name in names) if person.isValid()]

or:

    filter(lambda p: p.isValid(), map(Person, names))

or:

    persons = []
    for name in names:
        person = Person(name)
        if person.isValid():
            persons.append(person)

Just kidding. The bottom line of this page and talk is that Swift is still not functional. But you can do cool things with it.
(for those who didn't get the joke... ;-)
"Purely functional programming" is writing programs to resemble mathematical functions, with referential transparency and absence of side effects and so forth. Also known as "what Haskell does."
"Function-oriented programming" is writing programs using functions as your main tool for abstraction, encapsulation, defining interfaces, unit of code division, etc. This part of functional programming is more typified by the Lisp family.
Most languages that are considered functional generally encourage both of these aspects, partly because they work well together. The confusion over definition comes from these two halves getting tangled, and some languages or programmers emphasizing one half over the other.
Non-functional languages that are becoming "more functional" are generally importing features from the "function-oriented" side of things, and adopting the "purely functional" aspect as a best-practice convention, if at all. It's probably more accurate to say that they enable a functional style of programming, rather than that they are functional.
I offer a definition of "functional programming" as "based on semantics of lambda calculus".
You see, if a function isn't pure, then it isn't a (mathematical) function. But we tend to overload terms because of their marketability.
In the same way that some companies wanted to overload "open source". The general rule of thumb is that if something is desirable by the market, then it is going to get overloaded either by people not knowing what they are talking about or by sales people.
To me the core is "no side effects" though (for pure FP). It's interesting to see what others consider to be the most salient feature(s).
As a consequence you get referential transparency and thus equational reasoning. But change this definition and the term becomes meaningless.
A compact definition of functional programming is using only expressions, never statements. This leads to the idea that the effect or meaning of every computation must be encoded in its return value.
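That definition is easy to illustrate even in a statement-oriented language. A Python sketch of my own, showing the same computation both ways:

```python
def classify_statements(n):
    # Statement style: the meaning is carried by control flow and assignment.
    if n % 2 == 0:
        result = "even"
    else:
        result = "odd"
    return result

def classify_expression(n):
    # Expression style: the whole meaning is encoded in the returned value.
    return "even" if n % 2 == 0 else "odd"
```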
Where's the Swift proposal for the enhancements? Product and Sum support? Algebraic data types?
The part about "lifting" a type was an 'aha' moment for me, and now I understand!
I mean, I did this intuitively, but now I know the name of the technique, which is really good.
Thank you, I've learned something new today!
https://issues.scala-lang.org/browse/SI-1338
Another is that I've almost never managed to use recursion in my algorithms because Scala seems to have very limited ability to successfully optimize tail recursive calls.
Another problem is all kinds of unexpected boxing, unboxing, and implicit conversions of collections that I wasn't expecting.
Again - all the language features are there, just in practice it isn't working out for me very well when I try to use them idiomatically. I'm still learning. But I also learned Haskell and the experience was very different - once I figured out the idiomatic way to do something it usually was also well optimized.
Scala is similar to Lisp and other higher-order-but-not-quite-functional languages in that it's littered with unwanted object identities. All you need to do is use the `eq` method to see when two “equal” objects are really not the same.
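The same distinction exists in Python, for what it's worth: `==` compares logical values, while `is` compares physical object identities, roughly like Scala's `==` vs. `eq`.

```python
a = [1, 2, 3]
b = [1, 2, 3]

equal_values = (a == b)    # True: the two lists hold the same logical value
same_identity = (a is b)   # False: they are two distinct objects in memory
```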