In functional programming, you typically do not manipulate state directly; instead, you compose functions that manipulate that state into more complex functions. (In Haskell, for example, this composition is often done through monads.) Eventually, you get a function that represents the whole program.
This is somewhat dual to imperative programming, a kind of shift of the frame of reference. We could make an analogy with an assembly line in a factory. Imperative programming is like standing on the factory floor: every machine (procedure) receives a thing (input), does something with it, and outputs it. Functional programming is like riding along with the thing as it is manipulated through the factory, in its own frame of reference. So instead of seeing the thing (data) passed around, you see the machinery that modifies the thing (functions) passing by, in the opposite direction.
You use special data structures to deal with that. There's a great book about them, "Purely Functional Data Structures", by Chris Okasaki. You generally don't get O(1) updates as with traditional arrays, but you can usually get O(log n), like you would for a database table with B-tree indexes.
An example of what the parent was talking about: instead of looping over a list 10 times, calling different functions that build a new result list each time, you compose those into one big function that visits each element once.
A bit more detail. In Haskell this is implemented very elegantly with list fusion. If you write
map (\x -> x+1) mylist
you'll map over the list and add one to every element. This allocates a whole new list where all the elements are increased by one [0]. Now let's say we take the output of that map and map over it again, but this time we multiply every element by two: map (\x -> x*2) (map (+1) mylist)
Naively implemented, that would allocate two lists: one where all the numbers are incremented and another where all the incremented numbers are multiplied by two. Any good dev will know that's performance poison. So the Haskell compiler implements "list fusion": it sees that you don't use the intermediate list for anything, and rewrites your code so it's equivalent (in semantics and performance) to: map (\x -> (x+1) * 2) mylist
(For the compiler devs in here, this optimization is commonly known as deforestation.) This leads to huge speedups in many cases. But there's a problem. If elsewhere you use the result of `map (\x -> x+1) mylist` for anything besides mapping over it exactly once, this optimization is impossible! So list fusion has a reputation for being fragile: the compiler has to know with 100% certainty that you aren't going to use the list for anything else, and sometimes innocent-seeming changes (like moving the intermediate map into another module) can break it. The solution the Haskell community finds promising is being able to give a value a tag, like a type only not quite, that says "I promise to use this value exactly once". If you use it twice, or zero times, the compiler will yell at you. The compiler is still on the hook for doing the analysis of what's used once and what's used multiple times, but now the programmer and the compiler are guaranteed to be on the same page.
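The rewrite fusion performs is just the old identity that composing two maps equals one map of the composed functions. A quick sketch of the equivalence (Python for illustration; GHC does this automatically via rewrite rules on its list internals):

```python
# Two passes allocate an intermediate list; the fused version visits
# each element once. List fusion rewrites the former into the latter
# when the intermediate list is used exactly once.
def two_passes(xs):
    incremented = [x + 1 for x in xs]   # intermediate list allocated
    return [x * 2 for x in incremented]

def one_pass(xs):
    return [(x + 1) * 2 for x in xs]    # no intermediate allocation
```

Same results, one less traversal and one less allocation; the whole point is that the compiler can only do this rewrite when nothing else observes the intermediate list.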
As for the other issue mentioned in the original post, of modifying a subelement of a tree: this is a well-known problem and there are many solutions. If the tree is only used once, the same optimization as list fusion can be applied to mutate the structure in place (although the "you must use the value only once" restriction doesn't help quite as much as you'd think it would). The more common solution, which doesn't depend on compiler optimization at all, is to use a data structure that supports this efficiently. For example, if you have a tree and you want to change one leaf on the tree, the only thing you really need to copy is the part of the tree that's actually different now; for the unchanged branches, the new tree just has a pointer to branches of the old tree, so they can be (and are) reused without copying. That's why it's very common to see maps implemented in functional languages using trees, instead of hashmaps. With a hashmap, it's much harder to get around the need to copy the whole thing when you just want to change one part.
[0]: Well, it might do that once it gets around to it (laziness, etc.), but let's ignore that for now.
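The path-copying trick for trees described above can be sketched in a few lines (Python for illustration, with a tree as nested tuples): only the nodes on the path to the changed leaf are rebuilt, and every untouched subtree in the new tree is literally the same object as in the old one.

```python
# A tree is ("leaf", value) or ("node", left, right). Updating one leaf
# rebuilds only the path to it; the rest is shared, not copied.
def update_leaf(tree, path, value):
    if tree[0] == "leaf":
        return ("leaf", value)
    _, left, right = tree
    if path[0] == "L":
        return ("node", update_leaf(left, path[1:], value), right)  # right shared
    else:
        return ("node", left, update_leaf(right, path[1:], value))  # left shared
```

For a balanced tree of n leaves this copies O(log n) nodes per update, which is where the O(log n) bound mentioned earlier comes from.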
I suspect the contortions required to fit your code into that affine type style will likely lead to code that is just as hard to understand and maintain as the equivalent imperative code (though with better static checking, which might be nice).
This still only works if a child has one / a known-ahead-of-time number of parents. If you need to update an object that N objects point to, you need to update all N references.
OCaml handles state without monads for a reason: working with monads is a massive headache.
There is no such thing as "monadic programming", even in Haskell. What makes Haskellers harp on about monads is that the standard library went a bit far in making sure that every fitting data structure implements the monad interface, which includes IO.
For large objects: Persistent data structures with path-copying. Batching small mutations (and using mutation internally in a way that doesn’t leak out). Using the type system somehow to distinguish cases where copying is not necessary (experimental in certain new languages). Structuring your program around a “data store” that is mutable.
For the “references” problem: Explicit references, like by ID. Or mutable “atoms” as in Clojure, which are maybe sort of like the “mediators” in the article.
They work in the small, but don't truly scale.
You know for sure that inner list did _not_ change, because it is immutable.
Someone else might have a reference to an updated hashtable with an updated inner list, but the one you hold will never change—isn’t that the whole idea of immutable data structures?
This is a vital question that virtually every programming language overlooks.
If you are using a mainstream language, you do not know if they were modified.
If you use a language which has a type system to segregate mutable operations from immutable operations, then you know if they were modified.
> They work in the small, but don't truly scale.
Go even bigger then. Work with the kind of data that won't fit in memory. Put your data on disk - you can't insert into the middle of a 1GB+ file.
I haven't used this crate yet, but if I have a need for persistent data structures I'd definitely try this since it's more accessible.
Yes, you aren't constructing an entirely new 30k byte array on each mutation. But you are still adding a lot of overhead to your program.
Depending on your definition of "way" this is true. A lot of persistent vector-like structures can have quite good performance, but if your runtime is dominated by iteration and random updating, you will still get a speed up by using raw arrays, i.e. guaranteed contiguous blocks of memory.
> This is especially bad if you want the data structure to be shared across multiple threads. Just iterating over a tree now requires taking and releasing oodles of locks.
This is not the case. The whole point of immutable data structures and concurrency is that iterating over them does not require any locks, because the structure won't change! So concurrent reads require absolutely no new code and certainly no locks. For concurrent writes, the usual strategy is a single CAS-like operation at the top level to swap out the old immutable structure for a new one (e.g. Clojure atoms, Java AtomicReferences, Haskell IORefs, etc.). But that's a single thing for the whole structure, not every level.
In general concurrent update in the functional programming world is handled by CAS not locks.
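The atom pattern above can be sketched like this (Python for illustration; stdlib Python has no CAS primitive, so a tiny lock stands in for the atomic swap, mimicking a Clojure atom or Java AtomicReference):

```python
import threading

# The shared state is an immutable value; readers grab the current
# reference without locking (a snapshot can never change under them),
# and writers build a new version and swap it in at a single point.
class Atom:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def deref(self):
        return self._value        # lock-free read of an immutable snapshot

    def swap(self, fn):
        with self._lock:          # guards only the reference swap itself
            self._value = fn(self._value)
            return self._value
```

Readers never block each other or the writer; the only coordination is the one swap point at the root.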
Um, what?
Persistent data structures don't require locks at all (because they're immutable), in fact using them in a multithreaded way is 100% safe and thus much easier than their mutable (and ordinarily more efficient) counterparts.
In my experience with FP you start with more of a top down approach in contrast to the bottom up approach discussed in the blog post.
There are many good resources on this. I've referenced a few below. Of course there are many more.
[1] https://www.amazon.com/Pearls-Functional-Algorithm-Design-Ri...
[2] https://www.amazon.com/Purely-Functional-Data-Structures-Oka...
[3] https://www.amazon.com/Algorithm-Design-Haskell-Richard-Bird...
[4] https://www.amazon.com/Structure-Interpretation-Computer-Pro...
This might be part of the problem with FP. Every popular CPU architecture is imperative.
Everything is a tradeoff. FP gives you a lot of flexibility in how you trade space and time of the execution of your program. It's also not a magic wand, just a really powerful tool.
Imperative systems are "fitting a square peg in a round hole" by introducing hacks like 'clock signals', 'pipeline flushes', etc.
That's arguable, given that every (deterministic) imperative computation can be expressed as a pure functional program. "Imperative" is more of a viewpoint that the developer is using to understand the behavior of the computer, rather than a property of the machine itself.
A pure functional analysis will describe a computation on that "imperative" architecture as an immutable, monotonic sequence of all the states that the CPU goes through over time; or maybe, if you know enough PL theory, as a chain of rewritings of a program-long expression that is updated, one term at a time, as each expression is replaced by the result of a function consuming its input parameters.
And no viewpoint is inherently better than the other (except that the functional version can be analyzed in a natural way with mathematical tools, which is often difficult with the imperative description).
I highly recommend An Introduction to Functional Programming Through Lambda Calculus by Greg Michaelson.
You can then see how to express functional techniques in your language of choice (i.e. one you already know well, e.g. Functional Programming in C++ by Ivan Cukic) before learning a new functional language. This is to avoid the confusion in syntax/semantic expression of unfamiliar concepts, which is a bit too much for a beginning student to handle simultaneously.
For example, when the author picks "parents" or "children", or in general maybe any entity of any kind, I don't understand what functional thinking gets me. Sure, instead of manipulating mutable objects I can think of every entity being 'one thing' at 'one moment in time', and I do a gazillion state transitions that all return new things and I have never mutated anything. But not only is that bad from a performance standpoint, I never understood how that's clearer or easier; it just seems like a tortured metaphor.
Monads and other functional tools are being brought up in this thread as well but I always had the same problem. Sure, I can stuff my operations into a box, say it's the "state box" and then everything is neat but for what purpose? I still remember when I was in university I rewrote a tetris clone I had made into Haskell. It took a lot of time and weird error messages, but at the end of the day translating all my imperative or object like procedures into functional paradigms didn't make the program any easier to understand.
The places where I think functional programming is natural is for something like accounting, data processing or version control systems. Where I really do want to think of every state as its own, immutable little world, where keeping track of changes pays off, and so on. In a lot of domains it to me seems ill-suited.
Using monads and a "state box", it's not enough to make a change to the contents of the box; you also have to pass the box on to the next entity/process that wants to change it, or nothing will effectively change (other than burning CPU cycles).
This also means that it suddenly becomes easy to find out what happened to the box's contents before, because all you have to do is check what whoever passed the box to you did with it.
If your program is strictly linear, then using state boxes is merely syntactic overhead. But as soon as that's no longer the case, it suddenly becomes important.
I remember very well the times when I had to deal with bugs where I was operating on data that looked different than I expected, and I had a hard time figuring out how it got that way.
But even worse are bugs from concurrency, and the "magic" and workarounds for dealing with it when "state boxes" are not available or not known to the developer. Now you look at a line of code knowing that stuff is running somewhere else, impacting what you are working on in a potentially uncontrolled manner.
I find that very unproductive.
- immutability and function application perspective “play nice” with distributed applications, both for distributing work and idempotence
- in a similar vein, the “everything persists, apply functions” perspective works nicely in data science for finance, where you need auditable calculations
- and similarly, but more broadly, if you’re implementing a workflow engine, then the functional perspective is a clear winner: you’re literally writing a program to manipulate and apply functions!
Friends have told me a fourth:
- composition and lenses are nice for UI
You have execution engines (a web framework, a SQL engine) that encapsulate a block of pure functions. There are no side effects (save maybe logging) whatsoever inside your mini-program.
FP patterns like monads let you compose blocks of execution so you can have bigger mini-programs that are more ergonomic to write.
SQL query builders are (I think) a kind of Monad. You can compose selections, filters, groups, etc without ever executing, building up a big "what if" into a transaction. No side effects until you execute, it's pure data in data out. Which means for example you can write true unit tests without mutating a "real" db by composing transactions together.
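The "compose without executing" idea can be sketched as a hypothetical toy query builder (Python for illustration; the class and method names here are made up, not any real library's API):

```python
# A query is plain data, composed with no side effects; nothing touches
# the data until run() executes it (here against an in-memory list
# standing in for a table).
class Query:
    def __init__(self, predicates=(), limit=None):
        self.predicates = predicates
        self.limit = limit

    def where(self, pred):            # returns a NEW query, no execution
        return Query(self.predicates + (pred,), self.limit)

    def take(self, n):
        return Query(self.predicates, n)

    def run(self, rows):              # the only step that touches data
        out = [r for r in rows if all(p(r) for p in self.predicates)]
        return out if self.limit is None else out[: self.limit]
```

Because a `Query` is just a value until `run`, you can build, combine, and unit-test queries without ever touching a real database.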
FP falls apart when you have to reason about "nuts and bolts" of actual computation models, IO, network, time, RNGs, DMA, gpus, large data arrays. FP acolytes say "just let the computer/compiler do it" eg write in FP and compile to optimized imperative machine code, but someone still has to write drivers at the end of the day.
It's even more amazing when you enter the territory of multithreading/multiprocessing.
Another way to think about it is that the programs we write do not literally simulate entities (users, other domain objects like books, students, professors, classes, etc.). Our software simulates the record-keeping devices that store/protect information regarding these entities. Most of the time we are writing information-processing systems, not simulation systems, and information accumulates rather than updating in place. E.g. when there's a new president elected, we don't all go out with our erasers and change sentences from "The President is Donald Trump" to "The President is Joe Biden"; instead we accumulate new facts without forgetting the old.
Like the vast majority of software developers world wide, who just want to get things done and/or earn money.
The simple truth is: A minority prefers purely functional programming. The vast majority prefers procedural programming with functional elements.
For some reason, those who prefer pure functional programming often try to convince everyone else their way is the best way in an obnoxious way.
That is, if you're trying to emulate sharing of mutable structures (i.e. pointers) in a functional language, reflecting changes to children across multiple parents, model the pointers explicitly (as integer keys) and put the shared structures in a map, which is then necessary to interpret a parent in full.
This seems to be what the article is getting at with its "mediator" concept, albeit in very abstract terms. The author seems to find this distasteful, wanting to "hide" these relationships, but I disagree -- the difference between value and reference semantics is very important; making the difference explicit is a good thing.
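The explicit-references idea can be sketched quickly (Python for illustration; names are hypothetical): children live in a store keyed by ID, parents hold IDs rather than the children themselves, so one update to the store is visible through every parent.

```python
# Replace one child in the store; everything else is shared with the
# old store, so old snapshots remain valid.
def update_child(store, child_id, fn):
    return {**store, child_id: fn(store[child_id])}

# A parent holds an ID; interpreting it requires the store.
def resolve(store, parent):
    return store[parent["child_ref"]]
```

This makes the value-vs-reference distinction explicit: anything reached through an ID has reference semantics, everything else is a plain value.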
Where you write functional/immutable methods and the compiler deterministically can convert it to in-place mutating operations for performance:
https://koka-lang.github.io/koka/doc/book.html#sec-fbip
"With Perceus reuse analysis we can write algorithms that dynamically adapt to use in-place mutation when possible (and use copying when used persistently). Importantly, you can rely on this optimization happening, e.g. see the match patterns and pair them to same-sized constructors in each branch."
"This style of programming leads to a new paradigm that we call FBIP: "functional but in place". Just like tail-call optimization lets us describe loops in terms of regular function calls, reuse analysis lets us describe in-place mutating imperative algorithms in a purely functional way (and get persistence as well)."

The first problem is solved in every functional language I've used (Clojure, F#, OCaml) by immutable data structures (aka purely functional data structures, which is also the name of the de facto book on the topic, which is excellent, but also you don't need to know any of this to use them [1]). I'm surprised to see this critique without a mention of this.
The second problem seems to be due to trying to do OO in FP. The thing is you don't have a glyph, you have a whole system which contains glyphs. So you can't do an update on a glyph because it won't update the system.
This is a common frustration when you're used to OO and can be really annoying in FP but it's also part of the design modelling that FP forces you to do: the problem was that he was modeling in the small and not the large.
The solution is not 1, 2, or 3 from the article (1 and 2 are bad; 3 is not as bad, but not FP), but is straightforward. When you want to update a glyph, you use a function like `updateGlyph(pathToGlyph, functionToExecuteOnGlyph)`. That's at the top level, and it allows you to compose your lower-level `updateGlyph` functions.
[1] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.64...
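The `updateGlyph(path, fn)` idea can be sketched generically (Python for illustration; the document shape here is hypothetical): the path is a sequence of keys into the nested system, and the update function is applied at that location, producing a whole new system with unchanged siblings shared.

```python
# Apply fn at the location named by path, path-copying on the way down.
def update_at(doc, path, fn):
    if not path:
        return fn(doc)
    key = path[0]
    return {**doc, key: update_at(doc[key], path[1:], fn)}
```

Updating "the glyph" now also updates the system that contains it, which is exactly the modelling-in-the-large point.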
Ultimately, is the big-O complexity going to be larger for certain cases? Yes. Is it going to matter? Maybe. Maybe not. The point the original comment is making is that the author seems unaware these data structures even exist; the fact that he was having O(n) operations and found them too slow tells us nothing about the reasonableness of the O(log n) option for his use case. He mentioned using a 30k byte array; that's the difference between 30k operations and about 15. Cutting it from 15 to 1 is probably not going to be that meaningful; certainly not compared with the first jump.
Functional programs are best used when your problem domain benefits from referential transparency [1]. If you want to represent mutable state in a functional, transparent way, you use a subset of temporal logic [2]: each function takes an additional state-of-the-world input parameter and returns an additional updated state-of-the-world value with all the mutated objects in it. The State monad is a version of this with increased legibility (as long as you don't combine many different sources of mutation).
If your needs for mutation are limited to a few objects in each module, you represent those as streams (the generalized version of futures which are able to be repeatedly updated with new values); a stream represents the history of all values that a mutable variable takes during the program execution. This is the basis of the functional reactive paradigm[3] on which modern web frameworks like React and Vue are converging, as it happens that modeling asynchronous state mutations of multiple loosely connected elements is easier in the functional paradigm than in the imperative.
[1] https://en.wikipedia.org/wiki/Referential_transparency
[2] https://en.wikipedia.org/wiki/Temporal_logic
[3] https://en.wikipedia.org/wiki/Functional_reactive_programmin...
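The explicit state-of-the-world threading described above can be sketched in a few lines (Python for illustration): each step takes the world and returns a result plus the updated world; the State monad is this same pattern with the plumbing hidden behind bind/return.

```python
# A step returns (result, new_world); the world here is just an int
# standing in for whatever the mutable state would be.
def tick(world):
    return world, world + 1

def three_ticks(world):
    a, world = tick(world)   # explicitly pass the updated world along
    b, world = tick(world)
    c, world = tick(world)
    return [a, b, c], world
```

Nothing is ever mutated; "state change" is just which world value you hand to the next step.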
In a case like this you have several options:
1. Stick with mutation, reuse the original array and don't worry about being purely functional. This will probably be the optimal solution (vice explicitly creating a copy and mutating just the relevant bit). However, if you need the original for some reason then this is a non-solution.
2. Imitate the purely functional style with explicit copies, and accept the massive memory and performance costs that carries. You get to keep the cheaper read cost.
3. Use a better data structure, giving up some read performance in particular, but largely mitigating the creation and "mutation" costs (in quotes because the original is, from the perspective of the user, unchanged).
Assuming you're in a language that doesn't provide (3) for free, and that (1) is a non-solution, you need to profile your system to decide between which of (2) and (3) is appropriate. With a large enough data structure or frequent enough updates, (3) is probably what you will want. If your data structure is small or updates are rare, then you can stick with (2). And if you have no idea which one will occur in the real-world (some users tend towards (2) being better and others towards (3)) you have to make an explicit choice on which one to provide better performance for.
Also interesting to note that modern UI application frameworks such as React/Redux don't mutate state in-place. They tend to generate some kind of mutation object that is "played" against the in-memory stored application state. Probably similar to the FP-kosher solutions suggested here.
For the parent-child problem, typed handles, continuation-based state propagation, and reifying state updates for a declarative interpreter may be used.
Monads and effects are common in strongly typed functional languages, but have less bearing on lisp, which usually embraces dynamism to some extent.
https://clojure.org/reference/data_structures
These are a little slower than conventional data structures (e.g. Java Collections) but are much more efficient than copying the whole structure every time you want to "change" it. The garbage collector picks up behind you so you're not wasting unbounded space on unreachable versions of the data structures.
I hate to be pedantic, and I know this sounds like a no true scotsman argument, but I really feel FP discussions always devolve into this because people have so many different levels of understanding of what FP is supposed to mean.
I think that many people who espouse FP like the "aesthetic" of FP. They like that map/filter/reduce look "cleaner" than nesting ifs, breaks and continues in loops. They love how pattern matching on Discriminated Unions is so much clearer than imperative switch statements or OOP-style polymorphism, where to see the behavior of a single function you need to look through 5 different files. This is the kind of evangelising that stops at "cute little values" and toy examples.
But (subjective opinion here) I think what the more experienced FP proponents are trying to promote are in fact just sane, good programming practices made explicit, but they call it FP too (thus "no true scotsman").
For example, for the OP's bf problem, is copying a 30k byte array for every operation a good idea? Of course not! Is isolating the "stateful" operations (those that modify this byte array) to small sections of code, possibly even gathering all the modifications that need to be made and applying them only after the function returns rather than littering stateful operations throughout the code, so that it becomes much easier to figure out when and why some state changes, a good "FP" idea? Hell yeah! Is changing your data structure to a persistent one, so you can retain your "op: Buffer -> Buffer" mental model while preserving more performance, worthwhile? Possibly so! (Though I think for such a problem the mutable buffer is the obviously correct data structure; a persistent DS doesn't give much benefit, since each operation takes the whole array and returns the whole array before the next operation is executed.)
In conclusion I think focusing on the principles of good programming are much more important than focusing on the external appearances of each paradigm. You can architect clean, loosely coupled components with little shared state in OOP as well as in FP. We ought to focus on teaching that.
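The "gather the writes, apply them at the end" idea mentioned above can be sketched concretely (Python for illustration; the rule "zero out entries over 9" is a made-up stand-in for real logic):

```python
# The pure core only decides what to change; one small function
# performs the batch of writes, keeping stateful code in one place.
def plan_edits(buffer):
    # pure: returns (index, new_value) pairs, touches nothing
    return [(i, 0) for i, v in enumerate(buffer) if v > 9]

def apply_edits(buffer, edits):
    out = list(buffer)        # one copy, then in-place writes
    for i, v in edits:
        out[i] = v
    return out
```

Now all the reasoning about *what* changes is testable pure code, and the mutation is a single auditable loop.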
I think if I can see some code beyond the old map/reduce toy examples it might click. Not wanting to be confrontational, and I know that this forum isn't here specifically to educate me personally, I just genuinely have had problems understanding what FP is "supposed" to be that any good programmer doesn't already do when possible, regardless of language or whatever.
I'm sure it will click and I'll look dumb but yeah. Isn't loose coupling and strong coherence kind of covering this already? Is it just down to doing map/reduce/filter to avoid mutating some variable and instead get a new set of results? Is mutation the charm?
I genuinely really feel I'm missing out on something important here because I have older views on programming, I've struggled to get a good explanation. Do I have to go back to school again? I'm not against that but yeah. I'm sorry if this sounds ignorant, but I kind of am.
I'm not a Haskell programmer but I do know about monads and have heard of lenses.
The same holds true for large structure updates.
The "large-object problem" can be tackled in a principled fashion using strict state threads (aka the ST monad: https://wiki.haskell.org/Monad/ST) or using vanilla mutable (IORef) references.
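The essence of the ST trick, mutation that is used internally but can never be observed from outside, can be sketched even in Python (for illustration; Haskell's runST enforces this with the type system, here it's only a discipline):

```python
# Local mutable state (a list and an accumulator) is used freely, but
# nothing mutable escapes, so callers see a pure function.
def running_totals(xs):
    acc = 0
    out = []                  # local mutable buffer, never leaked
    for x in xs:
        acc += x
        out.append(acc)
    return tuple(out)         # only an immutable value escapes
```

Same inputs always give the same outputs and no caller can detect the internal mutation, which is exactly the observable-purity guarantee ST encodes.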
The "parent-child problem" is well-addressed by lenses, also known as functional references. They are basically composable getters and setters that allow you to read or update deep into nested structures.
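A minimal hand-rolled "lens" shows the idea (Python for illustration; real lens libraries generate these and add many combinators, but the core is just a composable getter/setter pair):

```python
# A lens is a getter plus a non-destructive setter; composing two
# lenses gives a lens that reaches one level deeper.
class Lens:
    def __init__(self, get, set_):
        self.get = get
        self.set = set_               # set(new_value, whole) -> new whole

    def compose(self, inner):
        return Lens(
            get=lambda s: inner.get(self.get(s)),
            set_=lambda v, s: self.set(inner.set(v, self.get(s)), s),
        )

    def over(self, fn, s):            # modify through the lens
        return self.set(fn(self.get(s)), s)

# A lens into a dict key (illustrative helper, not a real library API)
def key(k):
    return Lens(get=lambda d: d[k], set_=lambda v, d: {**d, k: v})
```

Composition is what makes this scale: `key("addr").compose(key("city"))` reads and "writes" two levels deep without any manual nested-record plumbing.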
My understanding is that lens helps to address the large-object problem.
Anyway, none of them is a problem with the semantics of FP. They are a problem of insufficient optimization. So they will probably get solved at some point.
This would be a fun task to attempt with the new linear features of Haskell https://hackage.haskell.org/package/linear-base-0.1.0/docs/D...
When you assign a field in a data structure, it is bad because later on you will see the changed data structure and try to use it, but it has a wrong value, which causes an error in your program (maybe not a crash, just an error in the output). And then it's really hard to say what caused that error, because you have no visibility into who did that erroneous assignment or where it happened.
If instead you could forbid direct assignments and could write a "watcher" (a.k.a "setter") for all data-modifications, you could observe how your program-state evolves during its execution, and you could write assertions that prevent wrong type of data-manipulation from happening.
One such language is Smalltalk. You can only modify data by sending a "message" to an object to be modified, and the object can only respond by executing one of its methods. You can then write assertions in those methods to halt in debugger or write to console if you see the program write a specific value or type of value into your data-structure.
Even on the rather primitive level of Arrays you could easily modify their #at:put: -method in Smalltalk and halt or log the stack-trace if somebody tries to store the number 42 into the array. :-)
In the end the problem is understanding what your program does in terms of its state, how the state evolves. Why? Because: Modifying state in essence modifies your program. And to understand your program you must understand all evolving versions of it. Like, if you want to understand a 'caterpillar' you must also understand its next mutated state 'butterfly'.
FP makes it easier to understand the state-evolution because new state is just the result of some calculation which is easy (?) to understand by understanding each "pure" function in your program and reasoning what their combined result is.
But if you can understand the state-evolution of your program by imperative means that is fine too. That is why many suggest using a database to keep the state. A database is mutable by definition. But it makes it easy to observe how your state evolves by observing the UPDATE and DELETE and INSERT calls made into your database.
I would call it challenge, rather than tension, and I don't think expressing ourselves naturally is the only goal. We also want to express a solution that is tractable, that we can understand and have confidence in. "Natural" imperative programming can become intractable when mutating state; the goal of functional programming is to find a different way that is more tractable for humans, without worrying about if it feels "natural." If there's one thing programming (and mathematics!) has shown us, it's that anything we can do will eventually feel natural if we do it enough.
(We produced multiple generations of programmers for whom C-style for loops were the most natural thing in the world, almost the benchmark for what felt natural in a programming language. The "map" function was weird-ass shit to them. I know because the experienced programmers I worked with called it weird-ass shit when I discovered it in Python as a wee junior in 2000 and wondered why Java and C++ didn't have it.)
I don't think functional programmers consistently write code that is more tractable than imperative code, not yet. (I think some of them aren't trying, honestly.) But I think functional programming languages will evolve to better support that goal. It has only been in the last 10-20 years that the majority of working software engineers have fully recognized the difficulty of managing shared mutable state, generating widespread demand for functional programming languages for practical applications like web services, data processing, user interfaces, and so on. The trade-offs for those applications are different from the trade-offs for academic research, and the trade-offs for most industry programmers are different from the trade-offs for researchers who are immersed in the mathematical foundations. The academics have shaped functional languages for decades, but working software engineers will have more and more influence in the future.
I'm more pessimistic about:
> It has only been in the last 10-20 years that the majority of working software engineers have fully recognized the difficulty of managing shared mutable state
I suspect that the number of practicing programmers who don't worry about shared mutable state is growing much faster than the number who do worry. To quote Uncle Bob: "the industry as a whole will remain dominated by novices, and exist in a state of perpetual immaturity"
The course distills the primitives of different programming languages in terms of how they see state (see Van Roy's diagram in [2]), and how deciding to incorporate and combine some of those primitives and not others gives rise to the different programming styles. I found it enlightening, in the way that programmers' koans refer to.
[1] https://www.edx.org/es/course/paradigms-of-computer-programm...
[2] https://en.wikipedia.org/wiki/Programming_paradigm#Overview
(This is a repost of a comment I made below, so that readers who don't read the whole thread can still see it).
I thought OCaml handled this problem with copy-on-write and/or fancy pointer math. Is there more to it?
Either do a nice imperative for-loop, as that is an excellent imperative way of solving the problem (whatever it may be).
Or solve it in any of the normal functional ways. Pick the style that you and your team likes.
http://pepijndevos.nl/2012/08/12/the-multiple-index-problem....
* Use linear / affine / session types / move semantics
* Describe an interface, then weave side effects through it, such as the State Monad.
* Use lenses
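As a sketch of the State-monad option, here is a toy example (the `label` function is hypothetical, not from the thread; it assumes the standard mtl package for Control.Monad.State):

```haskell
import Control.Monad.State

-- thread a counter through a computation without any mutable variable;
-- get/put describe the side effect, evalState runs it
label :: String -> State Int String
label s = do
  n <- get
  put (n + 1)
  return (show n ++ ": " ++ s)

labelAll :: [String] -> [String]
labelAll xs = evalState (mapM label xs) 0
```

The interface stays pure: callers of `labelAll` never see the counter, only the result.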
Hah. Weird way of approaching it, but okay. Here's the thing: it is exactly how computers work at the hardware level. If you look at an electronic circuit and a program written in a functional style, the equivalence is almost 1:1. The circuits are self-contained, and you compose them to generate a new output from the inputs that are fed in.
In SSA, rather than modelling your program using 16 mutable registers (imperative folk love talking about "the real world" or "real computer"), we use an infinite number of immutable registers.
Infinite registers won't fit in a computer (we only have 16), and not being able to overwrite registers must kill performance, right?
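The SSA idea maps directly onto let-bindings, where each "register" is bound exactly once (a toy sketch, not from the comment):

```haskell
-- SSA in miniature: instead of overwriting x, bind a fresh name per step.
-- The compiler is free to map x1 and x2 back onto one physical register.
step :: Int -> Int
step x =
  let x1 = x + 1    -- "register" x1, written exactly once
      x2 = x1 * 2   -- "register" x2, written exactly once
  in x2
```

Register allocation later collapses the infinite immutable names back down to the 16 real ones, which is why the "it can't fit" objection doesn't bite.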
It really saddens me that some people completely miss the point of what FP is about.
How did this happen?
1) Uncle Bob's "perpetual state of immaturity" [1]
2) Non-FP languages and "hybrid" languages do a bad enough job of FP to stop people from seeking out more. E.g.
FP lang: Doesn't have null. Uses Maybe in the rare cases it's useful. The benefit is not having null. Imp lang: Keeps null. Adds Optional, so now there are two ways not to have a value.
FP lang: Functions and lambdas are the same thing and are first class. Imp lang: Separate Function object. Lambdas can't throw checked exceptions or update variables in the enclosing method's scope.
FP lang: Understands when mutation can and cannot happen. Can share data and fuse code efficiently. Imp lang: Cannot assume underlying data doesn't change. Programmer needs to make defensive copies.
FP lang: map (+1) [1,2,3] Imp lang: Arrays.asList(1,2,3).stream().map(x -> x + 1).collect(Collectors.toList())
If I grew up with the kind of FP that made it into mainstream languages I'd probably be against it too.
[1] https://blog.cleancoder.com/uncle-bob/2014/06/20/MyLawn.html
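To make the Maybe point above concrete, a minimal sketch (hypothetical phone-book lookup, names invented for illustration):

```haskell
import Data.Maybe (fromMaybe)

-- lookup returns Maybe String: absence lives in the type,
-- so there is exactly one way not to have a value, and no null checks
phoneOf :: String -> [(String, String)] -> String
phoneOf name book = fromMaybe "unknown" (lookup name book)
```

The caller is forced by the type to decide what absence means, here via `fromMaybe`.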
So when you operate on a large array (as in his example), a natural approach is to also create and destroy it.
Of course after you gain experience you learn not to do that, and techniques that let you not do that, but I don't think he is missing the point.
Naive implementations will always, by definition, not be great. But instead of spending the time, energy, and money on crafting a custom solution each time, use off-the-shelf libraries that do the job. DataScript, Storm, etc. are great for this.
It solves the parent-child problem by having symbols as a built-in data type. They can be used exactly as the "mutable mediator" describes. And beyond that: for example, you can generate code that uses these symbols and have it natively compiled - at runtime.
I think there might also be another way to avoid overly heavy updating in the case of many functional updates to the 30K data, though it depends on how Archetype works and what it does. It might be possible to not apply some updates immediately, but to group them together with other updates, or to cleverly collapse multiple subsequent updates into one. This might be difficult to implement, though, and as I said it might not be appropriate for all situations, since you might want to give immediate feedback in the GUI for any change.
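Grouping updates can be as simple as composing them into one function before applying it, so the structure is traversed once instead of once per update (a sketch; whether this actually helps depends on the structure being updated):

```haskell
-- compose a batch of small updates into a single function, apply once;
-- foldr (.) id [f, g, h] == f . g . h (rightmost update runs first)
applyAll :: [a -> a] -> a -> a
applyAll = foldr (.) id
```

For example, `applyAll [(+1), (*2), subtract 3] 10` computes `(10 - 3) * 2 + 1`.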
Then, I think there are two different aspects of the problem:
- There is the identity of each object in your program (e.g. a key, reference or pointer), regardless of its value.
- And there are the versions of values each object has / had.
Imperative tracks identities but not versions, while functional tracks versions but not identities. Ideally the program would run in a VCS, so that one could track both versions and identities.
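A toy sketch of tracking both at once, assuming Data.Map from the containers library (the `Store` type and function names are invented for illustration):

```haskell
import qualified Data.Map as M

-- identity = the key; versions = the history of values, newest first
type Store k v = M.Map k [v]

-- record a new version for an identity, keeping the old ones
setV :: Ord k => k -> v -> Store k v -> Store k v
setV k v = M.insertWith (++) k [v]

-- the current value of an identity, if it has ever been set
current :: Ord k => k -> Store k v -> Maybe v
current k st = case M.lookup k st of
  Just (v : _) -> Just v
  _            -> Nothing
```

This is essentially the "program running in a VCS" idea in miniature: every identity carries its own history.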
That's not so much an FP issue as an actual IT issue, since modern development, for modern business reasons, produces monsters and solutions waiting for problems. That's also hard to teach, and probably the reason functional programming is seen as "superior, but not for the real world" by many.
Now seriously: while reading this, and especially the first problem, my head was shouting "immutable data structures! immutable data structures!".
The author surely must have encountered IDSs during their 10-year tenure with Lisp. I'm curious about why they thought they would not fit the bill for the problems they are writing about.
Secondly, I did read the article. If they are allocating a big chunk of memory and copying it over every single time they are making a change, they are not using immutable data structures. Ok, technically speaking, they might be. But they are using the worst implementation possible as a base to make conclusions from. That'd be like looking at the C language and concluding that strong types are not good, because of all the limitations C has.
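With a persistent structure like Data.Map from the containers library, an "update" allocates only the O(log n) path down to the changed node and shares everything else, rather than copying the whole thing (a sketch, reusing the article's 30K figure):

```haskell
import qualified Data.Map as M

big :: M.Map Int Int
big = M.fromList [(i, i) | i <- [1 .. 30000]]

-- big' shares almost all of its structure with big;
-- only the O(log n) spine to key 5 is newly allocated
big' :: M.Map Int Int
big' = M.insert 5 99 big
```

Crucially, `big` is still fully intact afterwards, which is the whole point of a persistent structure.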
g2 = updateGlyph(g1)
f2 = updateFontInGlyph(f1, g2)

There are ways to achieve such "automatic updating" of items in a functional data structure, but you need to build them explicitly instead of relying on the runtime, which gives you more responsibility but also more control. The most direct approach, as compared to imperative code, would be to build the data structure on top of mutable state-based streams, though there are other possibilities like monotonic or immutable structures.
This is literally atoms in Clojure.
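In Haskell the analogue of a Clojure atom is an IORef: one mutable reference whose contents are an immutable value that gets swapped wholesale (a minimal sketch):

```haskell
import Data.IORef

-- one mutable cell acting as the "mutable mediator";
-- the Int inside is never mutated, only replaced
bump :: IORef Int -> IO Int
bump ref = do
  modifyIORef ref (+ 1)  -- swap in a new immutable value
  readIORef ref
```

All the mutation is confined to the one cell, so everything reachable from its contents can still be shared freely.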
"Doctor, it hurts when I do this."
"Well, then, don't do that."
You mean, jamming a foreign paradigm into a language, where the advantages of the foreign paradigm are reduced and the disadvantages greatly magnified, hurts sometimes? Have you considered... not doing that?
I simply cannot comprehend the mindset that goes:
1. Wow, map/filter/reduce sure is fun in this language over here that was designed to work with it.
2. Now I'm going to use it everywhere I go, no matter how much it hurts and no matter what compromises I have to make.
3. When it hurts, I will blame anything and everything except myself for forcing a paradigm in where it doesn't belong.
The paradigm has to be good. It just has to! Even if it hurts! Even if I'm really not enjoying using it! Even if it's murdering my runtime and killing my performance and I'm in a language where functions take quite a lot of text rather than \x -> x + 1! Even if my code is hard to read and hard to abstract and hard to refactor. Even if the foreign paradigm is critically based on pervasive laziness my language doesn't have! Or some other foundation my current language totally lacks! Gosh darn it, I'm going to use this paradigm in this code if it kills me! Because it was so much fun... over there.

I'm not saying that it isn't sometimes useful to borrow paradigms in certain places even in languages that don't support it. When it doesn't hurt like this, by all means use it as you like. What I don't understand is this constant, if not growing, stream of posts about "my gosh, my forced FP programming in this not-FP language is kinda painful here... grimaces yup, it's pretty painful all right... grits teeth yup, this is really hurting me and making me really angry and isn't fun at all... but I'm so glad I'm using this style that is making things hard and hurting me and making me angry, wouldn't dream of doing anything else!"
I know Haskell. There are many lessons I've taken from Haskell and applied to other languages, like the virtue of very strongly separating IO code from logic, the virtues of taking extra steps to provide and use composability in your APIs, the virtues of certain things being immutable data. But taking these lessons from functional programming languages shouldn't mean you ape the surface solutions functional programming uses; you should be thinking about how to idiomatically bring the target virtues into the other languages you use. There will be compromises, but sometimes there will also be things the current language does better than Haskell. (Really quite a lot of things. Haskell's great but there's many reasons it hasn't conquered the world.) You shouldn't be so focused on importing the foreign paradigm that you completely throw away the virtues of the environment you're in.
I do not understand why so many of you are so vigorously adamant that you're going to program in the intersection of functional programming languages and imperative programming languages. It's an incredibly aggravating place to try to be. The (near) union, applied with the wisdom gleaned from both camps, is much more fun.
>I do not understand why so many of you are so vigorously adamant that you're going to program in the intersection of functional programming languages and imperative programming languages
Because it actually works pretty well? The vast majority of programming languages today are multi-paradigm for a reason. In fact, almost all functional programming languages (Lisp, Scheme, Racket, Clojure, (S)ML, OCaml, F#, Scala) are multi-paradigm. Haskell is really the odd one among functional programming languages. The same applies to laziness, by the way: to claim that functional programming is "critically based on pervasive laziness" is simply wrong.
The idea that people use FP in a language like e.g. JS only because of an ideology driven obsession and that they constantly have to fight against the language just doesn't bear out in reality.
I'm talking about forcing it in when it hurts specifically.
It occurs to me that I may have accidentally answered myself at the end, when I referred to the concept of union vs. intersection. When you bring in helpful angles on your current paradigm from light cast by other paradigms, that's helpful. You're expanding your working set of options.
But when you get a bit of a good taste of those other paradigms, and then declare this is always the right way to do it, it may feel like you're expanding your paradigms, but now you're not. When you force sum types into a language that doesn't support them because they're just always the right answer, when you force lots of maps & filters in a language that can't do fusion and function calls are kind of expensive, when you force an algorithm in that depends on laziness to work, and so on, you are no longer working in a sort of union of paradigms. You're working in the intersection.
And that intersection sucks. A worst-of-both-worlds situation for sure.
You can tell, by the way people trying this are constantly screaming in pain about it.
When it hurts... stop.
* Algebraic data types
* Pattern matching (with exhaustiveness)
* Lambdas / Closures
* Type inference
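The first two items on that list combine naturally; a toy sketch (the Shape type is invented for illustration):

```haskell
-- algebraic data type; with -Wincomplete-patterns the compiler
-- warns if a match below ever misses a constructor
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```

Adding a third constructor to Shape would flag every match in the program that needs updating, which is the practical payoff of exhaustiveness checking.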
The closest mainstream project to FRP is Angular using RxJS exclusively. You define how each component interacts with each other, and then you delegate to a runtime to actually perform the side effects (change the DOM in this case). Solutions like React + Redux/Elm involve a lot of plumbing that's not relevant to the issue: I just want to define relationships between components, not manually thread changes across a big state tree.
If not, it would be nice to find discussion about when it is and when it is not, and why. What is Haskell NOT the best solution for?
I love functional idioms as well, but you have to know the cost of those idioms in a non-functional language. The first thing I did when I had to learn go was determine if I could use functional idioms (map, fold, etc.) and did some research and testing and found that, no, functional idioms are a really bad idea in go. It just isn't meant to be used that way.
For the record, I had to learn go because the team I joined used it.
Despite what it may superficially sound like, it's an attempt to reach out and help people. If it hurts, you know, think about it. Maybe don't do something that hurts so much because it "must" be good.
FP techniques in imperative languages is a tool. A good tool sometimes. But it shouldn't be an aspiration.
Why do you want to climb that mountain? Because it’s there.
But could y'all be more clear about labeling it as such? You're fooling young programmers into thinking this is a good engineering idea. In engineering, we do not climb mountains just because they are there. Quite the opposite.
Functional programming is great, and most languages nowadays give you the necessary tools like lambdas, closures, list processing, first class functions, etc... but it isn't the right tool for every job. The same languages also typically support object oriented programming with classes, methods, inheritance, encapsulation, etc... Declarative programming is usually not supported by the "main" language but there is often some way to use a secondary language for that, like SQL.
Multiparadigm languages are here for a reason, and trying to use a paradigm that doesn't fit only makes your life harder. It is not a bad thing if you are learning or doing research on the subject, but it is a bad thing if you are trying to be productive.
and no, you don't need to go all in. you can make a judgement call and pick and choose what you want to leverage.
for anything but a trivial service, immutability of the data you're working with as you process requests should at a minimum be considered.