"The ideas", in my view:
Monoid = units that can be joined together
Functor = context for running a single-input function
Applicative = context for multi-input functions
Monad = context for sequence-dependent operations
Lifting = converting from one context to another
Sum type = something is either A or B or C..
Product type = a record = something is both A and B and C
Partial application = defaulting an argument to a function
Currying = passing some arguments later = rephrasing a function so that, given 1 argument, it returns a function of n-1 arguments, such that the final function computes the desired result
EDIT: Context = compiler information that changes how the program will be interpreted (, executed, compiled,...)
Eg., context = run in the future, run across a list, redirect the i/o, ...
Demonstrating by example what e.g. currying actually looks like is much more powerful, at least from my point of view. In that regard, I’m actually pleasantly surprised this guide does a very good job at that.
I think in many cases this isn't right, e.g. monads. The reason flatMap() "flattens" is that "flattening" really is just sequencing: denesting the type using a function requires a sequenced function call, f(g(..)).
This applies to many of these "functional design patterns"... they're just ways of expressing often trivial ideas (such as sequencing) under some constraints.
Good sample code, though. It only required me to look up a handful of things I wasn't already familiar with.
If types A, B and C have respectively a, b and c distinct values, their sum type has a+b+c values (each value of each type is allowed) and their product type has a·b·c values (the part of the value that belongs to each type can be chosen independently).
Eg., `+` is polymorphic, `1 + 1` and `"1" + "1"`... the context is whatever implicitly resolves `+` to `ADD` or to `CONCAT`.
The utility of these ideas is, in practice, just they provide a syntax for different kinds of polymorphism. Eg., for monads, we're basically just telling the compiler to change its implementation of `;`
What does it mean to join together units?
What is a context?
What is a single input function? A function with a single parameter?
Does multi input mean multiple parameters?
No, a functor is a "function" over types that can transport the arrows:
(A -> B) -> (F A -> F B) for a covariant functor
(A -> B) -> (F B -> F A) for a contravariant functor
With the sum and product there is also the exponential:
B^A = functions from A to B
"If I have a function A -> B, how could I possibly get something that goes in the direction F B -> F A out of it?"
The answer to this is: given e.g. a function that accepts a B, i.e. `B -> ...` you can compose it with `A -> B` on the _input side_, to get a function that accepts an A, i.e. `A -> B` (+) `B -> ...` = `A -> ...`.
Once you've managed to get that across, you can start talking about contravariant functors. But expecting people to just intuit that from the condensed type signature is pedagogical nonsense.
// Int -> Float
oneArgFn(x) = x + 1.1
// List of Int -> List of Float
ctxOneArgFn(xs) = ListFunctor(oneArgFn, xs) // i.e., map()

Oh ok. Plain english.
A monad is just a monoid in the category of endofunctors.
A monad is a special kind of a functor. A functor F takes each type T and maps it to a new type FT. A burrito is like a functor: it takes a type, like meat or beans, and turns it into a new type, like beef burrito or bean burrito.
An endofunctor is the category containing monoids such as the monad.*
*This is probably wrong. Please don't explain.
I note, in passing, that the actual guide is just titled "Functional Programming Jargon". It does not claim to be "in plain English".
You can buy the book or read it on GitHub:
Even if you've read the definition of functor first
> Lifting is when you take a value and put it into an object like a functor. If you lift a function into an Applicative Functor then you can make it work on values that are also in that functor.
Is a pretty rough sentence for someone not familiar. I think Elm does a pretty good job of exposing functional features without falling into using these terms for them, and by simplifying it all.
It does pay for that in terms of missing a lot of the more powerful functional features in the name of keeping it simple, but I do think it makes it a great entry-point to get the basics, especially with how good the errors are, which is very valuable when you are learning.
I know it's a controversial language on HN to some extent (I certainly have my own issues with it shakes fist at CSS custom properties issue), but I genuinely think it's a great inroad to functional programming.
But then again, I don't think the language maintainers et al. are too concerned with widespread success. Just some observations, but the majority of people I know that actively use FP languages are academics. I've encountered some companies that have actively gone with an FP language for their main one - but some have reverted, I guess due to the difficulty of hiring.
With that said - functional elements are becoming more common in widespread languages, but not all the way.
That's the math experience. I know people hate that and would much rather it be a matter of reading a nice explanation and then "aha", but there's no such explanation yet, and after years of people trying to develop one there's no point in expecting one right around the corner.
You might say "but it will make it more confusing for people who already know the FP terms", but those people are a tiny minority of programmers so it doesn't make sense to cater to them. At least if you want your language to be popular among anyone except academics.
> A homomorphism is just a structure preserving map. In fact, a functor is just a homomorphism between categories as it preserves the original category's structure under the mapping.
How about removing the "just":
> A homomorphism is a structure preserving map. A functor is a homomorphism between categories as it preserves the original category's structure under the mapping.
Much clearer. Although most readers would now ask what "structure" and "structure preserving" means, since this is never explained.
For such a reader, "a homomorphism is just a structure preserving map" makes it clear that "homomorphism" and "structure-preserving map" can be used interchangeably, and that by understanding one of the concepts, you'll immediately understand the other as well.
When you got rid of the word "just", you got rid of this connotation and changed the meaning of the sentences.
E.g. the sentence "a functional is a linear transformation" is correct; but not all linear maps are functionals, so writing "a functional is just a linear transformation" would be plain wrong in a mathematical setting.
The word "easy" is another example: saying "it is easy to show X" (just?) means that X can be derived from the already stated theorems in a more-or-less mechanical way without having to introduce new concepts. It does not in any way suggest that deriving this will be "easy" for a student reading the paper.
(Of course the most challenging "easy" parts are best left as an exercise for the reader anyway...)
Based on the author's public profile, I'm not convinced the author is a mathematician writing "... is just ..." as rigorous formalizations for a math-trained audience.
Instead, the "is just" phrases are innocently slipping into explanations as a subconscious verbal tic caused by the Curse of Knowledge. My previous comment on that phenomenon: https://news.ycombinator.com/item?id=28256522
(Also, as a sidebar followup to your comment, Wikipedia's page about homomorphism (https://en.wikipedia.org/wiki/Homomorphism) has this as the first sentence: "In algebra, a homomorphism is a structure-preserving map ..."
I'm guessing that a hypothetical edit to "a homomorphism is _just_ a structure-preserving map" -- in an attempt to add more refinement and precision to the definition... would be rejected and reverted back by other mathematicians.)
They generally reply in definite statements: "this is undecidable", "this is an example of X", "this can't be done without Y". They rarely say "X is just Y"; they would probably say "X is Y".
The "just" implies some form of detail that might be missing in the relation generally. Even my mathematical text books of pretty advanced topics rarely used "just".
Again, what you say is most likely true when math people talk amongst themselves but I don't think they do so with other non math people. I was in compsci.
Math students coming from another discipline?
Only some people will understand the technical meaning of "just" as intended
What is needed is "Mathematical English in plain English."
Never thought of that before, but it's certainly true. Always hated when we got cryptic explanations for the difficult things, and elaborate explanations for the stuff everyone understood anyway at the university. I guess professors have to explain things they don't fully understand from time to time
The former is clearer. Adjust your understanding of `just`, it's definition 4a at https://www.britannica.com/dictionary/just
https://github.com/hemanth/functional-programming-jargon/pul...
> In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces).
Not sure if this is rude, but many definitions there seem a bit... not plain English.
I am assuming we are aiming for something close to ELI5 if the title says plain English.
Of course that may not be what the author intended, but that's how I inferred the title.
I will see if I can find time to improve and send some PRs for the ones that are in deep need of simplification, hopefully OP is open to discussing changes.
Also take Arity: arity is not just for functions. It can sometimes be interchanged with rank and applied even to types: higher-ranked types, higher-arity types. I do understand they are not something most people like, because of their issues, but that's not a complete definition, so I assumed it was for the sake of simplifying the explanation for beginner programmers. [reference to issues with HRTs](https://www.sciencedirect.com/science/article/pii/S016800729...)
Again this is not meant to be rude to the author, just hopefully the title could be better formed to explain the intent of the work. Or my opinion might be minority and we can decide against it as well ofc.
• If you have a producer of Cs, you can turn it into a producer of Ds by post-processing its output with a function g: C → D.
• If you have a consumer of Bs, you can turn it into a consumer of As by pre-processing its input with a function f: A → B.
• If you have something that consumes Bs and produces Cs, you can turn it into something that consumes As and produces Ds using two functions f: A → B and g: C → D.
With pictures: http://mez.cl/prodcons.png
If you understand that, you understand functors:
• producer = (covariant) functor;
• consumer = contravariant functor;
• producer-consumer = invariant functor;
• post-processing = map;
• pre-process = contramap;
• pre- and post-processing = xmap (in Scala), invmap (in Haskell);
• defining how the pre- and post-processing works for a given producer or consumer = declaring a typeclass instance.
It doesn't mean that "a functor is a producer", but the mechanics are the same.
When I was a C programmer, I knew OOP could help me
When I was a JavaScript programmer, I knew TypeScript could help me.
I don't know how functional programming can help me, but I'll keep trying to find a reason because people say it can
A proper functional program can start to do very cool things safely: like hot reloading of code. When I'm debugging a Clojurescript app I can have a live running game, and update the physics without even reloading the page. It's all live.
A proper functional program really looks like a series of mappings from a collection of data sources to a collection of data sinks.
The keyword for this is referential transparency: https://www.braveclojure.com/functional-programming/
There are other benefits, like composability: designing your programs this way gives you access to algorithms that would otherwise not work with your code. The simplest example is map, filter, and reduce. These functions are by their very nature parallelizable, because a compiler knows that there are no intermediate steps, unlike with a for loop.
But I was minimizing state long before that because state makes any program much harder to understand.
Confessions of a Used Programming Language Salesman: Getting the Masses Hooked on Haskell http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72....
It’s quite technical even for experienced programmers.
class A:
    def foo(self, x):
        # do something
        pass

a = A()
foo = a.foo
foo(1)  # self is already "applied".

For example, lambdas can be translated to anonymous inner classes.
That's how Java 8 more or less implements them, no?
Comments:
1. It should explain map somewhere before it is used.
2. For the more abstruse and abstract concepts, a comment suggesting why anybody should care about this idea at all would be helpful. E.g., "A is just a name for what [familiar things] X, Y, and Z have in common."
3. It goes off the rails halfway through. E.g. Lift.
> A category in category theory is a collection of objects and morphisms between them.
What is a "morphism"?
I think this is a great starting point though, which could use some expansion.
A category in category theory is a collection of objects and morphisms between them. In programming, typically types act as the objects and functions as morphisms."
Much clearer now...
Definitely saving this for later.
Difficult to understand: likely yes, because the myth is not groundless. What "monads" capture is how to combine things with a lot of ceremony: (0) the things that we want to combine share some structure/properties (1) we can inspect the first thing before deciding what the second thing is (2) we can inspect both before deciding what the resulting combination is. What requires a lot of thought is appreciating why "inspect, decide, combine" are unified in a single concept.
Important: indeed, because in Haskell-like languages monads are pervasive and even have syntactic primitives. It's also extremely useful when manipulating concepts or approaching libraries that implement some monadic behaviour (e.g. promises in JS) because the "mental model" is rigorous. If you tell someone a library is a monadic-DSL to express business rules in a specific domain, you're giving them a headstart.
Some final lament: there's a fraction of people who found that disparaging (or over-hyping) the concept was a sure-fire way to yield social gain. Thus, when learning the concept of monads, one situational difficulty that we should not understate is that one has to overcome the peer-pressure from their circle of colleagues/friends. Forging one's understanding and opinions takes more detachment than the typical tech job provides.
You _can_ write Haskell code without understanding what a monad is, but composing and creating these things is going to be a little painful without that understanding.
Additionally, it seems to be a harder concept to grasp than e.g. functors or monoids.
I think this can be partly attributed to many Haskell programmers first being introduced to monads that are less than ideal for understanding the concept.
Shameful plug, I've written some thoughts on this here: https://frogulis.net/writing/async-monad
a = read()
b = read()
print("{b}, {a}")
One of Haskell's original goals was precisely to be lazy-by-default, which necessitated coming up with a way to solve this problem, and monads are the solution they came up with that gave us reasonable ergonomics. From a practical point of view, monads are just types that have reasonable implementations for three simple functions: `pure`, `map`, and `flatten`.

# lists as monads:
pure 1 = [1] # put a value "inside" the monad
map f, [1, 2, 3] = [f(1), f(2), f(3)] # apply the function to the "inside" of the monad
flatten [[1], [2, 3]] = [1,2,3] # take two "layers" and squish them into one
# also, the simplest, but least useful, way to use functions as monads:
pure 1 = (x -> 1) # putting a value inside a function is just giving you the constant function
map g, f = (x -> g(f(x))) # map is just composition
flatten f = (x -> f(x)(x)) # you squish by returning a new function that performs two nested calls
("reasonable" here largely means "they follow the principle of least surprise in a formal sense".)

The trick is that, once you know what monads are, you can use them in any language (with varying degrees of support), and you can see instances of them everywhere, and it's an incredibly useful abstraction. Many common patterns (like appending to a log, reading config, managing state, error handling) can be understood as monads, and compose quite well, so your program becomes one somewhat-complex data type, a handful of somewhat-complex functions that build an abstraction around that data type, and then lots of really small, really simple functions that just touch that abstraction. I have a .class parser written in Scala that exemplifies this general structure, need to put it up somewhere public.
For me the difficulty comes from the formal explanations/definitions, those always manage to confuse me. Result & Option types seem to have something to do with it so I may already have some understanding of the concept. But many explanations containing the word Monad also contain various other abstract mathematical terms. Trying to explain the concept to someone without mathematical background can be tricky.
Why does this come up so often when people talk about Java?
Because beginners are confronted with the IO Monad if they want to write a Hello world program.
Monad is a typeclass that any datatype can implement. Monads have a `then`- or `flatMap`-like function that takes a monad and a function; that function operates on the contents of the monad and returns another monad of the same type, which is then combined according to the implementation details of the specific monad that implements the typeclass.
In Haskell, you can't write a "Hello world" program without using monads, so you can't really avoid learning about them.
IMHO monads are only really useful in Haskell because it has specific built-in syntax sugar to support them. Without this syntax sugar, they would be very cumbersome to use. So it's not really the monad type per se which is interesting, it is the code style which the syntax sugar enables.
Also, speaking of memes: https://www.youtube.com/watch?v=ADqLBc1vFwI
Even if imperfect and I need to research further, this glossary provides a way to attach new knowledge to existing one.
This project should be exactly 1 (one) file. The readme.md.
LICENSE - There is a license? Why? Someone might steal the text for their own blog post? So what? The license won't stop them.
package.json - to install dozens of packages for... eslint. Just install globally. It's just markdown and code examples.

yarn.lock - ah yeah, let's have this SINGLE, NON-EXECUTABLE TEXT FILE be opinionated on the JavaScript package manager I use. Good stuff.

We have a .gitignore, just to hide the files eslint needs to execute. Wow.

FUNDING folder - wow, we have an ecosystem for stating the funding methods?
This should have never been a github repo. This is a blog post. It's a single, self contained post.
I hate this crap. We have 9 files just to help 1 exist. It's aesthetically offensive.
> LICENSE - There is a license? Why? Someone might steal the text for their own blog post? So what? The license won't stop them.
But it’s still good that the author underlined that he doesn’t want it to be copied. What’s wrong with that?
> package.json - to install dozens of packages for... eslint. Just install globally.
Then other contributors won’t know what version he used, what config he had, he won’t be able to easily recreate it on different computer, etc…
> Yarn.lock - ah yeah let's have this SINGLE, NON EXECUTABLE TEXT FILE be opinionated on the javascript package manager I use.
That’s author choice. Any good argument against it or you will just criticize for the sake of it?
> This should have never been a github repo. This is a blog post. It's a single, self contained post.
It’s a blog post with 270 different revisions, 80 contributors and a bunch of different languages. Show me how to easily do that with a blog post.
> I hate this crap. We have 9 files just to help 1 exist. It's aesthetically offensive
Why is the number of files offensive to you? We have a couple of tools, good at what they do, to keep things consistent and organized. Better to have these tools to keep standards than not.
Because functional languages do discourage mutation, they tend to fallback on an implementation of linked lists where the lists are never modified (You can google for Persistent Data Structures for more). Because the list never changes, you can have two lists share a tail, by building on two different heads atop that tail (like a git branch of sorts, minus the merging).
If you want to build a pure language, you can build immutable dynamic-sizeable arrays with a copy-on-write setup. Growing the array keeps building on top of the existing buffer, if you outgrow it or need to actually mutate it, you make a copy.
If you want both purity and actual full-on mutability, e.g. Haskell has Data.Array.MArray, which locks the array behind a mutable context (like IO), so your pure code never sees any mutation.
If you want to make a glossary, at least try to be precise.
Could need a bit of clean up. Some points could need more explanation, simpler examples, and every point could follow the same structure.
But for a beginner, I think it's a pretty good starting point.
I especially like the examples given that remove ambiguity in explaining the concepts.
Though I think the closure example doesn't actually show capturing context. It's just a partial function application unless you count the literal '5' as a local variable.
Constant Functor
Object whose map doesn't transform the contents. See Functor
Constant(1).map(n => n + 1)
Ummm… how is this constant? constant.map(...) == constant
Constant functors are only ever useful if you need to thread a simple value through code that asks for an arbitrary functor. I'd say that's rare even for abstract, type-level-heavy code.

Give me an object-oriented language any day. The world is made of state and processes; (modern, niche) functional programming goes too far to derecognise the value of state in our mental models.
The good thing about functional programming is stressing to avoid side effects in most of the code and keep it localised in certain places…
In functional programming the jargon might be foreign, but they correspond with the math they came from. You also have clear rules and clear situations where you can apply all those things.
In object orientation, however, we have mostly made-up jargon (from patterns and from SOLID), the "rules" for the application of those are completely fuzzy, arbitrary and broken most of the time, and even the categorisation of those is mostly made-up. It's pseudoscientific-mumbo-jumbo compared to Monads and other FP silly-named stuff.
Imperative programming isn't.
OOP is a failed metaphor unless you use composition, not inheritance; even then, the actual basis for OOP was about the messages between objects, not the internals.
> The world is made of state and processes
No, the world is made of objects that have state and messages (events) between them.
The real problem, though, is that FP doesn’t do anything well. It’s never the fastest method of programming, which means that it needs to excel in some other way for its proponents to be right about it. Is it the most maintainable? Maybe if you have zero side effects but then any paradigm would be in that case. Once you introduce state, it becomes a nightmare to maintain, unlike OOP. It’s certainly not the most readable.
The world is a complicated place with complicated problems. Complicated problems are always solved by splitting them into different sub-problems, each routinely of radically different character than the outer problem, other sub-problems, or its own sub-sub-problems.
So, a language "oriented" for any one of those is a terrible choice for others. To solve any whole problem, your language should support all the styles that might be useful. Often, a sub-problem falls to a style combining a little of this style and a little of that. Styles should not be straitjackets.
Generally, it is better if the language is powerful and expressive enough that support for a style is not built in, but can be provided by a library. Any fancy built-in feature is always an admission that the core language wasn't strong enough to express it in a library.
That is why weak languages so often have fancy core features (e.g. dictionary): without, you would be left with no support. This should make us suspicious of a "match" feature.
A more powerful language lets you pick from variations on the fancy feature in a library or libraries. And, the library might define a new style useful for the sort of problems the library is designed for.
On the other hand, the original Design Patterns book was published in 1994. I'm still waiting for its FP equivalent.
Either define concretely what the world is made of (i.e. particles and what they do), or don't use this sentence. Currently you just say "wow this is so abstract, it's actually <something else that's also abstract>". Turns out neither paradigm has anything to do with real life, they have their own niches and their own place within different contexts.