What is the beautiful monospace font in the pdf here http://brendanfong.com/programmingcats_files/cats4progs-DRAF...?
OMG. That might just be the funniest god-damned thing I've ever seen in my life. Thanks for sharing that!
On a separate note: does anyone know where this footage is originally from? Some WWII movie, I would guess?
Probably one of the best "WWII movies" (it's really only about the final days in the bunker) ever.
Jokes aside, a book like CTM shows both paradigms have their place, and can also co-exist in the same system.
Besides, this paper cleared all my doubts about higher-order FP having a lot of (unexploited) potential:
This made me laugh way more than it should have. "Why don't you pattern match my fist all over your faces!"
Alright, maybe I will keep reading this book.
t1xtt, from the txfonts package, freely available
Hitler: What's a monad anyway? No one who understands monads can explain what they are
Underling (hurriedly): A monad is just a monoid in the category of endofunctors
This caused me to choke on my coffee. A helpful analogy can be drawn by comparing two facts: composing something with its inverse produces the identity (a.k.a. unity, to use the Latin root), while composing something (e.g. a functor) with its "adjoint" (not quite an inverse) produces something similar that is better said in Greek.
Gotta be able to laugh at yourself sometimes.
When the course was previously taught, people would remain afterwards to ask questions, discuss math, and chat. So this was an iterative-improvement formalization of that.
People would gather in front of the blackboards in fluid discussion clusters, catalyzed by the three instructors and wizzy others, not all having to stay for the entire hour, but drifting off as discussion died away. They could show material they had pruned from the lecture for want of time, or that got dropped as they ran over, or alternate presentation approaches they had considered before selecting another. They could be much more interactive. One commented, roughly, "If I was tutoring someone, I'd never present the material this way". It was a delightful mix of catching the speaker after a talk to ask questions, a professor's office hours, a math major's lounge, an after-talk social, tutoring, an active-learning inverted classroom, hanging out with neighbors in front of the hallway blackboard, chalk clattering and cellphones clicking to snag key insights... It was very, very nifty.
So, the book is nice. And lecture notes. And videos. But... the best part isn't there. Perhaps the next iterative improvement is to capture the aftermath on video, and share that too.
And as we look ahead, planning distance-learning and XR tools... maybe something like this is a vision to aspire to. The insane ratio of expertise to people learning is not something one can plausibly replicate in meatspace. But as conversations in front of a virtual blackboard gradually become technically feasible, something like this might pay for the cost of it, with transformative impact.
Why are you paying for that nonsense?
https://github.com/zio/zio-prelude
It's a brand-new library for Scala that contains reusable mathematical structures. It's still based on algebra and category theory, but it expresses them differently from how they've been expressed in Haskell (and similar languages). For example, unlike Haskell (and Scala's own cats and Scalaz), it doesn't present the "traditional" Functor -> Applicative -> Monad hierarchy. Instead, it presents the mathematical concepts in a more orthogonal and composable way.
One example out of many: you don't have a Monad. You have two distinct structures:
* A covariant functor, with the typical map operation `map[A, B](f: A => B): F[A] => F[B]`
* IdentityFlatten, which has a flatten operation `flatten[A](ffa: F[F[A]]): F[A]` and an identity element `any: F[Any]`
When combined (Scala has intersection types), you get something equivalent to the traditional Monad.
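As a rough illustration of the idea, here's a minimal sketch (not the actual zio-prelude API; the trait and method names just follow the comment above) showing how the two structures together recover flatMap for Option:

```scala
// Sketch: splitting Monad into two orthogonal capabilities,
// instantiated for Option. Names mirror the comment above,
// not any real library.

trait Covariant[F[_]] {
  def map[A, B](f: A => B): F[A] => F[B]
}

trait IdentityFlatten[F[_]] {
  def flatten[A](ffa: F[F[A]]): F[A]
  def any: F[Any]
}

object OptionInstances {
  // Option satisfies both structures independently.
  val covariant: Covariant[Option] = new Covariant[Option] {
    def map[A, B](f: A => B): Option[A] => Option[B] = _.map(f)
  }

  val identityFlatten: IdentityFlatten[Option] = new IdentityFlatten[Option] {
    def flatten[A](ffa: Option[Option[A]]): Option[A] = ffa.flatten
    def any: Option[Any] = Some(())
  }

  // Combining them recovers the usual flatMap: map, then flatten.
  def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] =
    identityFlatten.flatten(covariant.map(f)(fa))
}
```

So `OptionInstances.flatMap(Option(2))(x => Option(x + 1))` behaves just like the built-in `Some(2).flatMap(...)`.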
The project is in its infancy, though, so it may still change significantly. Look here for a more detailed explanation:
https://www.slideshare.net/jdegoes/refactoring-functional-ty...
Btw, this is a problem that the "selective applicative functor" also aims to alleviate.
You can read more about the inspectability problem (and Selective) at http://eed3si9n.com/selective-functor-in-sbt
The context there is sbt, which is a build tool, but I'm sure inspectability plays a role in many other areas. Note how he contrasts "Applicative composition" and "Monadic composition".
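To make the inspectability contrast concrete, here's a toy sketch (all names invented; this is not sbt's or any library's API): an applicative-style task carries its dependency list as data, so it can be inspected without running anything, while a monadic-style task hides later dependencies inside a function.

```scala
// Applicative-style: dependencies are plain data, known up front.
final case class ATask[A](deps: List[String], run: () => A) {
  def map2[B, C](that: ATask[B])(f: (A, B) => C): ATask[C] =
    ATask(deps ++ that.deps, () => f(run(), that.run()))
}

// Monadic-style: only the first dependency is visible; everything
// after the flatMap is opaque until we actually execute.
final case class MTask[A](firstDep: String, run: () => A) {
  def flatMap[B](f: A => MTask[B]): MTask[B] =
    MTask(firstDep, () => f(run()).run())
}

object Demo {
  val compile = ATask(List("sources"), () => "classes")
  val docs    = ATask(List("sources"), () => "api docs")

  // Applicative composition: the full dependency list is statically
  // available in `packaged.deps`, before anything runs.
  val packaged = compile.map2(docs)((c, d) => s"$c + $d")
}
```

A build tool can read `Demo.packaged.deps` to plan or parallelize work; with `MTask` it could only discover dependencies by running the tasks.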
And yet, current mathematics relies heavily on ZFC set theory. Why is that? (Is that assumption even correct?)
From what I've learned so far, set theory suffers from Russell's Paradox[0] (does the set of all sets that do not contain themselves contain itself?). That's what motivated the formalization of type theory and the invention of type systems in programming languages.
According to Wikipedia[1], some type theories can serve as an alternative to set theory as a foundation of mathematics.
It seems to me that category theory fits the description. So why don't we see huge "adoption" of it in math fields?
Thank you in advance for your clarifications :)
[0] - https://en.wikipedia.org/wiki/Russell%27s_paradox [1] - https://en.wikipedia.org/wiki/Type_theory
There are people working on categorical foundations, but the main reason for lack of broader popularity or drive is that most mathematicians don’t do work that is “foundational” in that sense. For example, if you’re an analyst, you generally don’t care exactly how your real numbers are built (Dedekind cuts, Cauchy sequences, etc.). You only care that they satisfy a certain set of properties, that you can define functions between them, and that’s about it. Most mathematical reasoning at this level is insensitive to differences in proposed foundations, except “constructive” foundations that lack the law of excluded middle (which are unpopular since they make many proofs harder and some impossible).
What is very popular in mathematics is using category theory at a high level; basically every field uses its concepts and notation to some degree at this point. New foundations are only really relevant to this program in that they may make automated proof checking easier, which is also not a widespread practice in mathematics.
In fact, I would suggest that most mathematicians don't care about category theory at all.
The thing to keep in mind is that a mathematical structure can be expressed as an instance of any number of more general mathematical structures ("mathematical machinery"). The structures that are useful, however, are those that are "illuminating", a somewhat vague criterion, but one which generally means the structure facilitates and unifies proofs of important theorems. Wiles' proof of Fermat's Last Theorem showed the value of many forms of this mathematical machinery [1], including category theory.
At the same time, I think things like Gödel's results on proof lengths and Chaitin's Omega constant suggest that some number of "important" theorems within a given axiomatic system will have long proofs that don't necessarily benefit from the application of a given piece of mathematical machinery, whatever that machinery is.
And I think that's related to efforts to apply category theory to programming via functional programming. I feel like it's an effective method for certain kinds of problems, but you get a certain problematic "it's the best for everything" evangelism that doesn't ultimately do the approach any favors.
[1] https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_...
So-called "naive set theory" does, but my understanding is that this paradox (and others like it) sparked a crisis in mathematics that led to the creation of ZFC and others. ZFC is not known to be inconsistent (due to deep and technical reasons, I can't claim positively that it is consistent), and it was designed to avoid the paradoxes that plagued naive set theory.
Russell's type theory was another early contender for a formalism. For whatever reason (I'm not aware of all the details), Zermelo-Fraenkel set theory (later extended with the Axiom of Choice) won the popular mindshare. Personally, I'm more a fan of the Elementary Theory of the Category of Sets (ETCS) [1], which is indeed drawn from category theory. But it's equivalent in deductive power to ZFC, so what you pick mostly only matters if you're doing reverse mathematics. (This is what really answers your question, I think.)
Russell's type theory and modern type theories are distinct (and there is no single "type theory"). I'm led to believe the main commonality is the use of a hierarchy of universes, so that entities in one universe can only refer to entities in a lower universe.
[1] "Rethinking set theory", Tom Leinster: https://arxiv.org/abs/1212.6543
That was my understanding as well. Axiomatic Set Theory, with ZFC, doesn't suffer from the same problems as Naive Set Theory, which is why it was developed. Or so I've read.
Set theory is the de facto modern standard because it has straightforward notation and sufficient power to prove the things most mathematicians care about, both algebraically and analytically. Other foundations exist, it's just that most mathematicians work at a level where set theory is more of a communication and notation tool than a foundation for which the nuances matter.
> From what I've learned so far, set theory suffers from Russell's Paradox[0] (does the set of all sets that do not contain themselves contain itself?).
No, naive set theory suffers from Russell's Paradox, but ZFC does not. ZFC was defined in order to avoid Russell's Paradox.
> That's what motivated the formalization of Type Theory and the invention of Type Systems in programming languages.
Russell's type theory, yes, which predates ZFC. But this has nothing to do with programming languages, which weren't a thing when this problem was being considered.
As for why category theory isn't used as a foundation - that's not really what category theory's "charter" is about. As user johncolanduoni explained, category theory is more of a framework for describing things with convenient algebraic abstractions than it is a foundation for mathematics. Category theory as practiced today is mostly done at a higher level than the foundational set theory.
1. I thought mathematics was moving more towards a category theory based foundation, as opposed to the set theoretical stuff.
2. Isn't it in part because we have over 120 years of results and proofs that are based on set theory, and people aren't just going to throw that out for something newer? Granted, Category Theory has been around in some form since about the 1940's, but it's still newer than set theory.
3. Russell's Paradox hasn't stopped people from using Set Theory to achieve useful results, and the axiomatization of Set Theory and the advent of ZFC is why (or at least part of why) that was possible, no?
2. ZFC is fully adequate for all the mathematics 99% of professional mathematicians do (and probably more than adequate in terms of strength). Also, all the mathematics 99% of mathematicians do is insensitive to foundational issues, so if you swapped ZFC for another axiomatization of similar strength, no one would notice.
3. I don't believe ZFC was the first to avoid the paradox, but yes, it does not suffer from Russell's Paradox.
Because most math fields work at a higher level, and whether they consider set theory or category theory foundational doesn't matter. For instance, the defining property of ordered pairs is: `(a, b) = (c, d) iff a = c and b = d`. `{{a}, {a, b}}` is a (set-theoretic) model for ordered pairs, but it's not the only model. As long as a model exists, the question of "which model you're using" isn't relevant.
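A quick illustration of the "many models, one defining law" point, as a toy sketch restricted to pairs of Ints: both a plain product type and a Kuratowski-style set encoding satisfy the defining law, and code that relies only on the law can't tell them apart.

```scala
// Model 1: the obvious product type.
final case class Pair1(fst: Int, snd: Int)

// Model 2: a Kuratowski-style encoding {{a}, {a, b}} using sets.
def pair2(a: Int, b: Int): Set[Set[Int]] = Set(Set(a), Set(a, b))

// The defining law: (a, b) = (c, d) iff a = c and b = d.
// Both models should validate it for every choice of a, b, c, d.
def lawHolds(a: Int, b: Int, c: Int, d: Int): Boolean = {
  val m1 = (Pair1(a, b) == Pair1(c, d)) == (a == c && b == d)
  val m2 = (pair2(a, b) == pair2(c, d)) == (a == c && b == d)
  m1 && m2
}
```

Checking `lawHolds` over a small exhaustive range (including the degenerate case a = b, where the Kuratowski encoding collapses to `{{a}}`) confirms both models behave identically with respect to the law.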
Not really. Most working mathematics is done without any particular foundational context (in fact, it's probably done in naive set theory). If you asked a mathematician, they'd tell you they were using ZFC, but only because everyone does; in practice they don't know or care what foundations they're working with.
> From what I've learned so far, set theory suffers from Russell's Paradox[0] (does the set of all sets that do not contain themselves contain itself?). That's what motivated the formalization of type theory and the invention of type systems in programming languages.
Naive set theory suffers from that paradox. ZFC is a minimal "patch" that avoids it (through the rather ugly hack of an axiom schema).
It shouldn't surprise any programmer that most people working in the field would rather use a bodge that lets them use all their existing theorems and avoid having to learn anything new, rather than a new foundational paradigm.
People who care deeply about foundations are working on things like HoTT. But it's just not that important to day-to-day working mathematicians.
ZFC is a bit messy sometimes, but it's actually kind of nice in its simplicity. I am a big fan of TLA+, for example, and it's really kind of beautiful how easily you can define incredibly precise invariants by using the set theory predicate logic.
This is purely secondhand, of course, and I forget almost all of the details. Contemporary set theory is scary scary stuff.
(Of course there is a lot of stuff in set theory that is counter-intuitive when you get to infinite sets, but I still think that's better than what you have with categories where they're just abstract algebraic objects and there isn't any intuition in the first place.)
It's true that you can get away from Russell's paradox in category theory, but that's because categories are not sets and have different properties (there's less you can say about them because they're more general).
Type theories, particularly homotopy type theory, are becoming popular among some mathematicians, but not because they escape Russell's paradox (I'm not sure that they can say anything more about sets than set theory), but because they are more useful when constructing automated proof systems.
Edit: also, category theory is used a lot in very cutting edge math, like algebraic topology.
I don't think that's right. The original invention of type systems in programming had nothing to do with Type Theory. The motivation wasn't from any theory. The point was to tell Fortran whether to treat a value as an integer or a floating-point number, which affected storage format and a bunch of other things. (Before Fortran, there was floating-point arithmetic, but the type was still just a computer word. The programmer had to keep track of what format it was in.)
This is a fantastic "side effect" of the fact that category theory isn't built on any other mathematical knowledge. You don't even need any arithmetic for it.
Pretty wild ideas, though. Fair chance they'd be more confusing than using more specifically named instances, but solid ideas where the class instance documents that you're using the pattern, instead of describing the preferred interface.
I suppose it might help sometimes when designing an abstraction, to guide you to some nice properties, such as easy composition.
You can write Monoid and Foldable interfaces in Java, but in Haskell, finding the head of a list with a monoid and foldMap is constant time (and the first or last element of a tree is logarithmic), whereas a Java implementation would be linear.
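To illustrate the short-circuiting that Haskell's laziness gives foldMap for free, here's a sketch in Scala that emulates it with explicit thunks (all names invented): a lazy right fold with a "First" monoid finds the head without touching the rest of the list, even an infinite one.

```scala
// A "First" monoid: keeps the leftmost defined value.
final case class First[A](get: Option[A])

// Combining only forces the right side when the left hasn't "won" yet.
def combineFirst[A](x: First[A], y: () => First[A]): First[A] =
  if (x.get.isDefined) x else y()

// A lazy right fold: the recursive call is wrapped in a thunk, so the
// combining function decides whether the tail is ever evaluated.
def foldRightLazy[A, B](xs: LazyList[A])(z: => B)(op: (A, () => B) => B): B =
  if (xs.isEmpty) z else op(xs.head, () => foldRightLazy(xs.tail)(z)(op))

// head via foldMap-into-First: stops at the first element.
def headViaFoldMap[A](xs: LazyList[A]): Option[A] =
  foldRightLazy(xs)(First[A](None))((a, rest) =>
    combineFirst(First(Some(a)), rest)
  ).get
```

Because `combineFirst` never forces its second argument once it has a value, `headViaFoldMap(LazyList.from(1))` returns `Some(1)` immediately instead of diverging; a strict linear fold over the same structure never would.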
If you haven't read it, take a minute...
Category theory could be the Rosetta Stone of many things (this is relevant: https://arxiv.org/pdf/0903.0340.pdf).
You are going to have to revive Jesus to come up with a functional language that simplifies coding over a procedural one, especially over an object-oriented one.