[1] https://en.wikipedia.org/wiki/Outline_of_algebraic_structure...
commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ finite fields
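One concrete way to see why "Euclidean domain" sits where it does in this chain: a Euclidean domain is exactly a domain with a division-with-remainder whose remainder has strictly smaller "size", and that is precisely what makes the gcd algorithm terminate. A minimal sketch (in Python for illustration; the function name is mine):

```python
# Sketch: the integers form a Euclidean domain -- there is a "size"
# (absolute value) and division with remainder such that the remainder
# is strictly smaller. That is exactly what the Euclidean algorithm needs,
# and it is why every Euclidean domain is automatically a PID and a UFD.

def euclidean_gcd(a: int, b: int) -> int:
    """gcd via repeated division with remainder."""
    while b != 0:
        a, b = b, a % b   # |a % b| < |b|: the Euclidean "size" strictly decreases
    return abs(a)

print(euclidean_gcd(252, 105))  # -> 21
```

The same recursion works verbatim in any Euclidean domain (e.g. polynomials over a field, with degree as the size), which is the point of singling the class out.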
Found here: https://en.wikipedia.org/wiki/Integral_domain

I really think names without a fairly obvious way of saying them out loud should just be avoided/deprecated. I can't pronounce this as "ring" (because then you'd think I was referring to a ring, not a rng). Do I just spell it out? R-N-G? I'd rather just call it a non-unital ring. That involves more letters but is much more descriptive (to a mathematician, anyway).
"If we may speak frankly, the rig and rng nomenclatures are abominations. Nevertheless, you may see them sometimes, but we will speak of them no more."
:)
But 'nearring' doesn't bother you? If I'm at a talk on rngs where the distinction from rings is important, then I'll know that's the kind of talk that I'm attending, and hear accordingly; or else the speaker will make a huge deal of it (and then probably not use the word).
If it's so important to have a "rng" instead of a "ring", then it would probably be better to show the name being used in an actual example of its usage.
I'm finding category theory to be almost a theory of program structure in the context of composition. It's giving me a whole new perspective on one of the least concrete things about programming, namely design.
How relevant is abstract algebra to programming? Will it change my perspective on everything related to programming? How much of a mind bender is it compared to category theory?
I would consider myself a fairly expert Haskell programmer, and I have (for fun/curiosity) spent some time reading up on/studying category theory, and I can say without a doubt or hesitation that if your goal is to either 1) understand Haskell better and/or 2) become better at writing Haskell, then studying category theory is a MAJOR waste of your time. I would advise you to spend that time instead on reading up on the lambda calculus, type theory, and some basic algebra (like this post).
Haskell is not, and has never been, based on category theory (and I keep being baffled by how many people on social media will claim that it is, given how well documented its origins are), and the common terminology of Functor/Monad that has been pilfered from CT via Wadler has only a passing resemblance/relation to their CT counterparts.
Some things that I instead would recommend reading up on are:
EDIT: All of the above is not to say that you shouldn't learn category theory, but that you should have realistic reasons/expectations (even if that reason is just "I'm curious and it's cool"). I just hate seeing people get burned out trying to "get" category theory and (as a result) deciding Haskell must not be for them...
- Type theory (Benjamin Pierce's "Types and Programming Languages" is the de facto introduction to this. It covers everything from untyped lambda calculus to things way more complex than standard Haskell, including example implementations of type checker, etc.)
- Computer assisted proofs/formal verification of programs (the Software Foundations book series, co-authored by Pierce are a good (and free!) intro: https://softwarefoundations.cis.upenn.edu/)
- The Spineless Tagless G-machine (if you are a more low level/C minded person, this talks about how we compile a lazy functional language like Haskell to an Intel CPU: http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.37...)
- The Typeclassopedia (which talks about how various CT inspired classes relate to each other and their laws: https://wiki.haskell.org/Typeclassopedia)
I have to say, though, I can't agree with you on category theory yet, especially the part about functors. The fmap for a functor f looks to me to be exactly the morphism-mapping part of a functor in the CT sense: it sends morphisms of the source category to morphisms in the image of f. The resource I am reading on category theory makes it seem highly relevant to Haskell and software design in general. I'm curious as to your opinion on why the functor in Haskell only has a passing relationship to functors in category theory.
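To make the comparison concrete: a CT functor must preserve identities and composition, and these are exactly the Haskell Functor laws (fmap id == id; fmap (g . f) == fmap g . fmap f). A rough sketch, modeling the list functor in Python purely for illustration (the helper names are mine):

```python
# Sketch: Haskell's fmap for lists, modeled in Python. The list functor
# sends a function (morphism) f to a function on lists; the asserts below
# check the two functor laws: identity and composition preservation.

def fmap(f, xs):                       # the list functor's action on a morphism f
    return [f(x) for x in xs]

identity = lambda x: x
compose = lambda g, f: (lambda x: g(f(x)))

xs = [1, 2, 3]
f = lambda x: x + 1
g = lambda x: x * 2

assert fmap(identity, xs) == xs                          # fmap id == id
assert fmap(compose(g, f), xs) == fmap(g, fmap(f, xs))   # fmap (g . f) == fmap g . fmap f
```

The usual caveat (and perhaps the parent's point) is that Haskell's Functor class only captures endofunctors on an idealized category of Haskell types, not functors between arbitrary categories, and the laws are a convention the compiler cannot check.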
What is it called when this is generalized? E.g. call + op1, call * op2, call ^ op3. What would op0 be? And what would op0.5 be?
How does the unit element for these operations behave?
And the rules for associativity, commutativity, for increasing order of the operation?
There's a number known as Graham's number [1] which is defined in terms of up-arrow notation and was for a while the largest specific positive integer to have been used in a mathematical proof.
[1] https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation
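The generalization asked about upthread is the hyperoperation sequence: op0 is the successor, op1 is addition, op2 is multiplication, op3 is exponentiation, op4 is tetration, and so on (up-arrow notation covers level 3 and up). There is no standard meaning for a fractional level like "op 0.5". A sketch of the standard recursive definition, in Python for illustration:

```python
# Sketch of the hyperoperation sequence (standard recursive definition):
# H(0) = successor, H(1) = +, H(2) = *, H(3) = ^, H(4) = tetration, ...
# Identities: 0 for addition, 1 for multiplication; from level 3 up only a
# right identity (b = 0 gives 1), and associativity/commutativity both fail.

def hyper(n: int, a: int, b: int) -> int:
    if n == 0:
        return b + 1                  # op0: successor (ignores a)
    if n == 1:
        return a + b                  # op1: addition
    if n == 2:
        return a * b                  # op2: multiplication
    if b == 0:
        return 1                      # right identity for levels >= 3
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(3, 2, 10))   # 2^10  = 1024
print(hyper(4, 2, 3))    # 2^^3  = 2^(2^2) = 16
```

The values explode immediately (hyper(4, 3, 3) is already a tower 3^3^3 = 3^27), which is the same growth that makes Graham's number so large.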
This might sound harsh, but unfortunately it does tend to attract 'cranks'. I think the reason for this is that there's a clear pattern (as you picked up on) that doesn't require formal mathematical training to spot. Amateurs get excited about the prospect of discovering something 'new', without realising how hard it is to say anything deep about the topic.
https://www.amazon.com/First-Course-Abstract-Algebra-7th/dp/... (the 1-star reviews apply to the Kindle version, not the contents. Just get the paperback and you'll be fine)
If you want a CS approach I suggest learning basic Haskell then tackling the fantastic Typeclassopedia. The downside is you'll be missing out on structures/theorems that are super useful in math but not that useful in programming.
There's also an old textbook by Birkhoff & Bartee called 'Modern Applied Algebra' that I like. It's not super duper computer science-y but might be worth browsing through the table of contents at least.
Edit: I snapped some photos of the table of contents since you can't see it on the amazon page: