Bob Harper (a CMU professor with a focus on PL theory) also has a really good SML reference that is closer to a textbook than lecture notes.
I have not read Dr. Harper's book cover to cover; I treat it as a reference. For that, it is quite useful.
The same criticism of his SML book could also be made of his Practical Foundations for Programming Languages book. It's a massive, dense tome that is probably great for other professors to use as the basis for a course, but (in my opinion) serves as a mediocre introduction if you have never seen the material before.
Some assembly required: use the instructions from https://github.com/jacobneu/alleycat as inspiration, along with the scripts (e.g. startLecture) and the Makefile in /crucible.
For instance, here's the SML code for it:
```
datatype exp =
    Num of int
  | Plus of exp * exp
  | Minus of exp * exp
  | Times of exp * exp
  | Div of exp * exp
```

Implement the function `eval : exp -> int`, which evaluates the expression as best as it can. Assume no division by zero.
Extra credit: Can you implement `eval' : exp -> int option`, that returns `SOME n` if the expression evaluates, and `NONE` if it divides by zero?
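For reference, here's one possible solution, sketched in Haskell rather than SML (the structure carries over directly); `eval'` threads the possibility of failure through with `Maybe`:

```haskell
-- A sketch of the exercise, translated to Haskell for illustration.
data Exp
  = Num Int
  | Plus Exp Exp
  | Minus Exp Exp
  | Times Exp Exp
  | Div Exp Exp

-- Assumes no division by zero, like the original exercise.
eval :: Exp -> Int
eval (Num n)     = n
eval (Plus a b)  = eval a + eval b
eval (Minus a b) = eval a - eval b
eval (Times a b) = eval a * eval b
eval (Div a b)   = eval a `div` eval b

-- Extra credit: Nothing plays the role of SML's NONE.
eval' :: Exp -> Maybe Int
eval' (Num n)     = Just n
eval' (Plus a b)  = (+) <$> eval' a <*> eval' b
eval' (Minus a b) = (-) <$> eval' a <*> eval' b
eval' (Times a b) = (*) <$> eval' a <*> eval' b
eval' (Div a b)   = do
  x <- eval' a
  y <- eval' b
  if y == 0 then Nothing else Just (x `div` y)
```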
It looks like their current workflow keeps exams and homeworks off the internet effectively, but there's a 6-year-old codebase at https://github.com/zhengguan/15150-1 with 10-year-old homeworks and such.
A λ-calculus interpreter can be used as an intermediate-level exercise. It is particularly valuable for solidifying one's understanding of functional programming.
You can also use "standard" textbooks, such as SICP [0], and do the exercises in the language of your choice instead of Scheme/Lisp.
[0]: https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
Should have at least differentiated between "functional programming" and "pure functional programming" IMHO.
As other posters have said, strong typing is also a nice property for lots of reasons; most notably, it gives a platform to talk about ad-hoc and parametric polymorphism.
(I lecture on Functional Programming at the University of Warwick, where we use Haskell.)
The second is that languages like Lisp, SML, C, Pascal and BASIC all have referentially transparent and referentially opaque positions in exactly the same way that languages like Haskell do.
This means that all these languages enjoy referential transparency in the same way, because when you unpack the notion of equivalence, referential transparency itself is within a whisker of being a tautology: if a is equivalent to b, then you can substitute a for b or b for a. The relevant sense for "is equivalent to" can really only be contextual equivalence, which is all about meaning-preserving substitutability.
That said, not having to reason about effects within one's program equivalence sure makes things simpler in a pedagogical setting. But that's not to do with referential transparency per se.
Scheme (and its dialects/descendants) tend to stick to a more functional style (although they also like to do stack-gymnastics with continuations, etc.). Many courses are based around Lisp.
One of the main features of the ML family is static typing, with algebraic datatypes, pattern-matching, etc. (i.e. the stuff that new languages like to call "modern", because they first saw it in Swift or something). That gives a useful mathematical perspective on code ("denotational semantics", i.e. giving meaning to what's written; as opposed to the common "operational semantics" of what it made my laptop do), and having type checking and inference makes it easier to do generic and higher-order programming (dynamic languages make that trivial in-the-small, but can make large systems painful to implement/debug/maintain/understand). This course seems to take such abstraction seriously, since it covers modules and functors too (which are another big feature of the ML family).
NOTE: In ML, the words "functor" and "applicative functor" tend to mean something very different (generic, interface-based programming) from their use in similar languages like Haskell (mapping functions over data, and sequencing actions together).
The instructor is clear, energetic, has a decent flow in the first lecture e.g. emphasizes the important points with vigor and repetition, switches media every 10 or 15 minutes, has a conversation with students. Slides are relatively noise-free and are informative; terms to know are highlighted.
I haven't watched a full functional programming lecture series yet, but I'd gladly audit this one.
As a "new" instructor (as in, not a TA anymore) he's got a bit of teaching talent. I hope he keeps a tight feedback loop and improves every year and continues publishing material.
I think there are a lot of nicer choices:

OCaml: it's basically SML, only more popular and used more in real life
Haskell: again, more popular, and used more in real life
Idris: newer and said to be more progressive
F#: a more practical choice, and similar to SML
a Lisp: well, if you want to focus on the functional part and less on the types part

One such reason is historical. Standard ML is a research language, and a significant amount of work on it was done by professors at Carnegie Mellon, who developed the curriculum for this course.
Even setting that aside, though, I fully agree with the choice to teach it in SML. For transparency: I work professionally in OCaml, so I am not unfamiliar with it, and I enjoy it quite a bit. That said, I think the approach taken by CMU is best summarized as: languages are ephemeral, and concepts are what matter. We don't teach programming languages, we teach concepts -- so even if SML is not widely used, the tradeoff of giving students a simpler, less distracting, and better learning experience is well worth it.
OCaml has its own intricacies that make things difficult. For instance, you can go down a lot of rabbit holes with `dune` and `utop` and `ocamlc` and `ocamlopt` and all of these things, versus SML/NJ's simple interactive REPL. Another thing is that the language is just generally more "bloated" -- you can teach modules, but then what if a student starts running into first-class modules, recursive modules, or even beyond that, GADTs and classes and objects?
(as an aside, this is my primary reason for why I would not want to teach an introductory course in Haskell. To do anything, you suddenly need to understand the concept of type classes and lazy evaluation, and that's simply too much. I don't know much about the other languages.)
I think teaching is as much about enabling students to succeed as it is about preventing them from shooting themselves in the foot. As an anecdote: SML has an `Option.valOf` function (of type `'a option -> 'a`) which raises an exception on `NONE` -- essentially a bad function that should be avoided where possible. Every semester, without fail, even though we never tell students that function exists, students are smart enough to use Google, and will use it anyway, ultimately harming themselves.
I think that same mentality applies to programming language choice, here. Keep it simple, keep it neat, and make sure that the students see what is necessary for their education, and not have to spend mental energy thinking about much more.
Yes, I broadly agree with the line of thinking that says: if you are learning FP, you must be willing to do these things. But a beginner might be disheartened and turned away, especially someone learning alone.
Every winter break I get back into trying to learn more FP (in Haskell) and in the past several years I have been practicing algo problems (codeforces, advent of code, leetcode).
I always get stuck on the more advanced graph algorithms, where you traverse and modify a graph rather than a tree structure. It gets particularly tricky to work with circular data structures (I learned about "tying the knot", but it's incredibly challenging for me), and usually the runtime performance is sub-par, both asymptotically and empirically.
That said, as a beginner in functional programming, it would probably be good enough if you just focus on directly translating imperative graph algorithms to functional programming. You simply solve the problem by programming at a slightly lower level of abstraction.
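As a sketch of what that direct translation can look like (in Haskell, with a hypothetical adjacency-map representation of the graph), here is breadth-first search threading an explicit visited set instead of mutating one:

```haskell
-- A minimal sketch: BFS translated directly from the imperative version.
-- The mutable "visited" set becomes an accumulator threaded through `go`;
-- cycles are handled because neighbors are filtered against it.
import qualified Data.Map as Map
import qualified Data.Set as Set

type Graph = Map.Map Int [Int]

bfs :: Graph -> Int -> [Int]
bfs g start = go (Set.singleton start) [start]
  where
    go _    []        = []
    go seen (x:queue) =
      let next = [ n | n <- Map.findWithDefault [] x g
                     , not (n `Set.member` seen) ]
      in x : go (foldr Set.insert seen next) (queue ++ next)
```

The `queue ++ next` append is the weak spot performance-wise; `Data.Sequence` would fix that, but the shape of the imperative algorithm carries over unchanged.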
[0]: https://dl.acm.org/authorize?N46678 or preprint at https://github.com/snowleopard/alga-paper/releases/download/...
> [...] consider a function that accepts a mutable list, removes the first element from the list, and returns that element. In a purely functional setting, removing an element from the list produces a new and shorter list, but does not update the original one. In order to be useful, therefore, a purely functional version of this function is likely to have to return the new list along with the removed element. In the most general case, a program converted in this way must return the "state" or "store" of the program as an additional result from every function call. Such a program is said to be written in store-passing style.
[0] https://www.cs.cmu.edu/~rwh/students/okasaki.pdf
[1] https://en.wikipedia.org/wiki/Purely_functional_data_structu...
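For instance, the quoted list-popping function comes out like this in Haskell (a tiny instance of the idea; full store-passing style threads the whole program state the same way):

```haskell
-- Instead of mutating the list, return the removed element
-- together with the new, shorter list.
pop :: [a] -> Maybe (a, [a])
pop []     = Nothing
pop (x:xs) = Just (x, xs)
```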
It is a language-based community, but they do have vibrant discussions on learning and theory.
I always dreamt of that when going through my degree and encountering courses that needed a thorough rework.
And if you feel you have to take what you hear with a grain of salt, that's probably good for your learning too.
He took a break from TAing for some time, and when he returned he decided to have some fun with his application. He wrote a transpiler from a C like syntax to SML the day before his interview, and used it to joke that they should transition to teaching functional programming in C.
Regardless I think it's important that students get exposed to more than just Python, which seems to increasingly be the only thing students come out knowing.
My first CS bachelor's semester in Germany in 2017 taught functional programming using Haskell (as well as C and NASM assembly in another course on computer architecture).
OOP using Java and Python was only introduced in the second semester.
> Regardless I think it's important that students get exposed to more than just Python, which seems to increasingly be the only thing students come out knowing.
In my B.Sc. studies I used C, C++, Haskell, Assembly, Java, Python, and Swift.
Also worth mentioning that since this was in Germany not only was it a great education, but there’s no crippling student debt either.
I work in OCaml, which is also a functional language, but one where prints can be added as single lines. I actually address this point in Lecture 19 (Imperative Programming), but my perspective is: we invented immutability and purity to serve us, and we need not be fanatically beholden to them. In my opinion, Haskell goes in that direction, where every usage of IO needs the IO monad to get involved.
A little mutability is OK. Functional programming is about the avoidance of side effects, more than simply forbidding them.
```
-- Before
captureAudioDuration :: DeviceID -> DiffTime -> IO WaveData

-- After
captureAudioDuration' :: MonadIO m => DeviceID -> DiffTime -> m WaveData
```
You can build the same thing, but for logging!

```
class Monad m => MonadLog m where
  log :: String -> m ()

-- In IO, just log to stdout.
-- Other implementations might be a state/writer monad
-- or a library/application-specific monad for business logic.
instance MonadLog IO where
  log msg = putStrLn ("log: " ++ msg)

-- Before: bad -- it does no IO other than logging,
-- but the type says it could do anything.
findShortestPath :: Node -> Node -> Graph -> IO [Node]

-- After: better -- the type signature tells us more about what's happening.
-- We can still use this in an IO context because IO has a MonadLog instance.
-- However, trying to capture audio in this function using either
-- of the functions above will lead to a type error.
findShortestPath' :: MonadLog m => Node -> Node -> Graph -> m [Node]
```
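To show the payoff concretely, here is a self-contained sketch (the `Collect` type and `double` function are hypothetical, not from the thread): the same `MonadLog`-constrained code can run against a pure instance that collects log lines in a list, which is handy for tests.

```haskell
import Prelude hiding (log)

class Monad m => MonadLog m where
  log :: String -> m ()

-- A pure instance: accumulate log lines instead of printing them.
newtype Collect a = Collect { runCollect :: ([String], a) }

instance Functor Collect where
  fmap f (Collect (w, a)) = Collect (w, f a)
instance Applicative Collect where
  pure a = Collect ([], a)
  Collect (w1, f) <*> Collect (w2, a) = Collect (w1 ++ w2, f a)
instance Monad Collect where
  Collect (w, a) >>= k = let Collect (w', b) = k a
                         in Collect (w ++ w', b)
instance MonadLog Collect where
  log msg = Collect ([msg], ())

-- Business logic only asks for logging; it cannot do arbitrary IO.
double :: MonadLog m => Int -> m Int
double n = do
  log ("doubling " ++ show n)
  return (n * 2)
```

The same `double` would also run in IO given an `instance MonadLog IO`; the pure instance lets you assert on the log output directly.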
As you can imagine, this can get quite verbose, and there are other patterns one can use. Feel free to ask any follow-up questions :)

Or do an unsafePerformIO.
Or use trace (where someone else has done the unsafePerformIO for you).
Or use a Writer.
Or introduce some logging capability (Logger m =>) onto your code.
Or take a look at all the man-hours that have been spent on trying to perfect logging: https://hackage.haskell.org/packages/tag/logging
In Haskell, you can use Debug.Trace for just that purpose, when you don't want to change the type of your function.
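For example (a minimal sketch; `fib` is just a stand-in function):

```haskell
-- `trace` prints its first argument when the result is forced,
-- without changing the function's type from Int -> Int.
import Debug.Trace (trace)

fib :: Int -> Int
fib n | n < 2     = n
      | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))
```

Because of lazy evaluation, the order (and presence) of the trace output depends on what actually gets forced, so this is strictly a debugging aid to be removed afterwards.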
You've got the wrong idea. You're supposed to write FP in a way where side effects are highly layered and segregated away from pure code. IO actions are singularities within your chains of pure function composition; as soon as you hit a singularity, you have to break out of it as soon as possible.
The pattern is called "functional core, imperative shell". Think of your IO as the two pieces of bread in a sandwich, and your pure code as the meat that connects the read to the write. The main idea is the meat: keep your bread tiny, and keep it very, very separate from the meat.
The game you're playing with Haskell is to avoid letting the IO monad pollute any of your logic as much as possible.
That being said, in applications where IO is all over the place, this pattern becomes largely ineffective. You basically have more bread than meat.
You would then need a monad to evaluate the things you're attempting to log. And at that point you have a monad, so you can log as usual?
Monads and side-effects aren't intrinsically related. Simplifying, a monad is something with flatMap() - in JavaScript, Array and Promise are monads (kinda). What flatMap() gives you is the ability to chain things, which is useful to sequence side-effects so that they can be performed by a machine in a given order. That's why IO and Eff are monads.
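To make that concrete in Haskell rather than JavaScript: `(>>=)` on lists is exactly flatMap-style chaining, with no side effects in sight.

```haskell
-- (>>=) on lists behaves like Array.prototype.flatMap: each element
-- fans out into a sub-list, and the results are concatenated.
pairs :: [(Int, Int)]
pairs = [1, 2] >>= \x -> [3, 4] >>= \y -> [(x, y)]
-- pairs == [(1,3),(1,4),(2,3),(2,4)]
```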
Or the problem with cleaning your room is that in the real world entropy only ever increases.
Even if LLMs completely remove the need for programming (a rather big IF), this is not time wasted.