What I would like to see is a logical introduction to computer science, or at least theoretical computer science.
Start with combinational logic [1], i.e. with Boolean circuits. They are both conceptually simple and relatively close to physical transistors, unlike any functional/mathematical approach. Then move on to sequential logic [2], which allows the introduction of memory/state, e.g. via flip-flops. From there, more complex circuits and even a primitive GOTO language could be introduced. What I would be interested in is how these circuits relate to the traditional models of computation, i.e. finite state machines, pushdown automata, and Turing machines. Not very cleanly, I suspect.
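A minimal sketch of the two stages (all names here are my own, not from any particular course): gates as pure functions over booleans model combinational logic, and a D flip-flop becomes a state-transition step once you add a notion of "current state".

```python
def and_gate(a: bool, b: bool) -> bool:
    return a and b

def xor_gate(a: bool, b: bool) -> bool:
    return a != b

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Combinational: (sum, carry) depends only on the current inputs."""
    return xor_gate(a, b), and_gate(a, b)

def d_flip_flop(state: bool, d: bool) -> tuple[bool, bool]:
    """Sequential: the output is the *stored* bit, the next state is the
    input d, so behaviour depends on history, not just the inputs."""
    return d, state  # (next_state, output)
```

The half adder is already the first rung on the ladder toward arithmetic; chaining it into full adders gives you a ripple-carry adder.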
Lots of elite CS courses start there. Cambridge did even when I was applying thirty years ago, and Oxford does these days (back then Oxford didn't acknowledge CS as a "real" subject: you were basically a mathematician, and you'd just be studying this oddly practical sub-discipline of mathematics). Both teach an ML-family language today. The place I studied began with an ML then too (today it begins with Java, which I think is inferior, but they get $$$ so...)
My unconsidered guess is that your "begin with Booleans" approach just gets to arithmetic via a long, winding route. Either as it approaches arithmetic, or just before, it accidentally gets infected with Gödel incompleteness, so you are no better off: you have the same problems, maybe with a greater appreciation of why they were unavoidable, except now you're also very tired.
The fact that the abstraction "logical circuits" is much closer to actual computers than any "mathematical" or "functional" abstraction casts doubt on this claim.
Though I'm not sure whether they relate this approach to the traditional FSM/PDA/TM models of computation. There seems to be a disconnect between theoretical computer science (which uses the classical models) and more "practical" computer science, which uses Boolean and sequential logic circuits as its model of computing.
Finite state machines are a small step away from sequential logic and help with managing complexity. Pushdown automata are a small step from finite state machines. I think of Turing machines as an architecture for a computer consisting of a separate CPU and memory, which happens to correspond to the most common architecture used today, but they are not a particularly useful tool for managing complexity because they have no structure. Functions are useful for managing complexity, and function call/return requires some notion of a stack. Once you're there, you can build up the rest of FP. Pure functions correspond to combinational logic; functions with state correspond to sequential logic; and so on.
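A sketch of that correspondence (the names are mine): a combinational block is a pure function, while a sequential block is a Mealy machine, i.e. a pure step function from (state, input) to (next state, output); clocking the circuit is just threading the state through the input stream.

```python
def parity_bit(bits: list[bool]) -> bool:
    # Combinational: pure, no history, output depends only on the inputs.
    return sum(bits) % 2 == 1

def rising_edge(prev: bool, cur: bool) -> tuple[bool, bool]:
    # Sequential: outputs True exactly when the input flips False -> True.
    # The state is just the previous input bit.
    return cur, (not prev) and cur

def run(inputs: list[bool]) -> list[bool]:
    # Threading the state through the stream plays the role of the clock.
    state, outputs = False, []
    for x in inputs:
        state, out = rising_edge(state, x)
        outputs.append(out)
    return outputs
```

Note that even the "stateful" machine is built from a pure step function, which is exactly why FP has no trouble modelling sequential circuits.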
Going in the reverse direction, compilation connects functional programming to hardware.
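A toy illustration of that reverse direction (everything here is invented for the example): "compiling" a tiny language of pure Boolean expressions down to a flat list of gate instructions, i.e. a netlist.

```python
from dataclasses import dataclass

@dataclass
class Gate:
    op: str          # gate type, e.g. "AND" or "XOR"
    ins: list[str]   # input wire names
    out: str         # output wire name

def compile_expr(expr, counter=None) -> tuple[str, list]:
    """expr is ("in", name) or (op, left, right); returns (wire, gates)."""
    if counter is None:
        counter = iter(range(10**6))  # fresh wire-name supply
    if expr[0] == "in":
        return expr[1], []            # an input is already a wire
    op, left, right = expr
    wl, gl = compile_expr(left, counter)
    wr, gr = compile_expr(right, counter)
    out = f"w{next(counter)}"         # allocate a fresh output wire
    return out, gl + gr + [Gate(op, [wl, wr], out)]
```

For example, compiling `("XOR", ("in", "a"), ("in", "b"))` yields a single XOR gate reading wires `a` and `b`. Real compilers obviously do far more, but the shape, a tree of pure expressions flattened to hardware operations, is the connection.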
I could never grok Scala; all the examples and learning material used math idioms, metaphors, and symbols. So I was always translating, which made it very difficult to pick up.
And yes, I fully realize this was my own shortcoming. I should have put eight years into the foundational knowledge. But I wonder whether these math metaphors translate to a more broadly shared experience. In theory they should. Programming is supposed to be a tool for accomplishing goals; it shouldn't force users into its inner world as much as it does. It feels like I need a Formula One crew just to drive a car from Seattle to Kansas.
It's still very much a WIP, but this chapter is complete and should be approachable if you know the basics of Scala: http://www.creativescala.org/creative-scala/cycles/
Unfortunately, I haven't yet managed to integrate Racket into my daily work. The last time I tried, the resulting (manually optimized) compiled code was as slow as an unoptimized Python solution and 10x slower than a manually optimized Java version.
Quite a few libraries are available too: https://github.com/avelino/awesome-racket
https://hn.algolia.com/?dateEnd=1688294284&dateRange=custom&...
https://news.ycombinator.com/item?id=36312603 (315 points | 18 days ago | 99 comments)
https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...