This comment sure indicates to me where you most likely are on the curve.
In all seriousness, I think this is considerably off the mark. After enough experience you realize that expressivity and convenience are antipatterns: they don't actually simplify things, but are harbingers of complexity, bugs, tech debt, even the downfall of organizations and products.
Saying it is all ifs and for-loops is completely true. Everything else, all the abstractions and high level features, are just sugar.
I try to keep a healthy and balanced diet, myself.
> industry is slow to assimilate most state-of-the-art ideas, sometimes by as much as 40 years.
Most of those ideas are terrible. The industry is so incredibly young and has experienced so much change over those 40 years that I have a hard time accepting the notion that the industry is slow to adopt. The reason the old building blocks are still popular is because they are a thin abstraction over how computers work, and ultimately that is at the root of everything we do.
If and for have, quite literally, no deeper foundational significance in the construction of programs or computations than, say, a lambda function. But because the latter is unfamiliar, it's spoken about in the same manner you present: as if it were some highly abstract, complicating, high-level feature (when truly that take is just baloney).
Because the "expressive abstractions" are much easier to reason about and save programmers lots of mental effort. And, as I have commented upthread, ifs and for loops are by no means the only such abstractions.
> because the latter is unfamiliar, it's spoken about in the same manner you present: as if it is some highly abstract, complicating, high-level feature
If expressing your program in the lambda calculus is easy for you to reason about and saves you enough mental effort, go for it. But don't expect your code to be readable or understandable by many other people. The reason why ifs and for loops (and other "expressive abstractions", since as I have said, those are by no means the only ones) are ubiquitous in programs is that they are easy for lots of programmers to reason about. Whereas the lambda calculus is only easy for a very small subset of programmers to reason about.
We now have a chicken-egg problem. I can freely admit that for+if is easy for programmers to understand solely because of how we are educated, and not due to any Bruce Lee hocus pocus about simplicity or fundamentalism, as so many others here suggest.
A programmer who, say, learned from SICP first would find a for loop awkward and bizarre when you could "just" tail-recurse.
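For concreteness, here is the same computation both ways, as a minimal C sketch (the function names are mine, not from the thread). Neither form is more "fundamental" than the other; a compiler that performs tail-call elimination turns the second into the same machine code as the first.

```c
#include <assert.h>

/* Iterative version: the familiar for loop. */
static long sum_loop(long n) {
    long acc = 0;
    for (long i = 1; i <= n; i++)
        acc += i;
    return acc;
}

/* Tail-recursive version: the accumulator takes the place of
 * the loop variable, and the recursive call is the last thing
 * the function does, so no stack needs to grow. */
static long sum_tail(long n, long acc) {
    if (n == 0)
        return acc;
    return sum_tail(n - 1, acc + n);
}
```

Which one reads as "just" the natural way to sum numbers depends almost entirely on which one you learned first.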
But symbol calculus is a highly abstract, complicating, high-level system assembled out of more reality-based systems beneath it. If it seems simple to you, you're just under the curse of knowledge.
And I don't know what a "reality-based system" is.
It’s not like the machine code will look much closer to your C code either. That’s also a spell of “compiler writers and hardware vendors trying to uphold the view that C programmers are so close to hardware and that memory access is flat”.
In languages and libraries that allow FSM- and pure-functional-kernel-based designs, you can get logic that is just as clear, and expressible not only to the programmer but also to business personnel. It's counter-intuitive to an extent, because so much of programming is built around the imperative style, but FSM-based logic is, and will continue to be, easier to understand long term, because you can trivially visualise it graphically. This is ultimately what a lot of the functional paradigm is built around: use the mathematical and graphical representations we've relied on to understand systems for decades. They are well understood, and most people can grasp them with little to no education beyond what they learned in their business or engineering bachelor's degrees.
Most professions have a healthy respect for the base materials they work with, no matter how high the abstractions and structures they build with them go. Artists know their paints, stone, metal, etc. Engineers know their materials as well. They build by taking the advantages of each material into consideration, not assuming that it's no longer relevant to their job because they get to just work in I-beams. Programmers would do well to adopt a healthy respect for their base materials, and it seems like often we don't.
I disagree with your use of "just" here. It's common for programmers to dismiss the importance of syntax, but syntax and notation are the interface and UX between the language semantics and your brain. It's no less important to get this right. There's a half-joke that Europe was able to advance in calculus far beyond Britain due to the superiority of Leibniz notation.
> healthy respect for their base materials
What's unique about computers is the theoretical guarantee that the base does not matter. Whether by lambda calculus, register machines or swarms of soldier crabs running from birds in specially designed enclosures, we're fine as long as we appropriately encode our instructions.
> bunch of branches and loops
You could also easily say it's just a bunch of state machines. We outsource tedious complexity and fine details to compiler abstractions. They track things for us that have analogues in logical deduction, so that as long as we follow their restrictions, we get a few guarantees. When, say, writing asynchronous non-deterministic distributed programs, you'll need all the help you can get.
Even designing close to the machine (which most programs will not need), by paying attention to cache use, memory layout, branch divergence, or using SIMD, remains within the realm of abstractions.
It is a must, because decades of business requirements built on top of each other, with no one understanding the whole, are complex. Writing a JIT compiler that can routinely switch between interpreting code and executing it natively, a database optimizing queries, a mathematical library using some fancy algorithm: these are all complex in a way that is not reducible.
Complexity easily outgrows even the whole of our mathematics: we can't decide arbitrary non-trivial properties of programs (the halting problem, Rice's theorem, etc.).
So all in all, no, we can only respect our "base materials" by finding the proper abstraction for the problem, as our base material is complexity itself. It might be for loops and ifs, but it may very well be a DSL built on top of who knows how many layers, because only at that abstraction level can we even start to map the problem domain to human-consumable ideas.
In my experience, programming with primitives and basic flow-control operations frequently tends to be at least an order of magnitude faster than more complex state-management paradigms. Compilers are very good at optimizing that style of code. Loops often get unrolled, and the branch predictor is kept happy. A good compiler may use vector instructions.
In many cases with cold code it flat out doesn't matter, the CPU is plenty fast, but when it does matter, explicit if-and-for code absolutely mops the floor with the alternatives.
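As a sketch of the kind of code being described (my own example, not from the thread): a dot product written as a plain for loop over flat arrays. With optimization enabled, mainstream compilers typically unroll this and emit SIMD instructions, and the contiguous, branch-free access pattern keeps the cache and branch predictor happy.

```c
#include <assert.h>

/* Plain if-and-for style: flat arrays, one loop, no indirection.
 * This shape is exactly what auto-vectorizers are built to handle. */
static double dot(const double *a, const double *b, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}
```

The same computation routed through iterator objects, callbacks, or boxed values gives the optimizer far less to work with.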
You can optimize one specific query (quite painstakingly, I believe) to beat a general db, but it is very far from clear that “for loops will beat higher level abstractions”, imo.
I have yet to see a secretary who could "return a new table state such that as if documents became a right fold as binding together a map of composition of signing and copy routines over documents" instead of "get these documents from the table, copy, sign and put them back in a binder". This is nonsense some of us want to believe in, but it is not true.
If the project had instead been designed to have less unnecessary state and “transitions” it would have been a lot easier to make changes.
All those ideas sound good by themselves but they are really bad for “defensive” coding. Good luck selling a project to refactor something when it’s going to take multiple people-years. Once you’ve made the mistake of committing to an inflexible design it’s either that, replace/ignore the thing, or deal with low productivity as long as the thing exists.
Imperative code:
Take A and B.
Add C ml of Water.
Stir for 2 minutes.
If it thickened, add D grams of flour, else go back to stirring.
This is easily understood by everyone, degree or no.
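The recipe above transcribes into imperative code almost word for word. A compilable sketch (thickened() is a stub standing in for the physical check; here the mixture "thickens" after three stirs so the loop terminates, and A, B, C, D are taken as given):

```c
#include <assert.h>

static int stirs = 0;

/* Stub for the physical check: pretend it thickens on the third stir. */
static int thickened(void) { return ++stirs >= 3; }

static int follow_recipe(void) {
    /* Take A and B; add C ml of water (physical steps, elided). */
    do {
        /* Stir for 2 minutes. */
    } while (!thickened());
    /* It thickened: add D grams of flour. */
    return stirs; /* report how many times we stirred */
}
```

The control flow of the program is the control flow of the instructions as spoken.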
Once someone figures out loops and the difference between statement and expression, they can essentially understand imperative code.

Imperative code quickly devolves to a point where it is no longer understandable by anyone but the developers who wrote it.
For problems where those are the right tools, sure. But they aren't the right tools for all problems any more than ifs and for loops are.
You could just as well say that ifs and for loops are just sugar for gotos and all programming is just gotos.
The reason ifs and for loops are used instead of gotos is that they are very useful abstractions that are easy to reason about and save the programmer lots of mental effort. But they are not the only such abstractions.
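To make the "sugar for gotos" point concrete, here is the same count written with the for abstraction and with the gotos it desugars to (a hypothetical example of mine; both typically compile to the same conditional branches):

```c
#include <assert.h>

/* The structured version. */
static int count_for(int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total++;
    return total;
}

/* The same loop, hand-desugared into the gotos underneath.
 * Equivalent, but the reader must reconstruct the loop shape
 * from the jump targets instead of seeing it at a glance. */
static int count_goto(int n) {
    int i = 0, total = 0;
loop:
    if (!(i < n))
        goto done;
    total++;
    i++;
    goto loop;
done:
    return total;
}
```

The abstraction buys nothing in expressive power, only in ease of reasoning, which is precisely the argument for abstractions beyond if and for as well.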
To the extent that other abstractions can create problems, it's not because they're really just sugar for ifs and for loops, it's because they are not well crafted abstractions so they are not easy to reason about and don't really save the programmer any mental effort. But there are plenty of abstractions other than ifs and for loops that are well crafted and do save the programmer mental effort, in many cases lots of it.
Suggesting that experience leads to jettisoning expressivity is at odds with my direct observations of experienced software engineers working in large teams. The more experience, the _better_ the engineer gets at picking the right level of abstraction to write code that can be maintained by others. Picking a single point on the abstraction spectrum (just above goto but not below it!) is far too rigid for the diversity of tasks that software engineers need to solve.