It is useful to express a particular problem domain in a way that is efficient and makes sense.
I would be more interested in a discussion about concurrency, because it stays relevant across the programming-language or paradigm barrier. At the end of the day we are confined by how the hardware operates, and concurrency is very much a modern way of thinking for developers.
FSM/event driven is pretty easy even in rickety old languages like 'C'. C++ is a bit nicer because there is dispatch demuxing built in. I suppose the same is true of Java.
I guess webby stuff doesn't lead to the appeal of FSM but for everything else, it's generally a better path. FPGA guys use a lot of FSM and they simply don't have the same defect rates as software guys.
I've used toolsets like ObjecTime/Rose where "everything is a state machine" and It Just Works. I'm not sure why this approach is still obscure - the various works in Verification ( with a capital V ) seem to point to FSM as a means of reducing complexity.
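The table-driven FSM the comments above describe can be sketched in a few lines; Python here for brevity, but the pattern is the same in C or C++. The traffic-light states and events are purely illustrative, not from any of the tools mentioned.

```python
# Minimal event-driven finite state machine: a transition table maps
# (state, event) pairs to the next state. Hypothetical traffic-light example.
TRANSITIONS = {
    ("red", "timer"): "green",
    ("green", "timer"): "yellow",
    ("yellow", "timer"): "red",
}

def step(state, event):
    # Unknown events leave the state unchanged (a common FSM convention).
    return TRANSITIONS.get((state, event), state)

state = "red"
for event in ["timer", "timer", "push_button", "timer"]:
    state = step(state, event)
print(state)  # -> red
```

Because the whole behavior lives in one table, it is easy to enumerate and verify every transition, which is part of why FSMs show up so often in Verification work.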
C++14 and coroutines have replaced state machines in my world. They read like normal code and are easy to customize to your needs.
Had to convert a large PLC system from Beckhoff structured text to C++. Coroutines allowed me to model it closely enough that a little Python translator made it work.
Set up actors with queues that time out on an interval and pump the coroutines (tasklets).
Create functions that provide blocking on whatever you like: events, messages, or time.
Better, IMO, than code shattered into bubbles in a tool or callbacks everywhere.
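The tasklet idea above can be sketched with Python generators standing in for C++ coroutines: each tasklet suspends at a yield when it "blocks", and a trivial pump resumes it with the next event. All names here are illustrative, not from the PLC system described.

```python
# Sketch of a coroutine "tasklet": it yields to block, and a pump
# sends events into it. The log records what each tasklet handled.
def blinker(name, log):
    while True:
        msg = yield          # suspend ("block") until the pump sends an event
        log.append((name, msg))

log = []
task = blinker("led1", log)
next(task)                   # prime the coroutine to its first yield
for msg in ["on", "off", "on"]:
    task.send(msg)           # the "pump": resume the tasklet with an event
print(log)  # -> [('led1', 'on'), ('led1', 'off'), ('led1', 'on')]
```

The control flow of the task reads top-to-bottom like normal code, which is the advantage claimed over callbacks or bubbles in a tool.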
Why assume that "all are useful"?
I'd say that most of them were developed ad hoc, and whether they are useful or not is subject to actual research on their effectiveness in the field, with controlled studies, etc.
Usually PL researchers go the other way around: they toy with things at the language/compiler/syntax/semantics level that they think are cool/powerful/expressive, etc., but rarely bother with actual justification based on those who will actually use the PL, that is, programmers.
Which makes the whole field cargo cultish and a pop culture, as Alan Kay notes.
Even if you learn something 'useless' (either to the current problem or in general), it allows you to discard certain solutions, and know why they are not a good approach.
I will say that it's virtually impossible to be purely an OOP programmer without using ideas that originated with imperative or functional languages. It's also pretty difficult to be a really good OOP developer without writing at least some code that looks very functional. There's a lot of ideas and techniques that help you as a developer no matter what paradigm you choose.
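As an illustration of that overlap (a hypothetical sketch, not from any particular codebase): a perfectly ordinary OO class whose query methods are written in a functional style, as pure expressions over data with no mutation of the object.

```python
# Hypothetical OO class whose query method is functional in style:
# a filter + reduction over the data, with no mutation of self.
class Orders:
    def __init__(self, amounts):
        self._amounts = list(amounts)

    def total_over(self, threshold):
        # Pure expression: same inputs always give the same output.
        return sum(a for a in self._amounts if a > threshold)

orders = Orders([5, 20, 35])
print(orders.total_over(10))  # -> 55
```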
Because it's about the knowledge, and about doing what you do better, not merely about how much you earn.
"Give him threepence, since he must always make gain out of what he learns" -- Euclid said to his servant when a student asked what he would get out of studying geometry.
There's a difference between learning about it and using it at work. Your job may not have any good opportunities to apply FP, but you should be learning about it in your spare time given how many great use cases it has in today's world.
Down the road, as other developers pick up different paradigms and can potentially implement the same features you are writing, with a fraction of the bugs and in a fraction of the time, why would companies keep paying you six figures?
These features are useful whether the code you're writing is pure, stateful, object-oriented, functional, and/or whatever paradigm you write up tomorrow. Forget aesthetic arguments about elegance; where else can you build so much without ad-hoc concessions to individual language features? As far as I know, there's no other method that lets you treat powerful language features as libraries without either (a) making a mish-mash when they're used together, like compiler plugins do, or (b) making debugging your compile process a full-time job, like various strongly-typed macro systems tend to.
Why not use FP to form the fundamentals of our languages, something where as of yet it has no equal, and build up the structures we need (regardless of programming tradition of origin) on top of that?
My point isn't that these things are easy to learn because their presentations are so brief; personally, dependent typing took me a long time to wrap my head around. What I think the briefness of the operational semantics, in combination with the examples of things people have built within them, suggests is that they're very versatile. Even if only people working on language-level features bother to learn them, the utility of their creations (all without needing to move to another language or introduce possibly incompatible extensions) is a huge asset.
It looks like you're criticizing the GP's praise of Idris' concise formal definition of its unsugared semantics. I wouldn't use such a formal definition for educating novices (like I think you're implying in your next sentence); I'd use it professionally to show that many desirable things (effect tracking, compile-time checking, and design-by-contract) derive from the same thing. Ken Iverson, in his notes on mathematical notation, says that one criterion for a good notation is "suggestivity": the notation should suggest that other problems, similar to those you just solved, could be solved as well.
Whether formality is useful for novices, it is certainly useful for experts.
No experienced developer I want to work with has the kind of dogmatic views that most of the people in this chat log have. FP and OOP are both just tools, how you use them is way more important than which tool you choose.
> FP and OOP are both just tools, how you use them is way more important than which tool you choose.
I agree with this. I would note, though, that experienced developers don't always consider them equivalent tools; for instance, Carmack suggests defaulting to FP when possible (from here: http://www.gamasutra.com/view/news/169296/Indepth_Functional...): "No matter what language you work in, programming in a functional style provides benefits. You should do it whenever it is convenient, and you should think hard about the decision when it isn't convenient."
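Carmack's advice in concrete form (an illustrative sketch, not his code): a function over explicit inputs is trivially testable, while the equivalent logic routed through shared state needs setup and is order-dependent.

```python
# Stateful version: depends on hidden global state, order-sensitive to test.
_total = 0
def add_stateful(x):
    global _total
    _total += x
    return _total

# Functional version of the same logic: explicit inputs and outputs only.
def add_pure(total, x):
    return total + x

# The pure version composes and tests with no setup at all.
assert add_pure(add_pure(0, 2), 3) == 5
```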
Drop the dogma. Stuff like "OOP is terrible" or "FP is terrible" or "JavaScript is terrible", etc. is not only wrong but destructive, mostly to you. Nobody else cares that you hold this view; it just keeps you ignorant about that thing and holds back your skill set and career. If you take the time to actually learn how to use the thing in question properly, what its good parts are, and how good things not only can be made with it but are being made with it, you will be better for it. Even if at the end of that you still decide you don't want to use it yourself (which is totally valid; we all have our preferences), you will be better off. You will take the view not that "X is terrible" but that "X is not my preferred way of doing Y."
Second, don't waste your time bullshitting with a bunch of people who don't know anything about anything and just want to be "right" all the time. Where "right" here is actually "agree with the trendy position", which right now is "FP is good, OOP is bad." Talk to professional developers who actually do the work you want to do, and learn from them what it is they do and how they do it. To be sure, some people who hold those silly views do work as developers, but I would argue they're not professional. A professional sees tools as tools and not as a substitute for a personality.
Bullshitting is a great thing to do to blow off steam, we can't code 24x7, but there's constructive chat and then there's a useless echo chamber full of nonsense (which IRC is really good at forming.)
Anyway, hope that helps. I used to talk like the stuff I see in this chat when I was younger; then I realized what I just said about those opinions only being harmful to myself, and now I try to be more open-minded.
In mission critical portions of systems people will use a functional style and try to isolate state, while most things in an OO style will be just fine.
I also believe the crazy complexity of OO languages like Java is slowly being reined in, with other languages like Go explicitly making trade-offs towards simplicity.
For example, if one has to copy and paste because a language is too simple to provide a needed abstraction, then the code itself becomes needlessly complicated through duplication.
Creating an object with runtime polymorphism in C++ is much simpler.
What makes you think that programmers who are used to writing in FP style for mission-critical components would step away from FP towards OOP for less important code? In my experience, functional code is often more succinct and easier to write. I don't think it would benefit anyone to write the majority of an application in the more verbose style.
>languages like Go explicitly making trade offs towards simplicity
I think Go is a counterargument to your point here. Its simplicity makes it much less suited for multi-paradigm programming than a complex language like C++ or Scala.
Java itself is a pretty simple language, and the complexity of programs written in Java is partially due to the spareness of the language.
But anyway, the creators of Go were right to fear generics, because generics are hard. However, the more time passes, the harder they will be to add. And the truth is that generics in Go are inevitable, and when they arrive they'll be half-baked.
And remember that Java has striking similarities. Java is also a very opinionated and anti-intellectual language.
Michael Fogus, author of “Functional JavaScript”, suggests in his blog post “FP vs OO, from the trenches” (http://blog.fogus.me/2013/07/22/fp-vs-oo-from-the-trenches/) that when he deals with data about people, FP works well, but when he tries to simulate people, OOP works well.
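Fogus's distinction can be made concrete with a hypothetical sketch: data *about* people suits a pipeline of pure transformations over plain records, while *simulating* people suits objects with identity and evolving state.

```python
# Data about people: FP style, pure transformations over plain records.
people = [{"name": "Ada", "age": 36}, {"name": "Alan", "age": 41}]
adults = [p["name"] for p in people if p["age"] >= 18]
print(adults)  # -> ['Ada', 'Alan']

# Simulating people: OO style, an identity whose state changes over time.
class Person:
    def __init__(self, name, age):
        self.name, self.age = name, age

    def birthday(self):
        self.age += 1  # in-place state change; the object keeps its identity

alan = Person("Alan", 41)
alan.birthday()
print(alan.age)  # -> 42
```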
I wonder about the shape of the data as it moves from domain -> range. How would they map to machine and biological models for learning? My first guess is that ML maps well onto FP; for CNNs, the data progresses synchronously from one layer to the next. For bio, state is asynchronous; neurons, after firing, have a refractory period. Maybe super-fine-grain objects? Petri nets are interesting in that they can feed state forward asynchronously.
I personally would recommend #haskell-beginners on freenode IRC as well as http://haskellbook.com for the ones interested.
For everybody else, well, I'm not a salesperson and my job isn't to convince you. If it takes you 30 years to (re)discover Haskell, so be it.
I also think that's a specious argument. Many things influence a language's popularity that have nothing to do with the language's actual value. Haskell is very idiosyncratic compared to other mainstream languages, requiring a "relearning" of a lot of skills developers take for granted. Many people balk at the learning curve and never get into it. Haskell has also received comparatively few contributions from industry and was never the pet language of a major organization (like e.g. C#, Java, Go, Rust, Dart, etc). Also, the influence of Haskell can be seen in a variety of areas, from other languages adopting more advanced type systems and type classes, to systems like LINQ in C#, to the increasing popularity of functional programming in general. Finally, although popularity drives a language's ecosystem and therefore its usability, Haskell's merits purely as a language stand on their own regardless of how many people use it.
The modern fast data structures packages date from 2007 to 2011 (containers, vector, text, and unordered-containers).
Recently there have been more breakthroughs like FB adoption and an unrivaled web API library in Servant (https://haskell-servant.readthedocs.io/).
Also we got Stackage and Stack recently (which ended "cabal hell"). Even "cabal hell" is gone with the new cabal-install improvements.
So don't count Haskell out too easily because it's old. It's an awesome language, but it took people a long time to figure out how to program well in it, given that it's so different from the normal way of doing things.
Seriously though I think that people are becoming interested in FP because they see the limitations of their current tools. That's what happened for me anyway.
Also I don't know that Haskell will become mainstream, but something that looks way more like Haskell than Java will.
To name a few:
1. Poor debugging tools. Unfortunately this is intrinsically tied to non-strict evaluation: typical evaluation-stepping debuggers would be sort of unpredictable in Haskell. Along these lines, Haskell doesn't have stack traces enabled by default, and reading them is sort of tricky.
2. Clumsy exceptions model. There are asynchronous and synchronous exceptions, with the latter being further broken down into exceptions in pure code and exceptions thrown in IO. Exceptions can only be caught in IO. You can think of exceptions in pure code as similar to "panic()" calls in other languages, except they're even trickier due to lazy evaluation, and aren't necessarily guaranteed to be triggered, thanks to the slipperiness of laziness. Also, since exceptions are used for interrupting computations (timeouts, for instance) and there isn't a way (other than some type class conventions) to distinguish interrupting exceptions that should be allowed to propagate from exceptions due to genuinely exceptional circumstances, catching all exceptions safely is a tricky matter.
3. Record syntax leaves a lot to be desired. It's incredibly easy to run into situations where you would have conflicting functions due to record syntax. Lenses are a partial fix for this, but it's sort of annoying that something like row polymorphism isn't just a part of the language.
4. Laziness can make reasoning about time and space complexity trickier. Generally this is a little overstated, but you'll occasionally run into space leaks.
There are definitely real problems with the language, and some of them (such as poor debugging capabilities) can be legitimate showstoppers to use in industry, but generally I find that its advantages far outweigh its negatives.
Spoken as someone who prefers FP but is not a zealot.
This approach is equivalent to a family of functions (with side effects), one family per class, each with N(c) + N(m) arguments, where N(c) is the number of constructor arguments and N(m) is the number of method arguments. Upon object construction, these families are partially applied to produce a new family of functions that take only the N(m) arguments. You can think of a constructor as a function that returns a number of partially applied functions: the methods on the class.
This is how I actually think of my OO programs these days. I also make liberal use of the actual functional tools made available to me. For instance, my bread and butter is C#, and LINQ heavily promotes a purely functional style when manipulating collections of data, though side effects are possible.
At a system level, I also tend to think of my data pipelines as functional transformations as well - their being written as an OO program is of little actual consequence.
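The objects-as-partial-application equivalence described above can be sketched directly (illustrative names, Python rather than C#): a "method" is a free function taking the N(c) constructor arguments first, and "construction" partially applies them, leaving closures that take only the N(m) method arguments.

```python
from functools import partial

# A "method" as a free function: constructor args first, then method args.
def scaled_add(scale, offset, x):    # N(c) = 2, N(m) = 1
    return scale * x + offset

def make_scaler(scale, offset):
    # "Construction": partially apply the N(c) arguments, returning the
    # family of N(m)-argument functions that play the role of methods.
    return {"apply": partial(scaled_add, scale, offset)}

obj = make_scaler(3, 1)              # like calling a constructor
print(obj["apply"](4))  # -> 13
```

Each "object" is just the closed-over constructor arguments plus its method closures, which is exactly the mental model described in the comment.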