>Objects just aren't supposed to be reaching into each other with 'getters' and 'setters' and messing with information.
getX() and setX() are an anti-pattern of good OOP.
>Instead of using objects for compartmentalizing functionality, they were just used as a holding pen for loosely related functions.
No, good OOP has class/object with both private state and associated functions combined to provide a public interface for clients to use.
(Although Java & C# (unlike C++) may have muddled this aspect because their language specification does not allow "free" functions to live in a plain namespace. Therefore, programmers are forced to use the "class{}" keyword with static functions as a workaround. Maybe that's where your "holding pen" characterization came from?)
>A traditional OOP approach uses shadowy architecture to remove the responsibility of an object,
No, good OOP tries to push responsibility into classes/objects so they can act as independent agents with minimal coupling to the outer context they sit in.
I'm guessing your comments are based on seeing a lot of poor practices masquerading as OOP which affects what you think OOP actually is. Unfortunately, it's like trying to argue against Javascript because of overuse of "eval()" or arguing the flaws of Haskell because variable names are often 1 character long.
There are definitely problems with OOP but they are not the bullet points you mentioned.
I personally feel that the current OOP model eventually devolves into the bullet points you listed given enough complexity.
One reason why I think this occurs is that there is no concept of a 'message' in OOP, only methods or functions of a class. There's explicit coupling even when calling a method with no arguments because you know the method is valid for that object.
Contrast this with the example of the 'listener' in the room. A speaker broadcasts his message but the independent agents ultimately decide what to do with that message.
The OOP approach calls "object->listen" on each listener. My approach simply broadcasts the message and lets the objects determine how to handle it themselves.
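The broadcast idea can be sketched in a few lines; this is a hypothetical illustration (MessageBus, Outfielder, and the message names are mine, not from the original comment):

```ruby
# Sketch of the broadcast idea: the speaker never calls listener.listen
# directly; it hands a message to a bus, and each subscriber decides
# for itself whether and how to react.
class MessageBus
  def initialize
    @subscribers = []
  end

  def subscribe(subscriber)
    @subscribers << subscriber
  end

  # Broadcast: the bus has no knowledge of what (if anything)
  # each subscriber will do with the message.
  def broadcast(message)
    @subscribers.each { |s| s.handle(message) }
  end
end

class Outfielder
  attr_reader :reacted

  def initialize
    @reacted = false
  end

  # The listener decides: it reacts only to messages it cares about.
  def handle(message)
    @reacted = true if message == :runner_heading_to_first
  end
end
```

Usage would be `bus.subscribe(fielder)` followed by `bus.broadcast(:runner_heading_to_first)`; a fielder that doesn't care about a given message simply ignores it.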
This is specific to early-binding languages like C++, Java, etc. Look at Smalltalk or Ruby, and this is not true in the general case. E.g. Ruby ORMs tend to dynamically create classes and methods at runtime based on analyzing the schema of the database you connect to. You literally won't know whether a given method exists until you've connected to the database. Even then there are no guarantees: depending on the framework, a call may hit "method_missing" the first time (or every time), which may then optionally define a real method (it may also continue handling it via method_missing, but defining a method can be much faster, depending on the situation).
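The Ruby behaviour described here can be sketched roughly like this (Record and the column names are hypothetical; real ORMs are far more elaborate):

```ruby
# Late binding in miniature: whether "title" exists as a method isn't
# knowable until the message arrives. The object answers the first call
# via method_missing, then defines a real method so later calls are fast.
class Record
  def initialize(row)
    @row = row  # e.g. a hash built from a database schema at runtime
  end

  def method_missing(name, *args)
    if @row.key?(name)
      # Define a real method on first use, so subsequent calls
      # skip method_missing entirely (the "much faster" path).
      self.class.define_method(name) { @row[name] }
      @row[name]
    else
      super  # unknown message: raise NoMethodError as usual
    end
  end

  def respond_to_missing?(name, include_private = false)
    @row.key?(name) || super
  end
end
```

With this, `Record.new(title: "OOP").title` works even though no `title` method existed when the code was parsed.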
> My approach simply broadcasts the message and lets the objects determine how to handle it themselves.
So OOP the way Alan Kay describes it, in other words.
If you mean there is no formal/explicit OOP syntax in C++/C#/Java etc for a "message bus" or "queue" for a decoupled publish/subscribe type thing, you're right. Yes, Golang/Erlang have that concept a little more "baked" into the language.
But that's still orthogonal to the supposed OOP flaws you brought up.
The C++/C#/Java approach to the missing "message" functionality would be either to create a class with a managed buffer to "hold messages" for other classes to write or read from... or use a library that interfaces with an external messaging bus.
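The "managed buffer" variant might look roughly like this in Ruby, using the standard library's thread-safe Queue (the Mailbox class itself is a hypothetical sketch):

```ruby
# Sketch of the "class with a managed buffer" approach: a Mailbox object
# that other objects write to and read from, instead of calling each
# other's methods directly.
class Mailbox
  def initialize
    @queue = Queue.new  # thread-safe FIFO, built into Ruby
  end

  def post(message)
    @queue << message
  end

  def take
    @queue.pop  # blocks until a message is available
  end

  def empty?
    @queue.empty?
  end
end
```

Producers call `post`, consumers call `take`; neither side needs a reference to the other, only to the mailbox.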
If you actually look at the framework they are presenting, it pushes the developer towards many of the "good OOP" points you made. So I'm not sure exactly what you're arguing here.
No, my comments don't agree with that.
In fact, I tried to point out that his "traditional OOP" examples are incorrect OOP and therefore, a straw man to be arguing against.
This is correct and it's my mistake for not making this more clear.
A traditional OOP approach would have much of the functionality taken out of the player objects, using them simply to hold state.
Sounds like C structures. Back in the day, we were always urging and cajoling programmers to stop thinking this way and think more in terms of Objects that knew how to do things in response to messages.
The main reason controllers exist isn't (or shouldn't be) because of a failure of OOP as a concept, but because of efficiency or complexity. Going back to the baseball example, while it might make intuitive sense for the player to send a "running to first" message, simulating a team's reaction to that event by having player objects communicate purely by messages is horribly complex, whereas a fairly decent simulation can be quickly created by "cheating" with a team controller object (or game controller or whatever).
Even in a "pure" implementation, objects will want to examine each other ("how fast can that player run? Where is she right now? Now? Now? Where's the ball? Who has it? How good an arm do they have?")
If anything, the baseball metaphor shows how complex things can get, and quickly reveals cases where almost any programmer would choose to violate encapsulation to make things work.
The internal vs. external separation can often greatly simplify things. A player does not care where another player's feet are; they care where the best place to toss the ball is.
PS: In an actual game there is a lot of communication going on. You don't want to throw a ball at someone looking in the wrong direction. So, at some level you're stuck dealing with innate complexity.
I always felt that people who had never used C before C++ or Java, nor any other procedural language aside from some scripting, tended to write some of the messiest OO. It seemed they'd want to create a sea of objects and patterns even when something simple was staring them in the face.
OO is harder to teach, and clearly harder to understand. That's the top and bottom of most issues with it.
That is a pretty damning judgement of OOP. It seems to me the whole point of a paradigm should be to improve understandability. If, in practice, it makes it worse, then it is a failed paradigm.
That messages and (at least single dispatch) methods are identical was settled decades ago.
(Has someone worked out a multiple dispatch OOP based on message passing?)
E.g. if I do "foo.bar()" in C++, the compiler will simply not compile the code if "bar" is not already defined on the class "foo" is declared as. The object at runtime has no influence on the dispatch.
In Ruby, on the other hand (as with Smalltalk and other languages with late binding and message passing), whether "bar" is defined when the code is parsed is irrelevant, regardless of implementation details. In the general case you cannot know before reaching the call site whether "bar" is defined, and even if it is not, it is not up to the interpreter to throw an error if a method_missing is defined; nor can you determine in advance whether a method_missing will be defined by the time you reach the call site. The object can decide to process different messages depending on the time of day if it pleases.
See also the Alan Kay quote elsewhere in this thread - especially the part about late binding.
You can achieve this with e.g. vtable based dispatch (fill in "missing" slots with thunks that loads a symbol and calls method_missing, and fill in the "method_missing" slot with a generic one that raises an exception), so this can still lead to similar implementations. It's a different way of looking at things, coupled with the dynamism of very late binding.
This has been modeled out from message passing and procedural code semi-formally, with the first discussion (that named the duality) being available here: https://cseweb.ucsd.edu/classes/wi08/cse221/papers/lauer78.p...
And it's fine for multiple dispatch as well, btw. Higher kinded types reward everyone equally here.
> No, good OOP has class/object with both private state
If a class has state, you have to be able to read that state or it's useless.
It's a good practice overall to tell classes what to do but eventually, you need to get information from them. That's where getters come into play.
No, having a bunch of getABC(), getXYZ(), getEtc() is a code smell.
If the class has many getters()/setters(), or the equivalent of many public data members, it means that related actions requiring those variables are happening outside the class/object instead of inside it. The more getters()/setters() made available, the more the programmer is treating the class as a "dumb struct{}" with exposed members instead of a "smart agent" with knobs & levers directing a hidden machine. The "knobs & levers" should be higher-level public methods that are not gets()+sets().
For example, let's say we have a TextBuffer object:
With get()/set() mentality:
TextBuffer.setLinecount(0); // reset counter to 0
n = 0;
for (/* each line */) {
    TextBuffer.getNextLine();
    n = n + 1;
}
TextBuffer.setLinecount(n);
With a public method to make the object smarter about itself: TextBuffer.CountLines();
The method CountLines() replaces the gets()/sets() and makes the object more of a "black box"; the "for(){}" loop moves inside the object. Making objects "smarter about themselves" is a hallmark of good OOP. Fewer exposed getters() means less coupling and less leakage of a class's internals to the outside world. Refactoring away gets()/sets() means unifying those outside actions on the object's internals into a higher-level method interface.
I'm not saying all gets() can be eliminated. But the Java practice of having 20 private member variables mirrored by 20 public gets()/sets() is not what OOP is about. It's actually about as opposite from OOP as you can get!
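The TextBuffer idea reads naturally in Ruby; here is a sketch of the "smart agent" style, with a count_lines method replacing the getter/setter choreography (class and method names are illustrative, not from the original post):

```ruby
# The getter/setter style forces the caller to orchestrate the object's
# internals; the "smart agent" style moves that work inside the object.
class TextBuffer
  def initialize(text)
    @lines = text.split("\n")  # internal representation, never exposed
  end

  # One high-level method instead of getNextLine()/setLinecount():
  # the counting logic lives inside the object.
  def count_lines
    @lines.length
  end

  # Another "knob": act on internals without exposing them.
  def contains?(word)
    @lines.any? { |line| line.include?(word) }
  end
end
```

Callers ask the object questions (`buf.count_lines`, `buf.contains?("beta")`) rather than pulling its state out and looping over it themselves.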
This needs to be tattooed in reverse on many developers' foreheads so it's the first thing they read when brushing their teeth in the morning.
So much of "Good OO" seems to be prescribing people's mental models and not their actual practices. Seems somewhat odd to me, since most of the models are indistinguishable from one another in practice.
I don't believe there's evidence of this. Interface scoping (e.g. "private/protected/public") is a relatively recent invention to protect against programmers who "don't know what they are doing". Adequate documentation is a superior tactic, and it's already a basic best practice.
Every programming paradigm and every programming language suffers the distinction of what is "proper" and "improper".
There's proper functional programming and improper functional programming. For example, it's possible to write bad code passing GodUniverseState records to "functions", and the Haskell/OCaml compiler will accept it without error. However, that's not "proper" functional programming. It's just messy global state masquerading as a functional program.
There's proper SQL usage and improper usage. For example, writing a stored procedure to loop through one table with a cursor, and then looking up a column value in another target table is "improper SQL". SQL has built in "joins" to do it correctly (and also performs faster as a bonus).
Even Python has a category of "not Pythonic".
No it isn't; it provides encapsulation, which is the most important aspect of OOP.
I agree with the rest.
The extensive use of getters()/setters() is a common misunderstanding of "encapsulation". It's an example of following the "letter of the law but not the spirit of the law".
Let's separate the idea of "encapsulation" into 2 categories:
(1) syntax encapsulation: private int x; public getX(); public SetX(); // letter of the law
(2) conceptual/semantic encapsulation: Object.DoSomething() that works on x internally // spirit of the law
It's the idea presented in (2) that follows the ideals of OOP design and helps reduce cognitive load. The common coding practice of (1) doesn't really provide "encapsulation" that helps make large scale programs more understandable. I also think (1) was exacerbated by codegen tools such as ORMs and GUI data binding frameworks. Therefore, inexperienced programmers thought that having a ton of gets()/sets() in their own manually handcrafted classes was "correct OOP".
An example of where this can all go wrong is if you have a TCP object that opens a connection to a remote server and then lets you read and write the socket. Someone decides that they need to find out what IP address the object used when it connected and create a get_ipaddr() method that returns a s_addr type. But then the TCP object is updated to support IPv6 and now the author needs to figure out how to return a V6 IP address to external methods that only expect V4.
The proper way to implement it might have been to move whatever logic was examining the IP address into the TCP object itself. Of course this is how you end up with horrendously complex objects with hundreds of methods all used exactly once somewhere in the code or exactly once in the code of some other project.
Teams that work well act almost like they're mind-reading one another. High quality orchestration of disparate parts is in tension with encapsulation.
Actor-based programming is highly concurrent but in my experience it's harder to reason about as it scales up. Emergent behaviour of interactions between hidden states is sometimes the goal, and sometimes an unwanted side-effect. Network protocols are tricky to get right for a reason; splitting everything out into a shared-nothing message-passing architecture isn't a panacea.
I lean more towards explicit but immutable data structures, and referentially transparent functions over those data structures. In this world, parallelism can increase performance without observable concurrency. Concurrency is the root of non-determinism; non-determinism should be avoided unless it's inherent to the problem at hand.
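A minimal Ruby sketch of that style, assuming hypothetical Point/translate names: frozen (immutable) data plus a referentially transparent function over it.

```ruby
# Immutable data: a frozen Struct instance cannot be mutated.
Point = Struct.new(:x, :y)

def make_point(x, y)
  Point.new(x, y).freeze  # mutation attempts raise FrozenError
end

# Pure function: returns a new value, never modifies its argument,
# so the same inputs always yield the same output.
def translate(p, dx, dy)
  make_point(p.x + dx, p.y + dy)
end
```

Because nothing is mutated, `translate` calls can run in parallel over the same data without observable concurrency effects.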
OOP is a middle ground in this world, but it's ill-defined. Depending on style, it can be stateful and imperative, functional or message-oriented. OOP is not an absolute bad; with effort, it can be massaged, or herded, into working patterns. But it's certainly not a universal optimum.
I don't agree with that characterization. The author certainly has a point that, in some languages in particular, like java, there tend to be massive over-decomposition of problems into reams of factories, controllers, controller factories and controller factory manager factories, but that isn't OOP, that's due to cultural and syntactic issues with those languages. (I know, I know, No True Scotsman.)
In the Rails world, which is a non-trivial component of the broader OOP software world, there is a saying: "Fat Model, Skinny Controller" which is much more in the spirit of what the author is advocating, despite remaining OOP.
Again, this isn't to deny the authors general point, but I don't believe it is bound to OOP, so much as it is to a certain style of OOP coding that arose from early (java in particular) over-engineering and excessive decomposition.
-- Alan Kay, OOPSLA '97
http://erlang.org/pipermail/erlang-questions/2009-November/0...
Joe likes to be funny, so don't get upset and confrontational about it.
The central idea is this I think:
---
I now believe the following to be central to the notion of OO.
- Isolated concurrent things
- Communication through message passing
- Polymorphism
All the other stuff (inheritance, private/public methods, ....) has
nothing to do with OO.
---
[1] http://harmful.cat-v.org/software/OO_programming/why_oo_suck...
Besides, taking a label away from something you don't like is a good strategy in general. He obviously doesn't think Erlang should do multiple inheritance. He just points out that the languages that have been calling themselves OO all these years have been impostors.
Also, not sure if I mentioned it already, but Joe likes to make witty jokes. So don't take it too seriously.
The commonly held definition of OO has drifted far from that ideal over the years.
This has nothing whatsoever to do with C++. It's like blaming a microwave oven because your kid put a ball of tin foil in it.
C++ is a great language when you are developing big compiled programs and need strong metaphors for decoupling and modularity. Most developers today work on distributed systems where the individual cooperating pieces that they write are much smaller. Your 1000 line HTTP handler in python won't benefit much from strong static type checking, but the linux kernel does, and so do a lot of the infrastructure components we all take for granted every day.
Alan Kay (progenitor of Smalltalk and OOP) has said on various occasions that it should have been called message-oriented programming, rather than object-oriented.
"I'm sorry that I long ago coined the term 'objects' for this topic because it gets many people to focus on the lesser idea. The big idea is 'messaging'"
http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...
As a Lua aficionado I hate to see stuff like this:
Ball = {}
Ball.__index = Ball
function Ball.new (x, y)
local table = {}
setmetatable(table, Ball)
table.x = x
table.y = y
return table
end
Explicit setmetatable() call and manual __index setting? You can automate this and hide all the metatable magic: less code to write, less potential for bugs. E.g. in my own Lua object system the above would be:
Ball = Class()
function Ball:Properties(x, y)
return { x = x, y = y }
end

But that's clearly not the case, and so people have these radically divergent systems of programming within the framework of "object orientation".
My read of this article is that it is very message-passing OO. Its broadcast mechanism is interesting, for sure. But it reads a lot like Erlang's supervision tree without all that troublesome thinking about a network.
But its approach is still very "OO".
I use an additional message parameter, automatically inserted and maintained by the framework, which is a list of the messages (FIPA style operation + parameters) from the originating agent forwards to the point of debugging. This gets voluminous and is gated by debug levels.
The same holds true if you actually pass async messages on a bus - nothing stops you from attaching call details to the message. In fact, we have one very prominent async messaging system that does exactly that: E-mail (via "Received:" headers). (And yes, I've used e-mail as a message bus for applications before - Qmail worked great for that)
"Causeway, an open source distributed debugger written in E, lets you browse the causal graph of events in a distributed computation."
Message passing OOP doesn't necessarily imply that type of message passing. That is to say, the "send" call doesn't have to return until the target object has processed the message.
I think that obscures the better parts of the OOP view quite a bit.
In the pure Actor Model, adding two numbers would probably involve 2+ actors, yet Erlang doesn't do this, for some reason... I guess at a lower level, hardcoding messages makes complete practical sense.
In fact, all of this seems like bullshit to me. The actual code inside of the repo is, well, object-oriented. How I interpret this is that the author seems to have no idea what they're even talking about, and that they write more about code than they write code itself.
Show me what you mean, don't just talk about it.
Also, of course it's object-oriented. The article is titled a "healthy" hatred for a reason.
In similar systems that I have constructed in the past, I have used a number of methods for passing actors in different states.
Pass a fully instantiated actor via a transfer process. Nothing changes for the actor, beyond reassigning parentage.
Pass a cleaned actor via a similar process to the bench and add process. This was used to allow an actor to be reduced to a resting state as it were. In most cases you could think of it as a resurrection method that allowed discarded actors to be reinstated with only specified base properties in place.
Anyway, I don't use Lua at all, but I like these type of actor messaging models. I look forward to reading through the rest of your documentation once it's done.
Back in the dark ages, the performance difference between imperative code and functional code was too large for FP to gain much traction outside of academic circles.
By the time the hardware was anywhere near reasonable, OO had become a thing. People missed some of the key points Alan Kay had, latching on to the one thing that was immediately understandable: objects could model the nouns of your problem domain. That popularity meant that the mainstream was focused on OO, rather than FP.
I went through college in the late 2000s, at a well respected university, for computer science. I had classes devoted to OO(A/D/P); I had none devoted to FP. If you were exposed to it, it was via having to learn a Lisp in the AI class, or similar.