The biggest thing I've seen blow up actual OOP projects has been a lack of respect for circular dependencies in the underlying domain. If you have one of those problems where it is ambiguous which type "owns" another type, then the moment you start writing methods in these types you are treading into the dark forest. Oftentimes it is unclear that your problem exhibits circular dependencies until code is already being written and shipped.
My approach to these situations is to start with a relational data model. A SQL schema (and its representative DTOs) can model circular dependencies competently. You can then have additional object models (views) that are populated from the same relational data store (just a different query). Another advantage of the relational modeling approach is that it is very easy to explain things to the business before you write a single line of code [0]. The purpose of a SQL table can be demonstrated with a sample Excel sheet of mock business data.
This path was largely inspired by Out of the Tar Pit [1] and practical experience in fairly wicked domains (semiconductor mfg., banking, etc). I am not sure Functional Relational Programming is the answer for everything, but the "Relational" part certainly seems to be universally applicable.
[0]: https://en.wikiquote.org/wiki/Fred_Brooks#:~:text=Show%20me%....
OOP will let you express this kind of problem from a data modeling perspective (List<T> on each type), but from a serialization and dependency perspective you have to pick a "winner". In banking, it is unclear which type should be king.
The relational approach is to use a join table. Model the actual relationship itself and its relevant attributes (role on the account, beneficiary %, etc.). This also handles any arbitrary graph of accounts and customers, assuming you are using a modern database engine that supports the WITH RECURSIVE and CYCLE keywords.
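A sketch of the join-table idea (all table/field names here are invented for illustration, not from any real schema): rows for the relationship itself carry its attributes, and a cycle-safe walk over those rows does in application code roughly what WITH RECURSIVE with a cycle guard does server-side.

```typescript
// Hypothetical rows mirroring a SQL schema: customers, accounts, and a
// join table (account_holder) that models the relationship itself.
interface AccountHolder {
  customerId: number;
  accountId: number;
  role: "owner" | "beneficiary";
  beneficiaryPct?: number; // attribute of the relationship, not of either side
}

const holders: AccountHolder[] = [
  { customerId: 1, accountId: 10, role: "owner" },
  { customerId: 2, accountId: 10, role: "beneficiary", beneficiaryPct: 50 },
  { customerId: 1, accountId: 11, role: "owner" }, // shared customer links the accounts
];

// Cycle-safe traversal: from a customer, collect every account reachable
// through shared holders -- the "seen" sets play the role of SQL's CYCLE guard.
function reachableAccounts(startCustomer: number): number[] {
  const seenCustomers = new Set<number>();
  const seenAccounts = new Set<number>();
  const queue = [startCustomer];
  while (queue.length > 0) {
    const c = queue.shift()!;
    if (seenCustomers.has(c)) continue; // stop on cycles
    seenCustomers.add(c);
    for (const h of holders) {
      if (h.customerId !== c || seenAccounts.has(h.accountId)) continue;
      seenAccounts.add(h.accountId);
      for (const h2 of holders) {
        if (h2.accountId === h.accountId) queue.push(h2.customerId);
      }
    }
  }
  return [...seenAccounts].sort((a, b) => a - b);
}
```

Neither customers nor accounts "own" the other; the join table owns the relationship, and cycles are just data.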
ORMs will blow up on this kind of thing without special handling.
FHIR, an international standard for medical data, is a great example here. The circular types get so gnarly that I've personally managed to infinite-recursion the TypeScript compiler. We even managed to get 45-minute builds followed by a timeout by sneezing wrong.
https://www.hl7.org/fhir/overview.html
The tl;dr is that you can have things that belong to things that belong to things that end up belonging to the original thing after a very, very long inscrutable chain of pointers. Essentially a graph.
But one thing stuck out: I never liked most of the distillations of actions that clean architecture advocates, be they lambdas in class form (DepositAction) or interactors. I do feel strongly that actions are the right thing for describing business logic, however.
What did click for me is the conceit of a service, which the JVM world embraces. Services are plain old objects that have a method for each action you'd like to model. This is nicer than the aforementioned approaches because there is often common code to different actions. Like actions, services are the place where validation happens, persistence happens, and all of the interesting business logic. IO and orthogonal concerns are injected into them (via constructor), which lets you write tests about the core logic pretty easily.
What you get is the ability to reason about what happens without the incidental complexity of the web. Web handlers then boil down to decoding input, passing it to the service, examining the result of calling an action, and then outputting the appropriate data.
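A minimal sketch of that shape (names like `DepositService` and the repository interface are invented for illustration): the service owns validation and business logic, IO is injected via the constructor, and the web handler shrinks to decode, call, encode.

```typescript
// Invented example: a service owning the business logic, with its IO
// dependency injected so tests can pass a fake.
interface AccountRepo {
  getBalance(accountId: string): number;
  setBalance(accountId: string, amount: number): void;
}

class DepositService {
  constructor(private repo: AccountRepo) {}

  // Validation, persistence, and the interesting logic live here.
  deposit(accountId: string, amount: number): { balance: number } {
    if (amount <= 0) throw new Error("deposit must be positive");
    const next = this.repo.getBalance(accountId) + amount;
    this.repo.setBalance(accountId, next);
    return { balance: next };
  }
}

// The web handler boils down to: decode input -> call service -> encode output.
function handleDeposit(svc: DepositService, body: string): string {
  const { accountId, amount } = JSON.parse(body);
  return JSON.stringify(svc.deposit(accountId, amount));
}

// In-memory fake repo makes the core logic testable without a database.
function inMemoryRepo(): AccountRepo {
  const balances = new Map<string, number>();
  return {
    getBalance: (id) => balances.get(id) ?? 0,
    setBalance: (id, amt) => void balances.set(id, amt),
  };
}
```

Because the repo is a constructor argument, the test never touches the web or a database; it exercises exactly the logic the service models.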
That's all they should've ever been doing. :)
Where JVM-style service OOP shines is what the runtime can do around those objects:
- enable/disable/manage caching layers on procedure... uh... method calls
- play tricks with remote invocations that look like local ones
- enable advanced testing frameworks with mocked parameters and data connections
- enable/disable logging at runtime, and target specific services
- "aspects" to target various patterns of invocation and inject interceptors/decorators/etc to the procedure... uh... method call.
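Several of these (logging, caching, aspects) reduce to wrapping the method call. A toy sketch of the idea, using a JavaScript Proxy as a stand-in for what JVM containers do with dynamic proxies and bytecode weaving:

```typescript
// Toy interceptor: wrap every method call on a service, roughly the way
// JVM containers weave logging/caching/transactions around invocations.
function withLogging<T extends object>(target: T, log: string[]): T {
  return new Proxy(target, {
    get(obj, prop, recv) {
      const value = Reflect.get(obj, prop, recv);
      if (typeof value !== "function") return value;
      return (...args: unknown[]) => {
        log.push(`call ${String(prop)}(${args.join(", ")})`); // the injected concern
        return value.apply(obj, args);                        // the real invocation
      };
    },
  });
}

const calc = { add: (a: number, b: number) => a + b };
const log: string[] = [];
const traced = withLogging(calc, log); // same interface, instrumented calls
```

The caller's code doesn't change; the late binding of "what happens on a method call" is exactly where the delivered value comes from.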
JVM service management OOP is a very different thing than domain/data object modeling OOP. JVM service management OOP is a slam dunk in terms of delivered value. In the referenced Alan Kay bullet points, it is mostly extreme late binding, and kind-of message passing that delivers tremendous value.
Data modeling OOP and GUI framework OOP is the old Dog is-an Animal, but also is-a Pet, and all that headache. Because Java's behavioral composition model is basically single inheritance (yeah, there are interfaces, if you want to copy-paste or write your own delegates), it is fundamentally limited. That is the OOP that has squarely and properly been questioned over the last 10 years.
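The pain point, concretely (all names invented): a class gets exactly one superclass, so Dog can extend the Animal behavior or the Pet behavior but not both, and the interface route means writing the delegation yourself.

```typescript
// Single inheritance forces a choice; interfaces plus hand-written
// delegation is the usual escape hatch in Java-like type systems.
interface Animal { speak(): string }
interface Pet { greetOwner(): string }

class AnimalBehavior implements Animal {
  speak() { return "generic animal noise"; }
}
class PetBehavior implements Pet {
  greetOwner() { return "tail wag"; }
}

// Dog can extend only one of them; the other side is delegated by hand.
class Dog extends AnimalBehavior implements Pet {
  private petSide = new PetBehavior();
  speak() { return "woof"; }
  greetOwner() { return this.petSide.greetOwner(); } // boilerplate delegate
}
```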
My own definition has always been that an object has state and identity.
I have never considered functions/methods to be a requirement for something to be an object.
What the author seems to be saying is that if someone asked you to sell OOP, one way you'd sell it is by mentioning that an object can couple logic and state. That's a distinguishing factor between objects and other data structures.
Depends on the OOP paradigm. This is not always the case.
Functions/methods/actions/operations are just different names for the operations which mutate the state of the object. So, I would argue that they are a necessary attribute of mutable objects.
Another way to look at it is that a property setter (or whatever mechanism is used to directly mutate an object’s sub-data) is not meaningfully different from a method doing the same. You could even call it syntax sugar for the same.
I have often used data-only objects, and passed them to fixed-context functions.
It's a cheap way to get OO behavior, in non-OO languages.
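That style looks roughly like this (a sketch: plain data records plus free functions that take them, rather than methods on the data):

```typescript
// Data-only object: no methods, just state.
interface Circle { x: number; y: number; r: number }

// Fixed-context functions operating on the data -- the "poor man's OO"
// available in any language with records and functions.
function area(c: Circle): number {
  return Math.PI * c.r * c.r;
}
function translate(c: Circle, dx: number, dy: number): Circle {
  return { ...c, x: c.x + dx, y: c.y + dy };
}
```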
There's another universe of object-adjacent systems that may or may not be connected with conventional OO programming languages. Two examples I'd point to are Microsoft's COM (designed so it is straightforward to write and call COM objects from C) and the "objects" in IBM's OS/400. In both of those cases I think the reification is the important thing, although you can see reification in Java's object headers where objects get a number of attributes necessary for garbage collection, concurrency control, etc.
I worked on a system back then whose “objects” were based heavily around function pointers.
When we first got hold of the cfront C++ pre-processor, it did much the same thing but automated all the kludges we had for compile-time checking.
So I wouldn’t really class something like COM as object-adjacent; it was more “proto” OO.
It may be worth pointing out that while Smalltalk is arguably one of the key languages that popularized such ideas and other OOP concepts, these were first introduced by Simula 67.
Generally, I don't know how to (philosophically) navigate the tensions between Functional / Object Oriented / Imperative / Declarative paradigms, except to remind myself about The Thing That Actually Matters (in my estimation)... to always remember that The State is the frenemy.
For the love of State is the root of all evil: which while some coveted after, they have erred from Lambda the Ultimate, and pierced themselves through with many sorrows. --- Yours Truly.
:)
[1] https://www.evalapply.org/posts/what-makes-functional-progra...
(edit: forgot to link to blog post)
Well, then it is the context that is the actual receiver, isn't it? In this scenario the object may never even receive the message, because the context has already processed it on its own, so calling the object a "receiver" would be incorrect.
> a composite object of an int and a float receive message „add“ - who decides which implementation to use
The composite object itself, who else? It can do anything, including doing nothing, not using any of its implementations, dividing its int by its float, etc.
So from that perspective an int-float object would be built by composition, and the containing object would receive and process the message before dispatching to its component objects as it saw fit, to accomplish the task of being an int-float object.
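A toy version of that composite receiver (invented, just to make the dispatch concrete): the containing object receives "add" and decides how, or whether, to involve its components.

```typescript
// Composite int-float object: the container receives the message and
// decides what to do with its component values -- including nothing.
class IntFloat {
  constructor(private i: number, private f: number) {}

  receive(message: string): number {
    switch (message) {
      case "add":    return this.i + this.f; // its own idea of "add"
      case "divide": return this.i / this.f; // or anything else it likes
      default:       return NaN;             // or ignore the message
    }
  }
}
```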
If a message is a first class entity, then an object can technically have only one "function call" -- receive message.
So that an object can simulate an arbitrary set of functions with arbitrary arguments, the implementation of the "receive message" function cannot impose conditions on the format or content of the message. Instead, the message must be encoded in a self-describing format so that the object can interrogate the message and its contents, and only then decide what to do with it, up to and including ignoring the message entirely.
To make this more concrete, imagine a JavaScript object that has only one method: receive messages encoded as JSON strings. With JSON strings we can say that the message is self-describing and is easily parsed by the object. Once the JSON string is parsed, the object can decide what to do based on the content of the message. This is both a late-binding and a dispatching activity.
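That single-entry-point object might look like this (a toy sketch, not any particular framework):

```typescript
// An object with exactly one entry point: receive a self-describing
// JSON message, interrogate it, and decide what to do with it --
// including ignoring it entirely.
class Counter {
  private count = 0;

  receive(raw: string): string {
    const msg = JSON.parse(raw); // self-describing payload
    switch (msg.type) {
      case "increment":
        this.count += msg.by ?? 1;
        return JSON.stringify({ ok: true, count: this.count });
      case "read":
        return JSON.stringify({ ok: true, count: this.count });
      default:
        return JSON.stringify({ ok: false }); // unknown message: ignored
    }
  }
}
```

Nothing in the object's interface constrains what messages exist; the binding from message content to behavior happens entirely at runtime, inside `receive`.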
It should be clear that this version of OOP does not include anything about types. That's because it was designed with LISP-like languages in mind, where symbols are themselves strongly-typed objects processed at runtime. It also means that build-/compile-time checking wasn't possible.
I'd say the modern web with JavaScript and HTTP calls is more like the original OOP design than any modern "OOP"-like programming language.
Reification has two meanings in software development. The author uses the more familiar one; it’s discussed in many books on DDD. The other is a bit more vague and unapproachable. It’s the one I want to bring to people’s attention. It can be summed up as: “Accounts, transactions and balances are not enough.”
Jim Weirich hints at this perfectly in his approach to solving the Coffee Maker [0]. I’ll quote the punch line for everyone:
> Some people may be uncomfortable with the divergence of the Analysis and the Design models. Perhaps they expect that the design model will just be a refinement of the analysis. Remember that analysis is an attempt to understand the problem domain. Although we use OO tools to represent our understanding, that understanding has only an indirect influence on the structure of the software solution (in that the software must actually provide a solution). In fact, as we refine our software design, we will find that it moves even farther away from the analysis model. Solutions oriented classes, which do not appear in analysis, will be added to the design. Classes that came from the analysis model may mutate or even disappear entirely.
This is the form of reification that I believe is more closely associated with how the term is used in classical OOD (e.g., Object Oriented Software Engineering: A Use Case Driven Approach), but I suppose people’s experiences may disagree. This is a kind of “right layer of abstraction” that has to be puzzled out, not just put down as an object because it’s a process or a “thing” in the outwardly-behavioural view of the situation.
As I mentioned, it’s not talked about enough. Jim is right to call it out: it’s uncomfortable because it’s having to dig deep into the system ontology.
[0] http://www.cs.unibo.it/cianca/wwwpages/ids/esempi/coffee.pdf
There are two types of objects: State objects and Tool objects.
The purpose of a State object is to maintain a state. Think strings, databases, files etc.
The purpose of a Tool object is to operate on State objects. Think loaders, printers, editors, transformers etc.
It’s a simple way to organise OO code. And it works. At very large scale (millions of lines of C++).
It is similar to the functional data/function split. However, it actually works better, because State objects can enforce constraints, and Tool objects can cleanly maintain internal temporary state while operating on State objects.
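A small sketch of the split (names invented): the State object enforces its invariant, while the Tool object keeps its own scratch state as it operates on State objects.

```typescript
// State object: owns data and enforces constraints on it.
class Inventory {
  private counts = new Map<string, number>();

  adjust(item: string, delta: number): void {
    const next = (this.counts.get(item) ?? 0) + delta;
    if (next < 0) throw new Error("inventory cannot go negative"); // invariant
    this.counts.set(item, next);
  }
  count(item: string): number { return this.counts.get(item) ?? 0; }
}

// Tool object: operates on State objects, keeping temporary internal
// state (a running tally) that never leaks into the State object.
class RestockPlanner {
  private planned = 0; // internal scratch state

  plan(inv: Inventory, item: string, target: number): number {
    const need = Math.max(0, target - inv.count(item));
    this.planned += need;
    return need;
  }
  totalPlanned(): number { return this.planned; }
}
```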
That's what I thought this article was going to talk about, but there's not even an honorary mention.