But often this gets conflated in OO because objects are the heart of modularization (via encapsulation) along with state, interaction, and any number of other things.
---
As a comparison point, you might examine ML modules. They look a bit like this:

    module counter(X)
      count : X -> Int
      incr  : X -> X

and they specify nothing more than the fact that some unknown type X satisfies the interface `(count, incr)`. We can then create a concrete implementation of such a counter:

    incCounter : counter
    incCounter = structure(Int)
      count n = n
      incr n = n + 1

The `incCounter` internally uses `Int` to represent `X`, but externally it's completely impossible to tell. This means that modules define exactly two things: encapsulation and interface.

---
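A loose Python sketch of the same idea (the names `Counter`, `make_counter`, and `_IntCounter` are illustrative, not from any real library): the interface exposes only `count` and `incr`, and the representation stays hidden behind the module boundary.

```python
from typing import Protocol


class Counter(Protocol):
    """The interface: nothing about the representation leaks out."""
    def count(self) -> int: ...
    def incr(self) -> "Counter": ...


class _IntCounter:
    """Concrete implementation; the int inside is invisible to callers."""
    def __init__(self, n: int = 0) -> None:
        self._n = n  # hidden representation, like X = Int

    def count(self) -> int:
        return self._n

    def incr(self) -> "Counter":
        return _IntCounter(self._n + 1)


def make_counter() -> Counter:
    # Callers see only the Counter interface, never _IntCounter.
    return _IntCounter()
```

Code written against `Counter` cannot depend on the `Int` inside, so the representation can change without breaking callers, which is exactly the encapsulation-plus-interface pairing described above.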
So why does this fall down in OO? Because objects lend themselves to being thought of as entities which move through time and space in a stateful fashion. This means you're also likely to encapsulate differences of entity without regard for how they might change together or apart.
Returning to Parnas' quote: it's a bad idea to decompose into modules based on a flowchart. Flowcharts let you emphasize the entities of your system, but they do not reveal the boundaries of change.
So you can probably get better OO design by being clear about when you're using objects as entities (in which case perhaps you don't need encapsulation at all!) and when you're using them as modules. Once this distinction is made, it becomes clear when objects will derive from flowcharts and when they will derive from design decisions made by people.
This is too vague to be a "principle", and only causes confusion. It is about as precise, and as useful, as the "write good code" principle.
I consider myself fortunate to have written enough C++ to learn coupling and cohesion. When you screw these up in C++, you pay for it, over and over, through increased compile times and link times.
I think there are other factors at work here:
- The size of the project. I think most code written today is in the context of tiny systems, e.g. a bit of Javascript that goes with a single web page or some trivial backend code within an existing framework.
- The life cycle of the project. Most software projects have a short life time. This is for various reasons, some of them technical, some of them business related.
- You pay for bad design much farther down the road so it's harder for people to see the causation.
- The investment of individual engineers. If you move from one startup to another every year, you may not care that much about principles that affect maintainability. You should, but a lot of people don't. By the time coupling matters, you'll already be hacking somewhere else. Being disconnected from the results of your work makes it hard to internalize design principles.
- Not Invented Here. People generally aren't open to receiving lessons from other people's experience. Learning often has to happen through individual experience.
I don't think there are many successful, large, long term, software projects where everything is a jumbled mess with no design intent and everything coupled with everything.
It's no wonder people tend either to discard them or to apply them incorrectly.
Given that data, it's easy for a developer to extrapolate (more general) principles that may apply to their own concrete situations and problems, if any.
If you do the extrapolation of the principle yourself but hide the data that led you to that principle, nobody will learn anything.
I have no idea why software architects do this. I guess because they're so used to abstracting all the details away in software, they start thinking it also applies to teaching/writing. It does not.
But I can't give that to you in something that you can read in an hour, or even a day. I might be able to give it to you in something you could read in a month (of 8-hour days reading). It's really hard to show in a one-page example.
"organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations" - M. Conway
More on http://c2.com/cgi/wiki?ConwaysLaw and http://en.wikipedia.org/wiki/Conway's_law
However, I also hate this mental model of software engineering because I have often found it easier to refactor the organization. Maybe because I'm a prima donna and only like working at startups.
I think it's better to require that each portion of code have a narrow interface, so you can easily reason about what that code does and should do into the future. A function or a class must promise what it will deliver given some inputs and not violate that expectation. If you've ever had to reason about code using invariants, you'll grok this.
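A hypothetical illustration of that kind of narrow, invariant-preserving interface (the `Account` class here is invented for this sketch, not taken from any real codebase): two operations, one invariant, and every code path either preserves the invariant or refuses to run.

```python
class Account:
    """Narrow interface: deposit, withdraw, balance. Nothing else.

    Invariant: the balance is never negative.
    """

    def __init__(self) -> None:
        self._balance = 0

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount: int) -> None:
        # Refuse any operation that would break the invariant.
        if amount <= 0 or amount > self._balance:
            raise ValueError("withdrawal would violate the balance invariant")
        self._balance -= amount

    def balance(self) -> int:
        return self._balance
```

Because the invariant is enforced at the interface, you can reason locally: any code holding an `Account` knows the balance is non-negative without reading the rest of the program.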
Essentially, we have a distributed set of `devices` which interact with `customers`. Each customer has a `session` with the device. During the session, the `customer` may make various types of `payments` (coin or credit card) for various types of `fees`. Additionally, the customer may receive one or more `tickets`. The data model is getting pretty big with:
* Devices
* Sessions
* Line Items
* Allocations
* Payments
* Adjustments
* Violations
* Fees
For accounting purposes, we need to be able to map our payments to the fees and violations they are paying for. Customers might make a single payment to cover multiple violations and those violations may be across multiple sessions.
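A minimal sketch of how such a many-to-many mapping might be modeled (the names `Payment`, `Fee`, and `Allocation` mirror the list above, but the structure and the oldest-first policy are assumptions, not the commenter's actual schema): an allocation row links part of one payment to one fee, so a single payment can cover several fees across sessions.

```python
from dataclasses import dataclass


@dataclass
class Fee:
    fee_id: str
    amount: int  # cents


@dataclass
class Payment:
    payment_id: str
    amount: int  # cents


@dataclass
class Allocation:
    """Links part of one payment to one fee, possibly across sessions."""
    payment_id: str
    fee_id: str
    amount: int  # cents


def allocate(payment: Payment, fees: list[Fee]) -> list[Allocation]:
    """Spread a single payment across multiple fees, in the order given."""
    remaining = payment.amount
    allocations: list[Allocation] = []
    for fee in fees:
        if remaining <= 0:
            break
        portion = min(remaining, fee.amount)
        allocations.append(Allocation(payment.payment_id, fee.fee_id, portion))
        remaining -= portion
    return allocations
```

The join table is what makes the accounting report possible: summing `Allocation` rows by `fee_id` shows what each fee was actually paid with, regardless of which payment or session it came from.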
The number of times I've heard "this new solution fixes X but breaks Y" is frustrating, but I don't see how this could be separated out. Perhaps others have insights that would simplify all of this, but it seems to me that the essential fact of our system is that the payment/line item/allocation system is responsible for many tasks and reports. I read articles like this and pine for:
A) A project where true modularity is achievable.
B) The skills to make my current project truly modular.
"The Single Responsibility Principle (SRP) states that each software module should have one and only one reason to change. This sounds good, and seems to align with Parnas' formulation. However it begs the question: What defines a reason to change?" (emphasis added)
OK, I realize that languages evolve and all; and I see how the nearly universal appeal of using the phrase "begging the question" in this way will ensure it soon makes its way into the dictionary; but I think people should at least know the original meaning of the phrase [1] and that, in some pedantic or predominantly academic circles today, it is considered incorrect usage. That is all.
Thank you.