I have one question for you that I feel I have to ask: have you actually practiced commercial software development on a team (say, 5+ developers on the same codebase) in this manner that you describe for a significant period of time (say, 2+ years)? and if so, do you feel the resulting software has been robust and successful?
If you want a small project that I wrote to look at, see henhouse https://github.com/mempko/henhouse. I wrote an article talking about design by contract and why it's better than TDD here https://mempko.wordpress.com/2016/08/23/the-fox-in-the-henho...
I've built a computer vision system that processed petabytes of data with only one or two production bugs in any given year. At any given time our bug list was empty. For the last five years I've built a trading system using this same process. Again, we don't keep a bug list, because if there were bugs, the system wouldn't even run; and if a bug does appear, we have to fix it immediately. We do have tests, but they are system tests. The worst bug we had, the one that took too long to catch, was in a part of the system that DIDN'T use contracts.
Design by Contract is a secret weapon people don't seem to know about.
Also see https://github.com/mempko/firestr, in which I used Design by Contract extensively. It's a GUI program, so the user can do crazy things.
Learn Design by Contract. Do system testing. Do exploratory testing. Do fuzz testing. Keep a bug list of zero. Don't waste time on unit testing.
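To make this concrete, here is a minimal sketch of what assert()-based Design by Contract looks like in C++. BoundedStack is an invented example (not from henhouse or firestr): preconditions at the top of each public function, postconditions before returning, and a class invariant checked after every mutation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class BoundedStack {
public:
    explicit BoundedStack(std::size_t capacity) : capacity_{capacity} {
        assert(capacity > 0);   // precondition: a zero-capacity stack is a caller bug
        assert(invariant());    // invariant holds on construction
    }

    void push(int value) {
        assert(!full());        // precondition: caller must check full() first
        items_.push_back(value);
        assert(top() == value); // postcondition: the pushed value is on top
        assert(invariant());
    }

    int pop() {
        assert(!empty());       // precondition: popping an empty stack is a bug
        int value = items_.back();
        items_.pop_back();
        assert(invariant());
        return value;
    }

    int top() const { assert(!empty()); return items_.back(); }
    bool empty() const { return items_.empty(); }
    bool full() const { return items_.size() == capacity_; }

private:
    // class invariant: we never exceed capacity
    bool invariant() const { return items_.size() <= capacity_; }

    std::vector<int> items_;
    std::size_t capacity_;
};
```

Any caller that violates a contract dies immediately at the violation site, which is what makes incorrect states impossible to limp along with.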
If you are looking for popular software built using Design by Contract to various degrees: SQLite (uses assertions heavily), Microsoft's .NET Framework, many of the C++ Boost libraries, parts of Rust, parts of Qt, the Vala programming language, Ada's GNAT... and many others.
Here is a research paper from Microsoft that shows the advantage of contracts for code quality. https://www.microsoft.com/en-us/research/wp-content/uploads/...
I can see it working if you have very few developers touching each piece of code, or if you get to exert control over the final application. But I don't see how it can work for large codebases or teams over long periods of time (read: large businesses)... especially not for library development, where your team is only responsible for providing library functionality (like string.indexOf(string) in my example, or matrix multiplication, or regexes, or whatever libraries do) and you don't necessarily even know who the users are. There is no "system" or "integration" at that point; you're just developing one layer of code, the library -- a piece of code responsible for doing just one thing. How the heck do you make sure arbitrary team members touching the code don't end up introducing silly bugs over time, if not with unit tests?
Have you built any commercial libraries in this manner, rather than applications? i.e. where everyone on your team is jointly responsible for your library's development (like the implementation of string.indexOf(string) in my example), and other folks (whether inside or outside the company) are the ones who piece together the libraries to create their final application(s)?
Note also that types are a contract. TypeScript is basically introducing contracts to JavaScript at the type level.
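The same idea works in C++: instead of asserting a precondition at every call site, you can encode it in a type so the contract is discharged once, at construction. NonEmpty here is a hypothetical wrapper invented for illustration, not something from the thread's projects.

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// A type that carries the contract "this string is not empty".
class NonEmpty {
public:
    explicit NonEmpty(std::string s) : value_{std::move(s)} {
        // The contract is enforced exactly once, here.
        if (value_.empty()) throw std::invalid_argument("empty string");
    }
    const std::string& get() const { return value_; }
private:
    std::string value_;
};

// first_word needs no "input must be non-empty" check: the type
// system guarantees the precondition for every caller.
std::string first_word(const NonEmpty& text) {
    auto pos = text.get().find(' ');          // npos means the whole string is one word
    return text.get().substr(0, pos);
}
```

This is the sense in which TypeScript-style types and runtime contracts are the same tool: one is checked by the compiler, the other at runtime.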
> So you'd prefer your contracts to blow up your bugs in your clients' faces, rather than catch bugs yourself prior to releasing the code to them?!
That's you arguing against contracts. Contracts need to blow up when users use the software (including you and your testers before you ship). You should ship only if you find no contracts blowing up. But you need to let them blow up in users' faces too: they provide valuable information AFTER SHIPPING. Otherwise they lose a lot of their value.
Saying contracts shouldn't run in shipped code misses the whole point about what contracts are.
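One practical consequence of keeping contracts alive after shipping: plain assert() is compiled out under -DNDEBUG, so release builds would silently lose their contracts. A common pattern (a sketch, not a quote from the projects above) is a check macro that always runs, regardless of build mode:

```cpp
#include <cstdio>
#include <cstdlib>

// Unlike assert(), this check survives release builds, so contract
// violations still fail loudly in users' hands after shipping.
#define CONTRACT_CHECK(cond)                                          \
    do {                                                              \
        if (!(cond)) {                                                \
            std::fprintf(stderr, "contract violated: %s at %s:%d\n",  \
                         #cond, __FILE__, __LINE__);                  \
            std::abort(); /* die at the violation site */             \
        }                                                             \
    } while (0)

int divide(int a, int b) {
    CONTRACT_CHECK(b != 0);  // precondition enforced in production too
    return a / b;
}
```

The abort-and-report behavior is what turns a shipped contract into a diagnostic: the crash log points at the violated condition rather than at some corrupted state three modules away.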
> That... is literally what unit tests do. Test the software before you give it to users.
No, unit tests test a portion of the software before shipping. My argument is they aren't worth the cost and provide little value. Most of the value comes from SYSTEM tests, integration tests, exploratory testing, fuzz testing, etc. Unit tests are the weakest form of testing.
Here is a great argument against them, "Why Most Unit Testing is Waste" by Coplien. It's a dense argument, and one I agree with.
https://wikileaks.org/ciav7p1/cms/files/Why-Most-Unit-Testin...
> That's you arguing against contracts.
No. Notice what I wrote earlier? Where I very specifically said "contracts are awesome but not substitutes for unit tests"?
That's exactly the same thing I was saying here. I was arguing against relying on contracts to catch the bugs unit tests would've caught. Nobody was ever telling you to avoid contracts anywhere. Like I said, they're awesome, and both are valuable. I'm just saying they don't substitute for your unit tests. Just like how screwdrivers don't substitute for hammers, as awesome as both are.
> Saying contracts shouldn't run in shipped code misses the whole point about what contracts are.
I never said that, you're putting words in my mouth.
> No, unit tests test a portion of the software before shipping. My argument is they aren't worth the cost and provide little value. [...]
I just gave you a detailed, point-by-point explanation of what you've been missing in the other thread with your own purported counterexamples: https://news.ycombinator.com/item?id=41287473
Repeating your stance doesn't make it more correct.
There were no unit tests, as a) the code wasn't written in a style to be "unit" tested, and b) what the @#$@Q#$ is a unit anyway? Individual functions in the code base (a mixture of C89, C99, C++98 and C++03) more or less enforced "design by contract" via calls to assert() checking various conditions. That caught bugs by preventing incorrect use of the code when modifying it.
Things only got worse when new management (we were bought out) came in, and started enforcing tests to the point where I swear upper management believed that tests were more important than the product itself. Oh, the bug count shot up, deployments got worse, and we went from "favorite vendor" to "WTF is up with that vendor?" within a year.
It sounds like you, too, were doing application development rather than library development. By which I mean that -- even if you were developing a "library" -- you more or less knew where & how that library was going to be used in the overall system/application.
That's all fine and dandy for your case, but not all software development has the luxury of being so limited in scope. Testing at the application level fundamentally misses far more edge cases than unit tests ever would. And setup/teardown takes so much longer when every single change to some part of the codebase requires you to re-test the entire application.
When your project gets bigger or the scope becomes open-ended (think: you're writing a library for arbitrary users, like Boost.Regex), you literally have no application or higher-level code to test the "integration" against -- unit tests are your only option. How else are you going to test something like regex_match?
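To illustrate the point: for a standalone regex engine there is no higher layer to integration-test against, so tests against the public matching API are the natural fit. A minimal sketch using std::regex as a stand-in for any regex library (the test cases are invented, not from Boost.Regex's actual suite):

```cpp
#include <regex>
#include <string>

// Thin wrapper over the public API under test.
bool matches(const std::string& pattern, const std::string& input) {
    return std::regex_match(input, std::regex(pattern));
}

// Each case pins down one documented behavior of the matcher.
bool run_regex_tests() {
    bool ok = true;
    ok = ok &&  matches("a+b*", "aaab");     // repetition: one-or-more then zero-or-more
    ok = ok && !matches("a+b*", "b");        // 'a+' requires at least one 'a'
    ok = ok &&  matches("[0-9]{3}", "123");  // bounded repetition, exact count
    ok = ok && !matches("[0-9]{3}", "12");   // too short: must not match
    return ok;
}
```

Whether you call these "unit" tests or "API" tests, they exercise the library through the same entry points its unknown future users will.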
> what the @#$@Q#$ is a unit anyway?
https://res.cloudinary.com/practicaldev/image/fetch/s--S_Bl5...
P.S. I have to also wonder, how much bigger was the entire engineering team compared to the 3-7 people you mention? And if it was significantly bigger, how often were they allowed to make changes to your team's code? It seems to me you probably had tight control over your code, and it didn't see much flux from other engineers. Which, again, is quite a luxury and not scalable.
Towards the end, management was asking us to test for negatives ("Write tests to make sure that component T doesn't get a request when it isn't supposed to," when component T was a networked component that queried a DB not under our control). Oh, and our main business logic made concurrent requests to two different DBs, and again, I had to write code to test all possible combinations of replies, timeouts and dropped traffic to ensure we did The Right Thing. Not an easy thing to unit test, as the picture you linked to elegantly showed (and you side-stepped my question, I see).
The entire engineering team for the project was maybe 20-25 people, split into five teams; each team had full control over its particular realm, though all were required for the project as a whole. Our team did C and C++ on Solaris; three teams used Java (one for Android, and two on the server side) and the final team did the whole Javascript/HTML/CSS thang.
You're right that we didn't see much flux from the other teams, nor from our customer (singular---one of the Oligarchic Cell Phone Companies), but that's because the Oligarchic Cell Phone Company doesn't move fast, nor did any of the other teams want to deal with phone call flows (our code was connected to the Phone Network). We perhaps saw the least churn over the decade simply due to being part of the Phone Network---certainly the other teams had to deal with more churn than us (especially the Android and JS/HTML teams).
Also, each team (until new management took over) handled development differently: some teams used Agile, some scrum, some none. Each team had control. Until we didn't. And then things fell apart.
If I were developing a library, the only tests I might have would be tests of the public API and nothing more. No testing of private (or internal) code, as that would likely churn too much to be useful. Also, as bugs are discovered, I would keep the code that reproduces the error to prevent further regressions, as long as the API doesn't change.
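As a sketch of that workflow, suppose the library exposes a trim function (an invented example, not from the actual project): once a bug is found via the public API, the failing input stays in the suite forever.

```cpp
#include <string>

// Public API under test. The npos check is the (hypothetical) fix for a
// bug where all-whitespace input produced an out-of-range substr call.
std::string trim(const std::string& s) {
    auto first = s.find_first_not_of(" \t");
    if (first == std::string::npos) return "";  // all whitespace: nothing to keep
    auto last = s.find_last_not_of(" \t");
    return s.substr(first, last - first + 1);
}

bool run_api_tests() {
    bool ok = true;
    ok = ok && trim("  hi  ") == "hi";  // ordinary case
    // Regression case kept after the all-whitespace bug was fixed:
    ok = ok && trim("   ").empty();
    return ok;
}
```

Only the public signature is exercised, so internal refactoring never breaks the suite; only a genuine behavior change does.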
One thing I did learn at that job is never underestimate what crap will be sent your way. I thought that the Oligarchic Cell Phone Company would send good data; yes for SS7 (the old telephony protocol stack), but not at all for SIP (the new, shiny protocol for the Intarweb age).