I personally have kind of moved away from TDD over the years, for some of these reasons: namely, if the tests match the structure of the code too closely, changes to the organization of that code are incredibly painful because of the work involved in fixing the tests. I think the author's solution is a good one, though it still doesn't really solve the problem of what you do if you realize you got something wrong and need to refactor things.
Over the years I personally have moved to writing some of the integration tests first, basically defining the API and the contracts that I feel like are the least likely to change, then breaking things down into the pieces that I think are necessary, but only really filling in unit tests once I'm pretty confident that the structure is basically correct and won't require major refactorings in the near future (and often only for those pieces whose behavior is complicated enough that the integration tests are unlikely to catch all the potential bugs).
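For illustration, here's a minimal sketch of what that integration-first style can look like (runnable with python -m unittest). PaymentService, Receipt, and CardDeclinedError are invented names, not anything from the comment above; the point is that the tests pin down the API and error contract before the internals are settled.

    import unittest

    class Receipt:
        def __init__(self, transaction_id, amount_cents):
            self.transaction_id = transaction_id
            self.amount_cents = amount_cents

    class CardDeclinedError(Exception):
        pass

    class PaymentService:
        def charge(self, amount_cents, card_token):
            # Minimal implementation so the sketch runs; a real one would
            # call out to a gateway. Written after the tests below were red.
            if card_token == "tok_declined":
                raise CardDeclinedError(card_token)
            return Receipt(transaction_id="txn_1", amount_cents=amount_cents)

    class TestPaymentServiceContract(unittest.TestCase):
        """Integration-style tests describing the contract least likely to change."""

        def test_successful_charge_returns_a_receipt(self):
            receipt = PaymentService().charge(amount_cents=500, card_token="tok_ok")
            self.assertEqual(receipt.amount_cents, 500)
            self.assertIsNotNone(receipt.transaction_id)

        def test_declined_card_raises_a_domain_error(self):
            with self.assertRaises(CardDeclinedError):
                PaymentService().charge(amount_cents=500, card_token="tok_declined")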
I think there sometimes needs to be a bit more honest discussion about things like:

* When TDD isn't a good idea (say, when prototyping things, or when you don't yet know how you want to structure the system)
* Which tests are the most valuable, and how to identify them
* The different ways in which tests can provide value (in ensuring the system is designed for testability, in identifying bugs during early implementation, in providing a place to hang future regression tests, in enabling debugging of the system, in preventing regressions, etc.), what kinds of tests provide what value, and how to identify when they're no longer providing enough value to justify their continued maintenance
* What to do when you have to do a major refactoring that kills hundreds of tests (i.e., how much is it worth to rewrite those unit tests?)
* That investment in testing is an ROI equation (as with everything), and how to evaluate the true value the tests are giving you against the true costs of writing and maintaining them
* All the different failure modes of TDD (e.g., the unit tests pass but the system as a whole is broken, mock hell, expensive refactorings, too many tiny pieces that make it hard to follow anything) and how to avoid them or minimize their cost
Sometimes it seems like the high level goals, i.e. shipping high-quality software that solves a user's problems, get lost in the dogma around how to meet those goals.
If you have the cash, spring for Gary Bernhardt's Destroy All Software screencasts. That $240 was the best money my employer ever spent on me. Trying to learn TDD on your own is asking for a lot of pain, and all you'll end up doing is reinventing the wheel.
There are a lot of subtle concepts Gary taught me that I'm still learning to master. You learn what to test, how to test it, at what level to test it, how to structure your workflow to accommodate it.
(Apologies in advance as I can't figure out how not to sound snarky here.)
Isn't that called "the design"? And if "test-driven design" fails when you don't already have the design, is it worth anything at all?
As a point of semantics: TDD generally stands for "test-driven development," not "test-driven design," though the article here does make the claim that TDD helps with design.
To reduce my personal philosophy to a near tautology: if you don't design the system to be testable, it's not going to be testable. TDD, to me, is really about designing for testability. Doing that, however, isn't easy: knowing what's testable and what's not requires a lot of practical experience which tends to be gained by writing a bunch of tests for things. In addition, the longer you wait to validate how testable your design actually is, the more likely it is that you got things wrong and will find it very painful to fix them. So when I talk about TDD myself, I'm really talking about "design for testability and validate testability early and often." If you don't have a clue how you want to build things, TDD isn't going to help.
If you take TDD to mean strictly test-first development . . . well, I only find that useful when I'm fixing bugs, where step 1 is always to write a regression test (if possible). Otherwise it just makes me miserable.
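A tiny sketch of that workflow, with an invented slugify() bug as the example: the regression test is written first, fails, and then the fix makes it pass.

    import unittest

    def slugify(title):
        # Fixed implementation; before the fix, "a  b" came out as "a--b".
        return "-".join(title.lower().split())

    class TestSlugifyRegression(unittest.TestCase):
        def test_consecutive_spaces_collapse_to_one_hyphen(self):
            # Step 1: this test was written (and failed) before the fix above.
            self.assertEqual(slugify("Hello  World"), "hello-world")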
The other thing worth pointing out is that design for testability isn't always 100% aligned with other design concerns like performance, readability, or flexibility: you often have to make a tradeoff, and testability isn't always the right answer. I personally get really irked by the arguments some people make that "TDD always leads to good design; if you did TDD and the result isn't good, you're doing TDD wrong." Sure, plenty of people have no clue what they're doing and make a mess of things in the name of testability. (To be clear, I don't think the author here makes the mistake of begging the question: I liked the article because I think it honestly points out many of the types of mistakes people make and provides a reasonable approach to avoiding them.)
That said, I do still find that while test-driven development doesn't itself create good design, it is a useful tool to help me create good design. I have a bite-size piece of functionality to write; I think about what the class should look like; I write tests to describe the class; I write the class. The key thing is that the tests are a description of the class. The act of writing down a description of something has an amazing power to force the mind to really understand it: to see what's missing, what's contradictory, what's unnecessary, and what's really important. I experience this when I write presentations, when I write documentation, and when I write tests. The tests don't do the thinking for me, but they are a very useful tool for my thinking.
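As a hedged illustration of tests-as-description (the RateLimiter example is invented, not from the comment above): read the test names top to bottom and you have a rough spec of the class.

    import unittest

    class RateLimiter:
        def __init__(self, max_calls):
            self.max_calls = max_calls
            self.calls = 0

        def allow(self):
            # Permit a call only while under the configured ceiling.
            if self.calls < self.max_calls:
                self.calls += 1
                return True
            return False

    class DescribeRateLimiter(unittest.TestCase):
        def test_allows_calls_up_to_the_configured_maximum(self):
            limiter = RateLimiter(max_calls=2)
            self.assertTrue(limiter.allow())
            self.assertTrue(limiter.allow())

        def test_rejects_calls_beyond_the_maximum(self):
            limiter = RateLimiter(max_calls=1)
            limiter.allow()
            self.assertFalse(limiter.allow())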
I do gather some places do things differently though. Must be nice.
It's difficult to ignore the solution that is staring my brain in the face and pretend to let it happen organically. I know that I will end up with a worse design too, because I'm a novice at TDD and it doesn't come naturally to me. (I'd argue that I'm a novice at everything and always will be, but I'm even more green when it comes to TDD)
I have no problem writing unit tests, I love mocking dependencies, and I love designing small units of code with little or no internal state. But I cannot figure out how to let go of all that and try to get there via tests instead.
I don't think that I'm a master craftsman, nor do I think my designs are perfect. I get excited at the idea of learning that the way I do everything is garbage and there's a better way. If I ever learn that I'm a master at software development, I'll probably get depressed. But I don't think my inability to get to a better design via TDD is Dunning-Kruger, either.
I want to see the light.
Some of us would argue that you already have.
You're already doing several reasonable things that tend to improve results: using unit tests, being aware of dependencies, being aware of where your state is held. There is ample credible evidence to suggest that both using automated testing processes and controlling the complexity of your code are good things.
There is little if any robust evidence that adopting TDD would necessarily improve your performance from the respectable position you're already in. So do the truly agile thing, and follow a process that works for you on your projects. You can and should always be looking for ways to improve that process as you gain experience. But never feel compelled to adopt a practice just because some textbook or blog post or high-profile consultant advocated it, if you've tried it and your own experience is that it is counterproductive for you at that time.
One thing that helps keep me doing that: it's only with trivial problems that you know everything important up front. Accept that your domain will surprise you. That your technology will surprise you. That your own code will surprise you if you pay close attention to what's working well and what could be better.
All the details might not be filled in, and there are surely things I overlook from the high-up view, but for the most part I already envision the solution.
The design part of TDD is just the expectations. So if you were to test an add function, for example, you might write something like:
assertEqual(add(5,2), 7)
assertEqual(add(-5,2), -3)
assertEqual(add(5,-2), 3)
before actually implementing the function. So here the design is that the add function takes two arguments. That's it. For other things like classes, your expectations will also drive the design of the class -- what fields and methods are exposed, what the fields might default to, what kinds of things the methods return, etc. Your expectations are the things you saw in your head before you started coding. So it's pretty much the same as what you do already. The benefit of TDD is in knowing that you have a correct implementation and can move on once things are green.
One thing that's easy to misinterpret is that TDD doesn't mean writing a bunch of tests before writing any code. That's pretty much waterfall development. TDD tends to work best with a tight test-code loop at the function level.
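To sketch what that might look like one level up from add() -- all names here are invented -- the expectations below fix the design decisions (constructor arguments, the default quantity, what total() returns) before the class is written:

    import unittest

    class LineItem:
        def __init__(self, unit_price, quantity=1):   # default chosen by the test
            self.unit_price = unit_price
            self.quantity = quantity

        def total(self):
            return self.unit_price * self.quantity    # returns a number, per the test

    class TestLineItemExpectations(unittest.TestCase):
        def test_quantity_defaults_to_one(self):
            self.assertEqual(LineItem(unit_price=300).quantity, 1)

        def test_total_is_price_times_quantity(self):
            self.assertEqual(LineItem(unit_price=300, quantity=4).total(), 1200)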
If you can find me a more useful example somewhere, then please show it to me.
But I'm here as a you-can-do-it-too. You might not think you want to, but I'm so glad I DID manage to get there.
Feel free to ignore this, because I respect that everyone's experience differs. But the real problem is that there are few good step-by-step tutorials that take you from start to competent with TDD. Couple that with the fact that it takes real time to learn good TDD practices, and that the vast majority of TDDers in their early stages write too many tests, bad tests, and tightly coupled tests.
Just as it's taken you time to learn programming -- I don't mean hello world, but getting to the competent level with coding you're at today -- it'll take a long time to get good with TDD. My case (Ruby, YMMV) involved googling every time I struggled; lots of Stack Overflow; plenty of Confreaks talks; Sandi Metz's POODR...
Like the OP says - at different stages in the learning cycles you take different approaches because you're better, it's more instinctive to you. I thought I understood the purpose of mocks/doubles, until I actually understood the purpose of mocks/doubles. When used right they're fantastic.
The key insight that everyone attempting TDD has to grok, before all else, is that it's about design, not regression testing. If you're struggling to write tests, and they're hard to write, messy, take a lot of setup, are slow to run, are too tightly coupled, etc., you have a design problem. It's exposed. Think through your abstractions. Refactor. Always refactor. Don't do RED-GREEN-GOOD ENOUGH... I did for a long time. It was frustrating.
This is a good post. Don't dismiss TDD because you're struggling. Try to find better learning tools and practice lots and listen to others who are successful with it.
It's true that sometimes fads take hold and we can dismiss them as everyone doing something for no reason. But cynicism can take hold too and we can think that of everything and miss good tools and techniques. TDD will help you be a better coder - at least it has me. If your first response to this post was TDD is bullshit, give it another try.
That itself might be worth a ton to you.
This is my problem exactly, and I wouldn't say I have a design problem. My application is a Django app that returns complex database query results. Creating the fixtures for ALL of the edge cases would take significantly longer than writing the code. At this stage it is far more efficient to take a copy of the production database and check things manually. It helps that my app is in-house only, so users will report straight away when something isn't working.
But to say that I have a design problem because tests are going to be difficult to implement is just plain wrong.
However, in a classical language it's easier to organize stuff into classes, and for the purposes of a post like this one it's easier to convey. But you're dead on.
It's true that nothing forces you to refactor, but I think wanting that is a symptom of treating TDD as a kind of recipe-based, prescriptive approach. It is not a reflection of the nature of TDD as a practice or habit.
It's a subtle difference, but important:
A recipe says "do step 3 or your end result will be bad"
A practice says "do step 3 so you get better at doing step 3"
Has anyone else solved this chicken-and-egg dilemma?
IMHO, junior programmers tend to think that over-specifying a design helps them; only a master can recognize the brilliance of something like SMTP/REST/JSON over X.400/SOAP/XML. TDD just helps them over-specify their bad designs.
That said, TDD is a wonderful tool in the hands of a master. It's like photography: a $10,000 camera won't help you solve your composition problems. Tech can help ensure Ansel Adams doesn't take a photo with the wrong focus, but a properly focused, poorly composed image does not a masterpiece make.
I agree that teaching TDD exactly how I do it today can be a bit overwhelming from a tooling perspective currently, but conceptually I think visualizing it as a reductionist exercise with a tree graph of units is pretty simple.
They have to write the inputs and then the expected results.
It gets them thinking about the concept of using tests as part of the design practice.
Later, I give them the unit tests and they have to write the code. This is usually a rewritten version of a previous program so they see the text-based test plans in action as unit tests.
Then I might give them the empty test and an empty implementation, asking them to fill in the test first, then the implementation.
Finally I ask for a completely new feature, and they have to figure out how to write the test. And I ask them to go about it with a test plan.
After a few semesters of this, I think I'm ready to say that this is successful for getting the "beginners" there.
It doesn't address everything, but I think it's a good start.
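A guess at what the "empty test and an empty implementation" handout from the step above might look like -- the exercise itself is invented. Students fill in the assertion first, watch it fail against the empty body, then write the implementation:

    import unittest

    def fahrenheit_to_celsius(f):
        # Handed out as `raise NotImplementedError`; students fill this in
        # second, after writing the failing test below from their test plan.
        return (f - 32) * 5 / 9

    class TestFahrenheitToCelsius(unittest.TestCase):
        def test_freezing_point(self):
            # Students write this assertion first.
            self.assertEqual(fahrenheit_to_celsius(32), 0)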
I used to be an instructor for a living, and I kind of equated lectures to waterfall and exercises to XP. There is even a semantically analogous term in teaching research, problem-based learning (each word corresponds to the respective word in test-driven development -- cool, right?). Is there anyone else who sees these analogues, or am I completely crazy here?
As a developer that often prefers tests at the functional level, the primary benefit of tests for me is to get faster feedback while I am developing.
* The unit is no longer portable and can't be pulled from the context it was first used in (e.g., into a library or another app) without becoming untested. And adding characterization tests later is usually more expensive.
* A developer who needs to make a change to that unit needs to know where to "test drive" that change from, which requires that they know where to look for the parent's test that uses it. That's hard enough, but it completely falls over when the unit is used in two, three, or more places. Now a bunch of tests have to be redesigned, and none of them are easy to find.
* Integrated unit tests like this lead to superlinear build-duration growth, because they each get slower as the system gets bigger. This really trips teams up in year 2 or 3 of a system.
I guess I haven't been involved in too many 2-3 year monolithic projects. Maybe that's when a stricter symmetrical unit test policy makes the most sense.
What other levels of tests do you end up running besides your unit tests? Do you have any integrated unit tests? Functional tests? End to end tests?
But one detail ran counter to my personal practice.
I don't believe that "symmetrical" unit tests are a worthy goal. I believe in testing units of behavior, whether or not they correspond to a method/class. Symmetry leads to brittleness. I refactor as much as possible into private methods, but I leave my tests (mostly) alone. I generally try to have a decent set of acceptance tests, too.
Ideally, you specify a lot of behavior about your public API, but the details are handled in small private methods that are free to change without affecting your tests.
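A small sketch of that split, with invented names: the test pins down only the public report() contract, while the underscore-prefixed helpers stay free to be split, merged, or renamed without touching the test.

    import unittest

    class InvoiceFormatter:
        def report(self, items):
            # Public API: the only behavior the test specifies.
            return "\n".join(self._line(name, cents) for name, cents in items)

        def _line(self, name, cents):           # private: free to refactor
            return f"{name}: {self._dollars(cents)}"

        def _dollars(self, cents):              # private: free to refactor
            return f"${cents / 100:.2f}"

    class TestInvoiceFormatter(unittest.TestCase):
        def test_report_lists_each_item_with_a_dollar_amount(self):
            text = InvoiceFormatter().report([("Soda", 150), ("Chips", 99)])
            self.assertEqual(text, "Soda: $1.50\nChips: $0.99")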
This approach is not concerned with brittleness or being coupled to the implementation because each unit is so small that it's easier to trash the object and its test when requirements change than it is to try to update both dramatically.
Personally, I like my tests to be pretty clearly about the behavior of the contract, and not the implementation, which is hard when you require every method have a test.
I'd also be concerned that other team members would be reluctant to delete tests, as this is a dysfunction I see often and try to counteract with varying degrees of success.
When I'm teaching TDD, the kata I have everyone go through is a simple order system.
The requirements are something like:
A user can order a case of soda
The user should have their credit card charged
The user should get an email when the card is charged
The user should get an email when their order ships
If the credit card is denied, they should see an error message
(etc....)
This way they can think about abstracting out dependencies: an IEmailService, an ICreditCardService, etc. There are no dependencies for a Roman numeral converter.
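The comment uses C#-style interface names; here is one hedged way the same kata might look in Python, with unittest.mock standing in for those abstractions (all names invented):

    import unittest
    from unittest import mock

    class CardDeclined(Exception):
        pass

    class OrderService:
        def __init__(self, cards, emails):
            self.cards = cards        # plays the ICreditCardService role
            self.emails = emails      # plays the IEmailService role

        def place_order(self, user, amount_cents):
            self.cards.charge(user, amount_cents)   # may raise CardDeclined
            self.emails.send(user, "Your card was charged.")

    class TestOrderService(unittest.TestCase):
        def test_user_is_emailed_when_the_card_is_charged(self):
            cards, emails = mock.Mock(), mock.Mock()
            OrderService(cards, emails).place_order("alice", 1200)
            cards.charge.assert_called_once_with("alice", 1200)
            emails.send.assert_called_once_with("alice", "Your card was charged.")

        def test_no_email_when_the_card_is_declined(self):
            cards, emails = mock.Mock(), mock.Mock()
            cards.charge.side_effect = CardDeclined()
            with self.assertRaises(CardDeclined):
                OrderService(cards, emails).place_order("alice", 1200)
            emails.send.assert_not_called()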
All his classes ended in "er".
He's not writing object-oriented software; he's writing imperative software with objects.
The observation that "[s]ome teachers deal with this problem by exhorting developers to refactor rigorously with an appeal to virtues like discipline and professionalism" reminds me of E. O. Wilson's remark that "Karl Marx was right, socialism works, it is just that he had the wrong species."
If test-driven design were the programming panacea its proponents sometimes seem to make of it, Knuth would have written about it in TAOCP. Instead Knuth advocates Literate Programming. TDD seems to attract a cult-like following, with a relatively high ratio of opinion to cited peer-reviewed literature among proponents.
TDD as it is commonly understood seems to me like the calculational approach to program design (cf. Anne Kaldewaij, Programming: The Derivation of Algorithms), only without the calculation and without predicate transformers. Still, it can be a useful technique.
There is no "right" way to program. This was evident from the beginning, when Turing proved the unsolvability of the halting problem. (Conventions are another matter.)
I'd like TDD to be more than just another way to relearn those old rules, especially if we arrive at the same conclusions on a circuitous path. Perhaps the old design rules, object patterns, etc. have to each be integrated with a testing strategy, e.g. if you're using an observer you have to test it like this and if you refactor it like that you change your tests like so.
The general rules are easy to understand and your post makes perfect sense but once you formulate your new design approach you'll have to find a way to teach it precisely enough to avoid whatever antipattern is certain to evolve among the half-educated user community, which usually includes myself and about 95% of everyone else.
I haven't designed code the way he's advocating, but I have attempted TDD by starting with the leaves first. Here are the downsides to that:
1) Sometimes you end up testing and writing a leaf that you don't end up using or needing.
2) You realize you need a parameter you didn't anticipate. E.g.: "Obviously this patient report needs the Patient object. Oh crap, I forgot that there's a requirement to print the user's name on the report. Now I've got to get that User object and pass it all the way through."
Maybe these experiences aren't relevant. As I said, I haven't tried to "Fake It Until You Make It".
So what? Just delete it. Your version control system should have a record of what it was if you end up needing to go back to it.
I still follow the old Code Complete method: think about the problem, sketch it out, then finally implement with unit tests. The results are the same, and it's a lot less painful than greenhorn-TDD.
So now they re-formalize exactly the "rigid" ISO 9001 they were trying to throw out.
What irony.
But that 'test' word really puts people in the wrong frame of mind at the outset.
Thank you! Details of this post aside, this gave me an Aha! moment and I feel like I'm finally leaving the WTF mountain.
Isolating things is very important for making code easier to test, and it lowers the risk of tests breaking when you change other parts of the system. Sometimes isolating one part from another is hard work. Typemock makes it easier, but at the same time it ties you closer to the part that you are trying to isolate from.
Take a database, for example. You want to test something that eventually stores something in a database. You can either build a thin layer abstracting away your database so that you can test the functionality without depending on the database, or you can couple more tightly to the database and use tools like Typemock to get rid of it in test mode. If you later want to change the way you store data, you have production code tightly coupled to the current storage strategy AND tests tightly coupled to the current storage strategy...
Typemock can be of great help sometimes, but really you should strive to find better designs instead.
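For contrast, a minimal sketch of the "thin layer" option described above (all names invented): production code depends on a small save/find contract, and the test substitutes an in-memory implementation instead of mocking the database away.

    import unittest

    class InMemoryUserStore:
        """Test double implementing the same save/find contract as the real store."""
        def __init__(self):
            self._rows = {}

        def save(self, user_id, name):
            self._rows[user_id] = name

        def find(self, user_id):
            return self._rows.get(user_id)

    class Registration:
        def __init__(self, store):
            # Production code depends on the thin storage layer,
            # not on any concrete database.
            self.store = store

        def register(self, user_id, name):
            self.store.save(user_id, name.strip().title())

    class TestRegistration(unittest.TestCase):
        def test_registered_names_are_normalized_before_storage(self):
            store = InMemoryUserStore()
            Registration(store).register(7, "  ada lovelace ")
            self.assertEqual(store.find(7), "Ada Lovelace")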