I've been programming for about four years now. I was taught on the job by a friend, and I've always worked by building one small piece at a time before connecting everything.
For example, if I need to make a simple 2D game, I put together the rendering function, then a separate function for handling key events, and so on.
The gist is that I don't really have a blueprint on paper; it's all in my head, and the blueprint only comes together when I glue everything together.
I've been taking an introductory programming course, and it asks me to jot down every function I need, all the variables, and write all the tests beforehand, before implementing the actual code. I've been finding this challenging. I've not found a case where writing tests actually helped with writing the program. For me, it's been a hindering process. Am I doing something wrong here?
Your programming course seems to be discussing an extreme and possibly excessive version of TDD. I have written a lot of code, and I like TDD, but in moderation. I certainly do not test every single function using TDD. I do not work out my functions in advance. However, I do take care to design and document major interfaces between components. I will write tests against those interfaces. For non-trivial parts of my code, I will write tests for smaller components. I do not take it all the way down to every function, that's nuts.
Now these tests do take time to write. And if you measure productivity by lines of code written per unit of time, well your number will go down. But that's the wrong measure. TDD increases my confidence that the tricky bits of my code are correct. It definitely catches bugs early, which is a huge productivity benefit. (And when I encounter a bug not caught by TDD, that I catch later, I will always write a new unit test to reproduce the problem.)
TDD also frees me up to clean up and refactor my code, and add new functionality. Since I have code that passes a lot of tests, I can change code with confidence because I know that the tests will pick up nearly all breakage caused by my change.
I will say that TDD has its limits. I find that TDD has been a dismal failure when the thing I'm testing has a dependency on some complex external thing, e.g. a service or a database. If you try to "mock" that external thing, you end up wasting tons of time debugging your mock database (for example). Also, external things with state (like services and databases) don't really fit TDD which depends on fast setup and teardown. Once you have these external dependencies, you are better off writing system tests.
In Postgres at least, wouldn't your framework create a DB transaction that is then rolled back at the end of the test?
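A minimal sketch of that pattern, with Python's built-in sqlite3 standing in for Postgres (an assumption for the sake of a runnable example; test frameworks like pytest-django do essentially this for you):

```python
import sqlite3

def run_in_rolled_back_transaction(conn, test_fn):
    """Run test_fn inside a transaction, then undo every write it made."""
    try:
        test_fn(conn)
    finally:
        conn.rollback()  # the test's inserts/updates vanish here

# sqlite3 stands in for Postgres; the pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.commit()

def my_test(c):
    c.execute("INSERT INTO users VALUES ('alice')")
    # inside the test, the row is visible
    assert c.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

run_in_rolled_back_transaction(conn, my_test)
# after the rollback, the table is clean again for the next test
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 0
```

The win is that teardown is a single rollback, so tests stay fast and can't leave junk behind for each other.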
Tools like Mockito can make a big difference.
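Mockito is Java; for a self-contained sketch here, Python's unittest.mock plays a similar role. The function and repository interface below are made up for illustration:

```python
from unittest.mock import Mock

# Hypothetical code under test: depends on a repository, not a database.
def greeting_for(user_repo, user_id):
    user = user_repo.find(user_id)
    return f"Hello, {user['name']}!"

# Stand in for the real database-backed repository with a mock.
repo = Mock()
repo.find.return_value = {"name": "Ada"}

assert greeting_for(repo, 42) == "Hello, Ada!"
repo.find.assert_called_once_with(42)  # verify the interaction, Mockito-style
```

The point is the same as with Mockito: the test exercises your logic without any real database in the loop.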
I worked on a project which was terribly conceived, specified, and implemented. My boss said that they shouldn't even have started it and shouldn't have hired the guy who wrote it! Because it had tests, however, it was salvageable, and I was able to get it into production.
This book
https://www.amazon.com/Working-Effectively-Legacy-Michael-Fe...
makes the case that unit tests should always run quickly, not rely on external dependencies, etc.
I do think a fast test suite is important, but there are some kinds of slower tests that can have a transformative impact on development:
* I wrote a "super hammer" test that smokes out a concurrent system for race conditions. It took a minute to run, but after that I always knew that a critical part of the system did not have races (or if it did, they were hard to find).
* I wrote a test suite for a lightweight ORM system in PHP that would do real database queries. When the app was broken by an upgrade to MySQL, I had it working again in 20 minutes. When I wanted to use the same framework with MS SQL Server, it took about as long to port it.
* For deployment it helps to have an automated "smoke test" that will make sure that the most common failure modes didn't happen.
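A miniature version of the "super hammer" idea from the first bullet, sketched with threads pounding on a shared counter (the real thing was a bigger system, but the shape is the same):

```python
import threading

def hammer(counter, lock, n):
    # each worker increments the shared counter n times
    for _ in range(n):
        with lock:
            counter[0] += 1

def test_no_lost_updates(threads=8, per_thread=10_000):
    counter = [0]
    lock = threading.Lock()
    workers = [threading.Thread(target=hammer, args=(counter, lock, per_thread))
               for _ in range(threads)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    # if the locking were broken, lost updates would show up as a shortfall
    assert counter[0] == threads * per_thread
    return counter[0]

test_no_lost_updates()
```

It's slower than a normal unit test, but one run buys a lot of confidence that updates aren't being lost under contention.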
That said, TDD is most successful when you are in control of the system. In writing GUI code often the main uncertainty I've seen is mistrust of the underlying platform (today that could be, "Does it work in Safari?")
When it comes to servers and stuff, there is the issue of "can you make a test reproducible?" For instance, you might be able to make a "database" or "schema" inside a database with a random name and do all your stuff there. Or maybe you can spin one up in the cloud, or use Docker or something like that. It doesn't matter exactly how you do it, but you don't want to be the guy who nukes the production database (or another developer's or tester's database) because the build process has integration tests that use the same connection info they do.
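One way the random-name trick might look, sketched with sqlite files in a temp directory (on Postgres you'd instead run something like CREATE SCHEMA with the generated name; the function name here is made up):

```python
import os
import sqlite3
import tempfile
import uuid

def throwaway_db():
    """Create a uniquely named scratch database so an integration test
    can never collide with production or with another developer's data."""
    name = f"test_{uuid.uuid4().hex}.db"
    path = os.path.join(tempfile.gettempdir(), name)
    return path, sqlite3.connect(path)

path, conn = throwaway_db()
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()
os.remove(path)  # teardown nukes only *our* database, nobody else's
```

Because every run gets a fresh name, two developers (or two CI jobs) can run the suite at the same time without stepping on each other.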
Mocking the database system is what I was referring to. Any two database systems have enough differences in datatypes, precise transaction semantics, default behaviors, and language dialect that it just isn't worth the effort. Add your actual database to your tests, and just deal with the consequences.
Or at least to write an integration test that mimics the manual test, so that you can refactor without having to correctly remember what manual test you did?
The flip side of all of this is, of course, that adding too much bureaucracy up front might have killed much of the initial development velocity the project had, and we might not have come as far as fast.
If you are generally someone who works more from a "bottom-up" approach, then having to define all of the functions/variables beforehand can be frustrating. This is especially true since you may not have enough experience with a larger project that requires this kind of blueprint, so you don't know what parts are important to plan out beforehand and which are just boilerplate that can be worked out later.
TDD actually works okay with "bottom up" in that you write a new test as you are making each new function or other piece of code. However, it can lead to missing important test coverage if you don't think about the big picture as far as what the software needs to be doing. TDD can also work with "top down" approaches that are useful when planning larger software, in that it gives you something to shoot for while filling out the functions you defined at the start. However, trying to learn both TDD/Unit testing and enforcing a top-down approach to solving the problem seems like a recipe for frustration.
Advice: Ignore the testing to begin with. Start with each step the software needs to take to solve the problem -- in comments, on paper, whatever. Then break these down into functions as much as possible. They can be very ugly, many-variable messes if you want; the key is to try to master a more "top down" approach before worrying about testing.
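For the 2D game from the question, that advice might look something like this (all the function names here are invented for illustration):

```python
# Top-down skeleton: each comment became a stub before any real code existed.

def load_assets():
    """Load sprites and sounds (stub for now)."""
    return {}

def handle_input(state, events):
    """Update state from key events (stub for now)."""
    return state

def update_world(state, dt):
    """Advance the simulation by dt seconds (stub for now)."""
    return state

def render(state):
    """Draw the current frame (stub for now)."""
    pass

def game_tick(state, events, dt):
    # The "blueprint" lives here: one readable line per step.
    state = handle_input(state, events)
    state = update_world(state, dt)
    render(state)
    return state

state = game_tick({"assets": load_assets()}, events=[], dt=1 / 60)
```

The skeleton runs end to end from day one, and each stub can then be filled in (bottom-up, if you like) without losing sight of the overall shape.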
But it kind of depends on the language. I.e. if you use a strongly typed language, you can encode things into your types that then don't need to be tested as much.
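A small illustration of the idea, sketched in Python with an Enum (in Python the checking is only as strong as your tooling; in a truly strongly typed language the compiler enforces this):

```python
from enum import Enum

class ConnState(Enum):
    DISCONNECTED = "disconnected"
    CONNECTED = "connected"

def describe(state: ConnState) -> str:
    # No test needed for "what if someone passes 'conected'?" --
    # the type only admits the two valid states.
    if state is ConnState.CONNECTED:
        return "link is up"
    return "link is down"

print(describe(ConnState.CONNECTED))  # link is up
```

A whole class of "garbage input" tests disappears because the garbage can't be constructed in the first place.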
Or, in languages that have a REPL, I often prototype most of what I need in some sort of scratch pad. I started a few successful internal tools in Python or Node.js with the first thousand lines in repl.it, JSFiddle, or an IPython/Jupyter notebook. Only after I see that my initial idea seems to be working do I break the code apart into libraries and preserve them with tests.
What helped me, and what might help you in this regard, is finding a way to get to the point where you can test out the piece of code you are working on as soon as possible.
I.e. don't wait to test your render function until you've finished the key handling. Writing test cases for that piecewise functionality is usually the fastest way to get there. For me, firing up e.g. ipython is usually even faster, so that is how I start :-)
The real point of TDD (IMHO) is that writing the tests forces you to use the functions, and therefore shows you the places where the functions are hard to use. You're supposed to pay attention to pain. When a test is hard to write, it's telling you that the functions are hard to use. Listen to that. Change the functions so that they're easier to use. That feedback loop is what TDD is all about.
Also, when I did TDD, we wrote the test, wrote the non-functional skeleton of the function, ran the test, watched it fail, then implemented the function, ran the test, and watched it pass. That is, a function and the tests for it were written at the same time - tests first, but the function was written within, say, five minutes or less. Writing all the tests first? Um, no. Keep the feedback loop tight.
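The tight loop described above, condensed into one runnable sketch (the slugify example is invented; normally a test runner like pytest would do the red/green reporting):

```python
# Step 1: the test, written first.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2: a non-functional skeleton that is *supposed* to fail.
def slugify(title):
    raise NotImplementedError

try:
    test_slugify()
except NotImplementedError:
    pass  # red: the failing run proves the test really exercises the code

# Step 3: the real implementation, written minutes later.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()  # green: same test, now passing
```

Watching the test fail first matters: it catches tests that would pass vacuously no matter what the code did.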
TDD doesn't work well at all when you're not sure what you're building yet, because having to constantly refactor your tests eats up time. You might be better off writing a quick and dirty prototype first and then trying TDD.
> I've not found a case where writing tests actually helped with writing the program. For me, it's been a hindering process. Am I doing something wrong here?
For what it's worth, when I was taught TDD I found TDD proponents very frustrating to talk to when I wanted an honest evaluation of the pros and cons. Several would refuse to admit any downsides to TDD and instead push back that you just weren't getting it yet. There's no silver bullet.
2. Even TDD done right has most of its productivity benefits in the long run, not the short term. Your code base will be more maintainable because it will be more decoupled, and you can refactor with confidence. In the short term you are faster without it.
Edit: I cannot recommend reading "Growing Object Oriented Software Guided by Tests" enough.
I also recommend watching some of Justin Searls' stuff.
For a personal project I don't think it's all that helpful.