I opened up the demo page and generated the tests. At first glance, I see a bunch of tests that just verify that the given code does exactly what it was written to do. That misses the _entire_ point of tests. Oh, and there's only a single test case for each function.
Looking a bit more closely, the test cases wouldn't even pass. They're just filled with placeholder values. Okay, fine, the boilerplate is generated, but you have to fill in the expected returns.
What about mocks? Why can I only have one test case per function? Also, none of these test cases are documented. You're really still writing the bulk of the tests yourself anyway.
I'm not saying there is no value here, but if there is value, that demo shows none of it.
There is almost no use for unit tests, as they lock down function implementations without verifying functionality. They have some use for a programmer to check that what they wrote is what they intended, and maybe for data structure methods (not in Go, of course), but that means maybe 1 in 50 methods justifies a unit test.
Everything else should be system tested, to see if the components fit together and if the interactions between various parts of the application really are what you think they are.
It's like javadoc back in the day. "Document all your function parameters, you've only covered x%", ... so you write a parser that walks the app and simply adds javadoc everywhere, giving the obvious descriptions to obvious names. Two hours of work for the generator (actually interesting work), half an hour to read through it and change a few things, and boom, 30,000 lines added in a day. And the worst part is, you've just made everything harder to read, but everybody's happy with you.
Unit testing is not a mundane exercise in making your manager happy; it is about validating that the thing you just wrote actually works, so that you can move on to the next thing, eventually aggregating all of that work together into code that works!
As others have said, you test inputs and outputs (and maybe some failure modes, depending on the complexity of the situation). When you're working on a parser, or a lexical analyzer of any sort, wouldn't it be nice to know that it is capable of parsing what you thought it should?
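A minimal sketch of what that looks like, with an invented toy parser (the function and its format are hypothetical, purely for illustration): the tests exercise inputs, outputs, and one failure mode, without caring how the parsing is done internally.

```python
# Hypothetical example: unit-testing a tiny parser by its inputs and
# outputs rather than its implementation details.

def parse_version(s: str) -> tuple:
    """Parse a 'major.minor.patch' version string into a tuple of ints."""
    parts = s.split(".")
    if len(parts) != 3:
        raise ValueError(f"expected three dot-separated fields, got {s!r}")
    return tuple(int(p) for p in parts)

# Input/output checks: these pin down the contract, so rewriting the
# function (say, with a regex) would still pass them.
assert parse_version("1.2.3") == (1, 2, 3)
assert parse_version("10.0.7") == (10, 0, 7)

# One failure mode: malformed input must raise, not return garbage.
try:
    parse_version("1.2")
except ValueError:
    pass
else:
    raise AssertionError("malformed input should raise")
```

The point is that each assertion states something the parser is *supposed* to accept or reject, which is exactly the "can it parse what I thought it should?" question.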
System/black-box/integration tests are too high-level, and the more you rely on them as your sole testing, the more brittle your suite becomes and the harder it is to track down the cause of test failures. It also means you have to spend more time constructing test scenarios to test the inner workings of the system, things that are easier to do in unit tests. Bloated tests at that layer actually make it MORE difficult to refactor.
Decent unit tests help you define the inputs and outputs of your functions/classes/design. If it's hard to write a unit test for something, then you probably have side effects in the code that you can't properly account for (bad code design). When it gets to the point that you need to refactor and the unit test is in your way, then guess what? Refactor or delete the test...
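A quick sketch of that "hard to test means side effects" point, with invented function names: the first version entangles computation with file I/O, so a test needs a real file; extracting the pure core makes the test trivial.

```python
import io

def write_report_hard_to_test(path, values):
    # Computation and I/O mixed together: testing this needs a real
    # filesystem, which is the design smell being described.
    with open(path, "w") as f:
        f.write(f"total={sum(values)}\n")

def summarize(values):
    # Pure function: same inputs always produce the same output.
    return f"total={sum(values)}\n"

def write_report(out, values):
    # Thin side-effecting shell around the pure core; `out` is any
    # file-like object, so tests can pass an in-memory buffer.
    out.write(summarize(values))

# The logic is now unit-testable without touching the filesystem.
assert summarize([1, 2, 3]) == "total=6\n"
buf = io.StringIO()
write_report(buf, [1, 2, 3])
assert buf.getvalue() == "total=6\n"
```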
This is... curious, given that every place I've ever been with any sort of testing culture tested contracts, not function implementations. They test the range and domain of the function, not its internal behavior.
Unit tests do not replace system-level/integration testing, but to dismiss them out of hand is manifestly unwise.
1) A smallish number of full end-to-end integration tests which actually make TCP requests to your app, and which actually have services talk to each other, etc.
2) A medium number of black-box integration tests that just make HTTP requests to a single isolated service or component and verify that the response comes back as expected (with no expectations about what happens between input and output).
3) A much larger number of unit tests that directly execute business logic (calling the code directly instead of calling the server externally) to verify various edge conditions in the underlying code.
To use a real-world example: you might have full end-to-end integration tests to ensure that your service allows the creation of a new account, that when the account is created an email is dispatched, and that an authentication cookie is set which gives the new user the ability to make requests to the service.
This is just a high-level test, though. To back it up you should have at least a couple hundred unit tests verifying that various edge cases with email formats, name formats, Unicode characters in input, etc. are handled correctly.
To keep test suite execution time down, it doesn't make sense to run each of these hundreds of edge cases as full end-to-end integration tests, or you end up with a horrific suite that takes minutes to run. Instead, use unit tests to cover the underlying edge conditions. You can usually run a hundred unit tests in the time a single integration test takes.
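To make that concrete, here is a sketch of what a handful of those cheap edge-case unit tests might look like. The validator is deliberately naive and invented for illustration (real email validation is far more involved; see RFC 5321/5322); the point is that each row of the table costs microseconds, where an equivalent end-to-end signup test would cost a full request cycle.

```python
import re

def looks_like_email(s: str) -> bool:
    # Naive, illustration-only check: something@something.something
    # with no spaces or extra @ signs.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None

# Table-driven edge cases: easy to grow to hundreds of rows.
cases = [
    ("user@example.com", True),
    ("üser@example.com", True),        # non-ASCII local part
    ("no-at-sign.example.com", False),
    ("two@@example.com", False),
    ("trailing-dot@example.", False),
    ("has space@example.com", False),
    ("", False),
]
for addr, expected in cases:
    assert looks_like_email(addr) == expected, addr
```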
I would disagree. You can change the implementation all you want, but you cannot change the functionality (contract). If your unit test verifies implementation rather than functionality, then it is a poor test.
I usually find that such tests result from using mocks poorly.
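A sketch of that failure mode (function names and values invented for illustration): the first test uses a mock to assert *how* the code computes its result, so it breaks on any harmless rewrite; the second asserts only the contract, inputs to output.

```python
from unittest import mock

def total_price(items, tax_rate):
    # Business logic under test: subtotal plus tax, rounded to cents.
    subtotal = sum(item["price"] for item in items)
    return round(subtotal * (1 + tax_rate), 2)

items = [{"price": 10.0}, {"price": 5.0}]

# Brittle: pins the implementation. If total_price switches from
# sum() to a manual loop, this test fails even though the observable
# behavior is unchanged.
with mock.patch("builtins.sum", wraps=sum) as mocked_sum:
    total_price(items, 0.1)
    mocked_sum.assert_called_once()

# Robust: tests the contract only (range and domain, not behavior).
assert total_price(items, 0.1) == 16.5
assert total_price([], 0.1) == 0.0
```

The mock-based assertion is exactly a test that "verifies implementation rather than functionality"; the plain assertions survive any refactor that preserves the contract.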
Then I realised that this is a demo from a company that someone actually wants to make their fortune with: