Was part of a team some time ago where we had 100% FE and BE test coverage. Good suite of E2E tests, functional tests and contract tests. In most cases it took longer to get the test coverage done than to write the code itself.
Overall it was tremendously valuable, as it made it pretty trivial to refactor the code or even move to a new major version of a library that was core to the architecture. I cannot imagine working nowadays without a good test suite, as code needs to evolve and you want to sleep at night as well.
Still, you'll never cover the complex business logic and interactions between the many different services in a large, complex system; those can only be tested manually by a person who knows the domain at a higher level.
Bugs still appeared in the RC and live environments when all the systems started acting together.
From my experience, the 80/20 rule is a pretty good one to have. Getting to 100% meant running the code coverage analyser and looking at all the code paths that were not yet handled, which in many cases were paths that never caused us problems in live and were not even crucial when doing major refactoring, as that was already covered by the 80%.
Looking back, at least for that specific project, I'd say having a little less coverage would have been much better and would have enabled us to move a tad faster and test new features in the field to validate whether they benefit the business or not.
Nowadays I lean towards a pragmatic view on writing tests.
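For what it's worth, if you want to enforce that ~80% bar mechanically instead of chasing the last paths to 100%, most coverage tools let you set a threshold. A minimal sketch, assuming a Jest-based TypeScript codebase (the stack and coverage tool aren't named above, so this is just one way to do it):

```typescript
// jest.config.ts -- hypothetical config that enforces ~80% coverage
// instead of demanding every last branch.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'],
  // Fail the build only when coverage drops below the 80% mark.
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```

That way the analyser still runs on every build, but nobody is obliged to hunt down the final few uncovered paths by hand.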
We've definitely got some test suites that slow us down, but...
...those suites would not pass review elsewhere in the codebase. Near as I can tell, most of the time that "our tests slow us down", it's because the test code isn't held to the same standard (i.e., the standards are massively relaxed).
That all said, what I actually want to ask -
In my head, having a good test suite - particularly a BDD-style one, like Cucumber tests - means that it's easy to add tests to cover things uncovered by manual QA.
Have you found that to be the case? Or, have you found that it could be the case, if the test suites were different in some way?
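To make that concrete, what I have in mind is something like this - a hypothetical cucumber-js step file where a bug that manual QA found gets pinned down as a scenario. The Cart class, the promo logic, and the scenario wording are invented for illustration, not taken from any real suite:

```typescript
// steps/cart-discount.steps.ts -- hypothetical cucumber-js steps.
//
// Gherkin side, e.g. features/cart-discount.feature:
//   Scenario: refreshing the cart does not apply the discount twice
//     Given a cart containing 2 items worth 50 each
//     When I apply the promo code "SAVE10" and refresh the cart
//     Then the total should be 90
import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert';

// Minimal stand-in for whatever the real domain object would be.
class Cart {
  private items: number[] = [];
  private discountPct = 0;

  add(price: number) { this.items.push(price); }
  // Setter, not accumulator, so re-applying the same code is safe.
  applyPromo(pct: number) { this.discountPct = pct; }
  refresh() { /* re-price items; the discount must not stack */ }
  total() {
    const sum = this.items.reduce((a, b) => a + b, 0);
    return sum * (1 - this.discountPct / 100);
  }
}

let cart: Cart;

Given('a cart containing {int} items worth {int} each', (count: number, price: number) => {
  cart = new Cart();
  for (let i = 0; i < count; i++) cart.add(price);
});

When('I apply the promo code {string} and refresh the cart', (_code: string) => {
  cart.applyPromo(10);
  cart.refresh();
  cart.applyPromo(10); // simulate the UI re-sending the promo on refresh, the path QA hit
});

Then('the total should be {int}', (expected: number) => {
  assert.strictEqual(cart.total(), expected);
});
```

The point being: once the step vocabulary exists, turning a QA finding into a failing scenario is mostly a few lines of plain English plus a step or two.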
> 80/20 [and not 100%]
Totes. I'm actually surprised that I've been able to hit 100%; lately it's been like 95% after I bang out the obvious tests, and then there's like one branch that's missing and it's easy to add. If/when it's hard to get that last few percent, totes agree - don't.