The author seems to believe people either mock everything or don't mock anything. Obviously using mocks for all your tests is a very bad idea, but that's generally not how things are done.
Unit tests allow you to validate a unit's behavior very quickly. If your unit test takes more than 1 second to run, it is probably a bad unit test (some would argue 1/100 of a second max, so your whole unit test suite can complete in a few seconds). In unit tests you use mocks not only to keep the test hermetic, but also to keep execution time as low as possible.
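For illustration, a minimal sketch of that kind of mock usage (hypothetical names, using Python's built-in unittest.mock):

    import unittest
    from unittest import mock

    def fetch_display_name(client, user_id):
        # Hypothetical unit under test: formats whatever the client returns.
        user = client.get_user(user_id)
        return f"{user['first']} {user['last']}".strip()

    class TestFetchDisplayName(unittest.TestCase):
        def test_formats_name(self):
            # The real client would hit the network; the mock keeps the test
            # hermetic and lets it run in well under 1/100 of a second.
            client = mock.Mock()
            client.get_user.return_value = {"first": "Ada", "last": "Lovelace"}
            self.assertEqual(fetch_display_name(client, 42), "Ada Lovelace")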
Then you should have integration & e2e tests, where you want to mock as little as possible because you want behavior as close to production as possible. For those you care less about how long they take, because you usually don't run them at the same stage as unit tests (development vs release qualification).
The author does not distinguish between these different types of testing, and the resulting article is of pretty poor quality imho.
I've certainly seen people who mock almost everything to test units at the smallest scale possible because they think that's what they're supposed to do.
E.g., I once saw someone test a factory method like:
    def make_thing(a, b, c):
        return thing(a, b, c)
with a unit test where they mocked `thing`, and ensured that calling `make_thing(a, b, c)` ended up calling `thing(a, b, c)`.
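Roughly what that test looked like (a reconstruction, not the actual code; assumes `thing` and `make_thing` live in a hypothetical module called `factories`):

    import unittest
    from unittest import mock

    import factories  # hypothetical module containing thing and make_thing

    class TestMakeThing(unittest.TestCase):
        @mock.patch("factories.thing")
        def test_calls_thing(self, mock_thing):
            # This asserts only that the implementation is the implementation:
            # it restates make_thing line for line and checks nothing meaningful.
            factories.make_thing(1, 2, 3)
            mock_thing.assert_called_once_with(1, 2, 3)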
They wrote just a shit ton of tests like this for every single method and function, and ended up writing ~0 tests that actually checked for any meaningful correctness.
This harkens back to the early obsession with "100% code coverage", when Java robots were writing tests for bean getters/accessors.
100% code coverage was a bad breadth-first metric; unit tests should instead go deep, exercising many variant inputs. Also, "100% code coverage" ignores the principle that 80% of execution is in 20% of the code/loops, so that stuff should get more attention than worrying about every single line being unit tested.
Well, unless you were in some fantastical organization of unicorn programmers that had an infinite testing budget and schedule...
A good exercise is to get 100% coverage for anything that uses ByteArrayInput/OutputStreams. The language enforces handling IOException for a bunch of methods that could throw one for a generic stream but never for a ByteArrayStream.
You should see the opposite of this: every module of code is unit testable with zero mocks, with just a small subset of untestable IO functions packed into a neat corner.
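A minimal sketch of that shape (hypothetical names): the pure logic is testable with plain data, and the IO is confined to one thin function:

    import json

    def summarize(orders):
        # Pure logic: unit testable with literal data, zero mocks.
        total = sum(o["amount"] for o in orders)
        return {"count": len(orders), "total": total}

    def load_orders(path):
        # The small untestable-IO corner: the only function touching disk.
        with open(path) as f:
            return json.load(f)

    def report(path):
        # Thin shell gluing IO to logic; exercised by integration tests instead.
        return summarize(load_orders(path))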
I've seen a lot of tests where people just mock everything by default without thinking. Smart programmers at a good company. It's an issue that does deserve more recognition. Abuse of mocks is bad for tests.
I know which company you are talking about :). I agree 100% that abuse of mocks is bad for tests. But when I clicked the link I was hoping to read an article giving a nuanced description of mocks, with some analysis of when to use and when to avoid them. Instead the article is just an opinion piece that says "Stop using mocks", as if that were actually an option.
>The author seems to believe people either mock everything or don't mock anything.
The author is saying that people frequently mock things that it would be more economical to just run, because you've got the real thing right there. Building a model of it is an expensive waste that probably won't match reality anyway, and will demand constant maintenance to stay in sync with reality.
If you're overly concerned with the speed of your test suite or how fast individual tests run, then you're probably the kind of person he's talking about. Overmocking tends to creep in with a speed fetish.
When I am developing a feature, I want to know very quickly whether or not my code's logic is correct. It is not rare during the development cycle to run the same test dozens of times because I made a silly mistake (or a few), and obviously if the test takes 30 minutes to complete, it wastes my whole day of work.
Having a set of very fast running tests is absolutely necessary in my opinion.
Once I have validated that the piece of code I wrote is doing what I intended, then I want to run other tests that do not use mocks/fakes, e2e tests that can possibly take a whole day to complete and will allow me to see if the whole system still works fine with my new feature plugged in. But this comes AFTER fast unit tests, and definitely cannot REPLACE those.
This sounds exactly right to me. You write mocks for the things that would take too much time to run frequently with the real code. (And I'm assuming you'd also write one for things you don't want to actually change, such as a third-party API that you don't control.)
But if it could be run locally, quickly, you wouldn't bother mocking it.
If that's all correct, I think you and I would do the same things. All the people screaming "no mocks!" and "mock everything!" are scary, IMO.
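E.g., a sketch of the kind of hand-rolled fake I'd reach for in the third-party-API case (hypothetical payment-gateway names; the real client would hit a remote service you don't control):

    class FakePaymentGateway:
        # Stand-in for a third-party client: no network, no real charges,
        # just enough behavior to exercise code that depends on it.
        def __init__(self):
            self.charges = []

        def charge(self, customer_id, cents):
            self.charges.append((customer_id, cents))
            return {"status": "ok", "id": len(self.charges)}

    def checkout(gateway, customer_id, cents):
        # Hypothetical unit under test.
        return gateway.charge(customer_id, cents)["status"] == "ok"

    def test_checkout_charges_once():
        gateway = FakePaymentGateway()
        assert checkout(gateway, "cust_1", 1999)
        assert gateway.charges == [("cust_1", 1999)]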
Mocks mean your code is too tightly coupled. You should be able to unit test your code by creating only fake data.
Things like dependency injection increase coupling to the point where you have to mock. Avoid dependency injection and other complexity-within-complexity features.
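A sketch of what testing with only fake data looks like (hypothetical example): the unit takes plain data in and returns plain data out, so the test needs no mocks at all:

    def apply_discount(order, percent):
        # Pure function over plain data: nothing to inject, nothing to mock.
        discounted = order["total"] * (1 - percent / 100)
        return {**order, "total": round(discounted, 2)}

    def test_apply_discount():
        order = {"id": 7, "total": 100.0}  # fake data, not a mock
        assert apply_discount(order, 25)["total"] == 75.0
        assert order["total"] == 100.0  # input not mutated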