When and When Not to Use Mocks

The first post about mocks covered the basics of how to use Python’s mock library. The Problem with Mocks explained the dangers of using mocks and advanced features of the mock library that can be used to minimize those dangers. Returning to the topic of mocks, this post discusses the complex question of when and when not to use them in your tests.

My references for this are J.B. Rainsberger’s Integrated Tests Are a Scam series and Gary Bernhardt’s videos about Boundaries and functional core, imperative shell.

It’s really about when to write isolated tests

Mocks aren’t an end in themselves. You don’t use mocks in your tests just for the sake of using mocks. Mocks are a means to an end, that end being isolated unit testing. The real question to ask, then, is: when should you write isolated tests, and when should you write integrated tests?

I’ll give my opinion on that question below, but first I want to explain what isolated and integrated tests are, and what their pros and cons are.

You can have isolated tests without mocks

It’s possible to design your code in such a way that you can write isolated tests without needing to use mocks by applying a “functional core, imperative shell” approach and using value objects as the boundaries between your units. When you can achieve fully isolated tests without mocks, that’s the best of both worlds - you get all the advantages of isolated tests (see below) and avoid the problem with mocks.
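Here's a minimal sketch of what that can look like. All the names here (`Charge`, `decide_charge`, the amounts) are invented for illustration: the "functional core" is pure logic that returns a value object, the I/O lives elsewhere in an "imperative shell", and the isolated test needs no mocks at all.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Charge:
    """A value object describing a payment to make -- the boundary
    between the functional core and the imperative shell."""
    customer_id: str
    amount_cents: int


def decide_charge(customer_id: str, items: list[int], discount: int) -> Charge:
    """Functional core: pure logic, no I/O, returns a value object.
    The imperative shell (not shown) would take the Charge and
    actually talk to the payment provider."""
    total = max(sum(items) - discount, 0)
    return Charge(customer_id=customer_id, amount_cents=total)


def test_decide_charge_applies_discount():
    # Fully isolated, yet no mocks: just values in, values out.
    charge = decide_charge("c1", [500, 250], discount=100)
    assert charge == Charge("c1", 650)
```

Because the boundary is a value object rather than a collaborating object, the test can compare results with plain equality instead of asserting on calls to a mock.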

This, however, is an advanced technique that isn’t always easy to apply, that might not be applicable when you’re working with existing code, and that may lead to complaints from other developers that it’s esoteric and too different from the style of the rest of the codebase. So for the rest of the time, when you want to write isolated tests and the unit of code you want to test does have non-trivial dependencies, there are mocks.
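When you do reach for a mock, the shape is usually something like this sketch. `Foo`, `Bar` and `lookup_title` are invented names: `Foo` is the unit under test, `Bar` is its non-trivial dependency, and the test replaces `Bar` with a `Mock` so that only `Foo`'s own logic is exercised.

```python
from unittest.mock import Mock


class Foo:
    def __init__(self, bar):
        self.bar = bar

    def greeting(self, name):
        # Delegates the non-trivial work to its dependency.
        title = self.bar.lookup_title(name)
        return f"Hello, {title} {name}"


def test_greeting_uses_bars_title():
    # Isolated test: the real Bar is never constructed.
    bar = Mock()
    bar.lookup_title.return_value = "Dr."
    foo = Foo(bar)
    assert foo.greeting("Ada") == "Hello, Dr. Ada"
    bar.lookup_title.assert_called_once_with("Ada")
```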

Isolated tests vs integrated tests

Isolated tests (with or without mocks) have a lot of advantages over integrated tests: they run fast, they point you straight at the unit that failed, they avoid the combinatorial explosion of cases that integrated tests face, and writing them pushes you toward better design.

Fallacies of integrated tests

There are a couple of fallacies about integrated testing out there that I think many programmers believe (consciously or not). I certainly used to believe both of these:

First, that the way to write solid code without bugs is to have a lot of tests to make sure there aren’t any bugs. In fact, the way to write solid code with few bugs is not high test coverage but good design. Isolated tests are the way to get that good design (and, incidentally, they’ll also result in high test coverage). Test-driven development is about good design as much as it’s about testing.

Second, that if you just write integrated tests (tests of Foo using the real Bar instead of a mock Bar), then you know that the code really works (whereas with a mock Bar you don’t really know, because the mock could be wrong).

It may be true that if you write an integrated test for “if x1 then y1” then you do know that that one particular thing really works. But that’s just one tiny thing; it doesn’t guarantee that the system as a whole works. What about “if x2 then y2”? Because of the combinatorial explosion in the number of integrated tests needed to cover every possibility, you can’t write them all. If a test is missing, the tests could all be passing even though the code is wrong.

And do you even really know that “if x1 then y1”? What if your test code is wrong and doesn’t actually test what it intends to test? The tests could all be passing even though the code is wrong.

It can even be the case that your “if x1 then y1” integrated test for Foo (using the real Bar, not a mock Bar) was correct once, but then Foo’s dependency Bar was changed in such a way that the test no longer tests what it was intended to test, yet still passes. Again, the tests could all be passing even though the code is wrong.

It’s true that mocks being out of sync with the real objects that they mock is one way for the tests to still pass even if the code is wrong. But this is just one of many ways in which if the tests are wrong or incomplete, then the tests could all be passing even though the code is wrong, and most of the ways that this can happen apply to integrated tests as well as isolated tests.
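The mock library’s autospeccing feature, one of the danger-minimizing features mentioned in the earlier post, addresses exactly this drift. A minimal sketch (`Bar` and `lookup` are invented stand-ins for one of your own classes): an autospecced mock copies the real class’s method signatures, so a call that the real `Bar` would reject is rejected by the mock too.

```python
from unittest.mock import create_autospec


class Bar:
    def lookup(self, key: str) -> str:
        ...  # real implementation lives elsewhere


# The autospecced mock mirrors Bar's real interface.
mock_bar = create_autospec(Bar, instance=True)
mock_bar.lookup.return_value = "value"

assert mock_bar.lookup("k") == "value"   # matches the real signature

try:
    mock_bar.lookup("k", "extra")        # wrong arity for the real Bar
    signature_enforced = False
except TypeError:
    signature_enforced = True
```

If the code under test starts calling a method the real `Bar` doesn’t have, an autospecced mock raises `AttributeError` instead of silently passing, so the mock can’t drift as far out of sync as a bare `Mock` can.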

The lesson is, again, that it’s really good design that produces code with fewer bugs. Good design of the kind that makes it quick and easy to write fast, readable tests so that the tests will contain fewer mistakes and omissions.

When to write isolated tests and when to write integrated tests?

So isolated tests have many advantages over integrated tests. Let’s return to the question that we started out with: when to write isolated tests (and when to use mocks to do so, if necessary) and when to write integrated tests?

On this I agree with J.B. Rainsberger in Integrated Tests Are A Scam.

When to write integrated tests

For the most part, I think a good rule of thumb is that at the boundaries where your code touches external code that you don’t control - third-party libraries (especially complex ones such as database libraries), web frameworks, the standard library - you should probably test that edge code in integration with that external code, rather than trying to mock the external code.

One reason for this is that external code often has complex interfaces: mocking them would be complex and brittle, and since it’s not your code you can’t redesign the external code to simplify its interface (at least, not at the very edge where your code finally touches the external code).

Another reason is that you want to test your understanding of the external code, to test that your code really does use the external code correctly, that that complex SQL query really does return what you want it to. External modules like database libraries are often complicated to use correctly.
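For example, here’s a sketch of testing such edge code in integration with a real database, using the standard library’s sqlite3 with an in-memory database (the schema, query, and function name are invented for illustration). The point is that the test exercises the actual SQL, so it checks our understanding of the database library rather than a mock of it.

```python
import sqlite3


def recent_titles(conn, limit):
    """Edge code under test: does this SQL really return what we want?"""
    rows = conn.execute(
        "SELECT title FROM posts ORDER BY created DESC LIMIT ?", (limit,)
    ).fetchall()
    return [title for (title,) in rows]


# Integration test setup: a real, throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (title TEXT, created TEXT)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?)",
    [("old", "2023-01-01"), ("new", "2024-01-01")],
)

# The query really runs against the real library, so a wrong ORDER BY
# or LIMIT would be caught here, where a mock would happily pass.
titles = recent_titles(conn, limit=1)
```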

One exception is when your code uses a third-party library that has real-world side-effects: sending emails or something like that. In those cases you do need a fake or mock of that library to test against.
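A sketch of that exception, using `unittest.mock.patch` to replace the standard library’s smtplib so the test sends no real mail (`send_welcome_email` and the addresses are invented for illustration):

```python
import smtplib
from unittest.mock import patch


def send_welcome_email(address):
    """Edge code with a real-world side effect: sends an email."""
    with smtplib.SMTP("localhost") as server:
        server.sendmail("noreply@example.com", [address],
                        "Subject: Welcome\n\nHello!")


# Patch out SMTP so the test can run anywhere, with no mail sent.
with patch("smtplib.SMTP") as mock_smtp:
    send_welcome_email("user@example.com")
    # The `with` statement in the code under test enters the context
    # manager, so the server is the __enter__ return value.
    server = mock_smtp.return_value.__enter__.return_value
    server.sendmail.assert_called_once()
    (sender, recipients, body) = server.sendmail.call_args.args
```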

When testing in integration with external code, you should design your code to encapsulate those external dependencies. Minimize the amount of your code that needs to be tested in integration with complex external dependencies and maximize the amount of your code that can be tested in isolation.

When to write isolated tests

In the core of your codebase, where you’re just dealing with your own modules depending on other modules of your own, you almost always want to test in isolation. But you should design your code so that you only need a few simple mocks, or if possible even no mocks at all, in order to test it in isolation.

One exception is when the module under test depends on a value object that’s so simple that it doesn’t need mocking, simple enough that the advantages of mocking and the disadvantages of integrating don’t come into play. In that case, of course, you don’t need to mock it. Knowing the pros and cons of mocking vs integrating (see above) will enable you to decide when to do this.

Personally, I’d apply a pretty strict definition of whether something qualifies as a value object for this purpose: it should be an object that’s under our control (not one that comes from an external codebase), and it should be a simple (not complex or deeply nested) data object, with at most a few very simple “computed property”-type methods: nothing-in, value-out code that does nothing complex, operates only on the object’s internal data, takes no arguments, and has no side effects.
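A sketch of something that I think would qualify under that definition (`Money` and its field names are invented for illustration): a frozen data object with one trivial computed property. A module that depends on it can just use the real thing in its isolated tests.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Money:
    """Simple value object: our own code, flat data, immutable."""
    amount_cents: int
    currency: str

    @property
    def dollars(self) -> float:
        # Nothing in, value out: no arguments, no side effects,
        # operates only on the object's own data.
        return self.amount_cents / 100


# No mock needed -- tests of code that depends on Money just build one:
price = Money(1299, "USD")
```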

Listen to what your bad mocks are telling you

The problem with mocks getting out of sync with the real code that they mock and causing false-positive test passes isn’t a big problem if you only have a few simple mocks. It’s at its worst when you have a lot of complex and deeply nested mocks.

I already mentioned this above but it’s worth repeating another way - having a lot of complex mocks in your tests is a problem that you shouldn’t put up with. But it’s a symptom rather than a cause. The cause of numerous complex mocks could be one of two things:

  1. Choosing to write isolated tests when you should be writing integrated tests. If the complex interface that you’re mocking is a third-party library, then maybe you should be testing in integration with that library instead. Or:

  2. Bad code design. If the complex interface that you’re mocking is part of your own codebase, then maybe your code is badly designed and you should change it so that it can be tested in isolation with only a few simple mocks, if any.


That’s my opinion about when to use isolated tests and mocks! Admittedly, mostly formed by listening to what others have said. If you’ve made it this far I’d really encourage you to watch Integrated Tests Are A Scam and Boundaries.