How To Win Over a Unit Testing Sceptic

01 Jun 2012 10 min read

Convincing a developer to write unit tests - especially if they've never done so before - can feel impossible. The pushback is often visceral. Like Sisyphus endlessly pushing that boulder uphill, you're met with the same objection every time:

"Writing unit tests will only slow me down. The user won't get the feature they need."

That instinct is worth taking seriously - and worth answering.

Painting of Sisyphus struggling to push a boulder uphill

This post will explore common objections that a sceptic could raise, the motivation behind each one, and how they can be overcome.

For details on unit testing as a process, read: How Unit Tests Protect Creativity and Speed Up Development.

Objections You'll Hear from a Sceptic

1. Good Developers Don't Need Tests

May Also Sound Like:
  • Only junior developers need to write tests.
  • Tests are a crutch for people who don't know what they're doing.

Motivation Behind the Objection

That writing tests is an admission of fallibility, and a sufficiently skilled developer should be able to produce correct code without needing a safety net.

Rebuttal

This objection treats testing as a confession of weakness, when it's actually a tool that shapes the code being written.

Development techniques like Dependency Injection, Isolation and Single-Responsibility Principle are all encouraged by adopting unit testing because they make unit testing easier. A unit test is as much a design tool as it is a verification tool.
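To make that design pressure concrete, here is a minimal sketch - in Python for brevity, since the idea is language-agnostic, and with all names invented for illustration. A unit that receives its clock as a dependency is deterministic under test, and that testability is exactly what pushes the dependency out of the method body and into the constructor:

```python
from datetime import datetime, timezone

class Greeter:
    """A unit that depends on the current time.

    Reaching for the system clock directly would make the expected
    output change depending on when a test runs. Injecting the clock
    (Dependency Injection) makes the unit deterministic under test.
    """

    def __init__(self, clock):
        # `clock` is any zero-argument callable returning a datetime.
        self._clock = clock

    def greeting(self):
        hour = self._clock().hour
        return "Good morning" if hour < 12 else "Good afternoon"

# Production wiring: the real clock.
production_greeter = Greeter(lambda: datetime.now(timezone.utc))

# Test wiring: a fixed clock, so the expected output is known in advance.
def test_greeting_before_noon():
    greeter = Greeter(lambda: datetime(2012, 6, 1, 9, 0))
    assert greeter.greeting() == "Good morning"

def test_greeting_after_noon():
    greeter = Greeter(lambda: datetime(2012, 6, 1, 15, 0))
    assert greeter.greeting() == "Good afternoon"
```

The tests only exist because the design allows them to - which is the sense in which the test shaped the code.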

How to Prevent the Objection From Coming True
  1. Make test writing a normal part of how the team produces code, not an optional add-on.
    1.1. Include tests in the definition of done for every feature ticket.
    1.2. Treat a pull request without tests for new behaviour the same as a pull request without the behaviour itself - incomplete.
  2. Frame tests as a design tool.
    2.1. Treat tests that are difficult to write or read as a code smell.
    2.2. Examine the SUT (system under test) implementation and look for ways to simplify it.
  3. Lead by example.
    3.1. The most experienced developers on the team should be the most visible test writers.
    3.2. When a senior developer writes tests for their own code, it removes the framing that testing is something only junior developers need.

2. I Can Just Test the App Myself

May Also Sound Like:
  • I wrote the code, I know what it does.
  • I don't need a test to tell me whether it works.

Motivation Behind the Objection

That the developer's personal expertise in how the app works is a sufficient substitute for an automated test suite, and the time spent making the code testable would be better spent shipping features.

Rebuttal

This objection assumes the sceptic's mental model of the code will still be accurate when the next person needs to change it.

Knowledge Silos are dangerous to the health of any organisation - and the sceptic at the centre of the silo is rarely spared the consequences. A mental model of a codebase decays with time, and the questions colleagues will eventually ask about code the sceptic wrote six months ago are the same questions they'll be asking themselves when they come back to it. Unit testing helps to prevent knowledge silos by acting as a queryable, runnable form of documentation within a project: unit tests distribute understanding of the code by describing what the code should do rather than just what it does.

From a purely selfish point-of-view, a unit test suite reduces the demands on the sceptic as some of the questions that a new developer might have needed to ask can now be answered by looking through the unit test suite.
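As a sketch of what that queryable documentation looks like in practice (Python for brevity; the function and its rules are invented for illustration), the test names below record a behavioural decision that would otherwise live only in one developer's head:

```python
def normalise_username(raw):
    """A behavioural decision captured in code: whitespace around a
    username is trimmed, but internal whitespace is preserved."""
    return raw.strip()

# The tests record the decision in their names. A new colleague can
# answer "what happens to padded usernames?" by reading the suite,
# not by interrupting whoever wrote the code.
def test_surrounding_whitespace_is_trimmed():
    assert normalise_username("  alice  ") == "alice"

def test_internal_whitespace_is_preserved():
    assert normalise_username("alice smith") == "alice smith"
```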

How to Prevent the Objection From Coming True
  1. Treat unit tests as the canonical record of how the SUT is expected to behave.
    1.1. When a developer makes a behavioural decision about a method, the test for that method should capture the decision.
    1.2. When the expected behaviour changes, update the test in the same change as the code.

3. We Have Testers for That

May Also Sound Like:
  • Unit testing isn't a developer's responsibility.
  • That's not my job - we have a test team for a reason.
  • Why am I doing the testers' work?

Motivation Behind the Objection

That all testing is interchangeable, and since the team already employs people whose job is to test the app, any additional testing by developers is duplicated effort.

Rebuttal

This objection treats unit tests and manual tests as substitutes, when they're complements that answer different questions.

A unit test suite is great at checking that logic works as expected. Humans are great at what a test suite never will be: exploratory, judgment-based testing - the weird interactions that only show up when a real person pokes at the app sideways, the usability issues no automated check can spot.

A team that forces the logic-checking style of testing onto manual testers gives them a lower-quality app to test and reduces the time available for exploratory work.

  • The app is of lower quality because code changes get merged without any automated check that the logic still works.
  • The exploratory time is reduced because testers end up spending it on mechanical regression checks.

How to Prevent the Objection From Coming True
  1. Run the full unit test suite before any build is handed to manual testers.
    1.1. Treat a build with failing unit tests as not ready for manual testing, the same way a build that won't compile isn't ready.
    1.2. This protects the testers' time by ensuring they only test builds that have already cleared the mechanical regression bar.

4. The Tests Will Just Rot Anyway

May Also Sound Like:
  • Maintaining more code is painful.
  • I've seen test suites rot before.
  • Tests always end up commented out.

Motivation Behind the Objection

That because unit tests are not production code, they sit outside the real work of the project and will inevitably be forgotten about.

Rebuttal

This objection describes a real failure mode, but misattributes the cause - tests rot because teams let them, not because rot is intrinsic to testing.

Rot is caused by neglect. It isn't enough to write a unit test suite and then stick it in a corner to only be executed when someone remembers. An effective unit test suite has the same weight as production code. To avoid rot, adopting unit testing needs to be accompanied by workflow changes within the team to ensure that when a unit test fails, it can't be ignored.

How to Prevent the Objection From Coming True
  1. Make the test suite an automated part of the development process.
    1.1. Run the full suite in CI on every commit.
    1.2. Configure the pull request gate so that a failing test blocks the merge.
    1.3. Surface test failures in the same place where the team already looks for build failures.
  2. Establish a team norm that broken tests are either fixed or deleted, never disabled.
    2.1. A disabled test is worse than no test - it creates the illusion of coverage where none exists.
    2.2. If a test can't be fixed in the same change that broke it, raise a ticket and treat it with the same urgency as a build failure.
  3. Treat test code with the same care as production code in code review.
    3.1. Review test changes for clarity, intent, and coverage of edge cases.
    3.2. Push back on tests that assert on implementation rather than behaviour, since those are the ones most likely to rot first.
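The merge gate in step 1.2 can be sketched as follows - in Python for brevity, with the `subprocess` call standing in for however your CI invokes the real suite (the "suites" below are simulated one-liners, not a real test runner):

```python
import subprocess
import sys

def merge_allowed(test_command):
    """Run the test suite; only a zero exit status allows the merge."""
    result = subprocess.run(test_command)
    return result.returncode == 0

# Simulated suites - stand-ins for a real `pytest` or `xcodebuild test` run.
green_suite = [sys.executable, "-c", "assert 1 + 1 == 2"]  # passes
red_suite = [sys.executable, "-c", "assert 1 + 1 == 3"]    # fails

assert merge_allowed(green_suite)      # green suite: merge proceeds
assert not merge_allowed(red_suite)    # red suite: merge is blocked
```

The important property is that the gate is automatic: a red suite blocks the merge without anyone having to notice, remember, or argue.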

5. Every Time I Change the Code, I Have to Fix Ten Tests

May Also Sound Like:
  • Tests will make refactoring harder!
  • I spent the morning fixing tests instead of shipping features.
  • Half my test-writing time is maintenance, not new tests.

Motivation Behind the Objection

That unit tests are inherently brittle, and the cost of maintaining them under change outweighs whatever safety they provide.

Rebuttal

This objection describes brittle tests, not unit testing.

Unit testing isn't a silver bullet. Like any code, unit tests can be well written or poorly written.

The instinct of a new test writer is to assert on everything they can see, including internal state, the order in which methods are called, and the exact return values of helper functions the unit happens to use. Tests written that way feel thorough but break at the first refactor - they've encoded the implementation rather than the contract. Learning where that line sits is one of the core skills of writing tests well. Every developer who writes robust tests today went through a phase of writing brittle ones.
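The difference is easiest to see side by side. In this Python sketch (the class and its collaborator are invented for illustration), both tests pass today - but only one survives a behaviour-preserving refactor:

```python
from unittest.mock import MagicMock

class PriceFormatter:
    """Formats an amount using a symbol from an injected currency source."""

    def __init__(self, currency_source):
        self._currency_source = currency_source

    def format(self, amount):
        symbol = self._currency_source.symbol()
        return f"{symbol}{amount:.2f}"

def brittle_test():
    # Over-specified: pins the exact collaborator interaction. If
    # format() is refactored to cache the symbol, this test breaks
    # even though no observable behaviour changed.
    source = MagicMock()
    source.symbol.return_value = "$"
    PriceFormatter(source).format(5)
    source.symbol.assert_called_once()  # asserts on *how*, not *what*

def robust_test():
    # Contract-focused: asserts only on the observable output, so any
    # refactor that preserves behaviour keeps it green.
    source = MagicMock()
    source.symbol.return_value = "$"
    assert PriceFormatter(source).format(5) == "$5.00"
```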

How to Prevent the Objection From Coming True
  1. Write tests against the external behaviour of the unit, not its internals.
    1.1. Assert on return values, observable side effects, and state visible to callers.
    1.2. Avoid asserting on private methods, internal state, or the specific sequence of calls a unit makes to its collaborators.
  2. When a refactor breaks a test, ask whether the behaviour changed before changing the test.
    2.1. If the behaviour didn't change and the test broke anyway, the test was over-specified - rewrite it to assert on the contract, not the implementation.
    2.2. If the behaviour did change, the test is doing its job - update it to reflect the new contract.

6. Tests Force Me to Write Code That's Harder to Read

May Also Sound Like:
  • I have to jump through three files to find out what a method does.
  • Why can't I just call the singleton directly?

Motivation Behind the Objection

That the legibility cost of making code testable is too high to be worth the testing benefit.

Rebuttal

This objection mistakes hidden complexity for simplicity - code that looks self-contained often isn't, and unit testing makes that dishonesty visible.

Supporting unit testing often means changing the shape of production code. Key techniques that support unit testing, like Polymorphism and Dependency Injection, introduce more indirection into the code than would otherwise be there. At first glance, this additional indirection makes the code harder to read - code that doesn't control its dependencies often ends up with everything contained in the same method or class. It looks self-contained when it isn't. Consider this method that produces a formatted string containing the status of a download:

- (NSString *)formatDownloadStatus
{
    return [NSString stringWithFormat:@"Download Status: %@", [Downloader.shared downloadStatus]];
}

You can read this top to bottom without jumping anywhere - which is exactly the property the sceptic is defending. But the method has a hidden dependency on Downloader, on its singleton nature, and on whatever state Downloader.shared happens to be in when this gets called. The method looks simple because the complexity has been pushed somewhere you can't see from here. And because Downloader isn't passed in, there's no straightforward way for a test to control what [Downloader.shared downloadStatus] returns - which means there's no way to test what formatDownloadStatus returns either. The hidden dependency that makes the method look clean is the same thing that makes it untestable.

A refactored version using Dependency Injection takes the status as an argument instead:

- (NSString *)formatDownloadStatusFor:(NSString *)downloadStatus
{
    return [NSString stringWithFormat:@"Download Status: %@", downloadStatus];
}

This version is harder to read in isolation - you have to look at the caller to see where downloadStatus comes from. But the dependency is now part of the method's signature instead of being buried in its body. Nothing has been added; something that was always there has been made visible. The trade the sceptic is being offered is a small, recoverable cost in local legibility for a large, compounding gain in honesty about what the code depends on.
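The payoff shows up immediately in a test. Here is the same pattern sketched in Python (the names mirror the Objective-C example above): because the status is injected, the test controls the input completely, with no singleton state to arrange.

```python
# Python analogue of the refactored method: the status is a parameter,
# not a call to a shared singleton.
def format_download_status_for(download_status):
    return f"Download Status: {download_status}"

# No singleton to set up, no hidden state - just input and output.
def test_complete_status():
    assert format_download_status_for("Complete") == "Download Status: Complete"

def test_failed_status():
    assert format_download_status_for("Failed") == "Download Status: Failed"
```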

How to Prevent the Objection From Coming True
  1. Pass dependencies into a unit rather than letting it reach for them.
    1.1. Constructors, method parameters, and protocol-typed properties are all valid ways of injecting a dependency - whichever fits the unit's lifecycle best.
    1.2. The goal is that the dependencies a unit uses are visible at the boundary of the unit, not buried in its body.
  2. Treat any singleton call inside a method as a hidden dependency that the method's signature is failing to declare.
    2.1. When you spot one, ask whether the singleton can be passed in instead.
    2.2. If it can't, accept that the method has dependencies its signature isn't honest about, and that the cost will be paid in testability.

7. What's the Point if Bugs Still Ship?

May Also Sound Like:
  • Tests pass, and the app is still broken.
  • The suite gives us false confidence.

Motivation Behind the Objection

That a test suite which lets any bug through has failed at its job, and a practice that can't deliver correctness isn't worth the cost.

Rebuttal

This objection holds unit tests to a standard no form of testing meets. No amount of testing can guarantee that any reasonably complex app is bug-free - a team with a hundred QAs will ship bugs. If "bugs still escape" disqualifies a practice, every testing practice is disqualified.

A green unit test suite only means "what was checked behaved as expected for a given input". It doesn't mean the app works overall.

What a unit test suite does give you is a narrower search space when a bug appears. A failure in the suite points at a specific unit; a bug that escapes the suite points at the seams between units, or at a layer the suite was never meant to cover.

Unit tests are one layer of a complete testing strategy, not the whole stack.

How to Prevent the Objection From Coming True
  1. When a bug escapes, classify it before reacting to it.
    1.1. If a unit test should have caught it, the gap is in the suite - add the test.
    1.2. If no unit test could have caught it, the gap is in the testing strategy - add coverage at the layer that would have.
  2. Write the failing test before the fix.
    2.1. The test reproduces the bug and proves the fix works.
    2.2. The same test then permanently guards against the bug returning, so every escape makes the suite sharper rather than weaker.
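That workflow can be sketched as follows (Python for brevity; the pricing function and the bug are invented for illustration). The first test below failed against the buggy version, drove the fix, and now permanently guards the behaviour:

```python
# Suppose a bug report says: discounts over 100% produce negative prices.
# Step 1: write a test that reproduces the bug (it fails against the
# buggy version). Step 2: fix the code. The same test then guards
# against the bug ever returning.

def discounted_price(price, discount_percent):
    # The fix: clamp the discount into the valid 0-100 range.
    discount_percent = max(0, min(100, discount_percent))
    return price * (1 - discount_percent / 100)

def test_discount_over_100_percent_does_not_go_negative():
    # This assertion failed before the clamp was added.
    assert discounted_price(50.0, 150) == 0.0

def test_normal_discount_still_applies():
    assert discounted_price(100.0, 25) == 75.0
```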

If the above rebuttals have fallen on deaf ears, it's time to show, not tell. Start small: pick a method with a few if and else branches, and write a single test for the happy path. No test doubles. No complex setup. Just verify that for a given input, you get the expected output. It's hard to argue against a well-crafted unit test.
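That first test can be as small as this (Python for brevity; the shipping rules are invented for illustration):

```python
def shipping_cost(order_total, is_express):
    """A method with a few branches - the kind of unit to start with."""
    if order_total >= 50:
        base = 0   # free standard shipping over the threshold
    else:
        base = 5
    if is_express:
        base += 10
    return base

# One test, one happy path. No test doubles, no setup.
def test_small_order_standard_shipping():
    assert shipping_cost(20, is_express=False) == 5
```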

So Is It Worth The Struggle?

Convincing someone to start unit testing may feel like pushing that boulder. But unlike Sisyphus, you're not cursed to push forever. The hardest part of overcoming unit testing scepticism is getting that first, fully supported unit test into the project. Once the test is in, the sceptic has to argue against concrete value rather than an abstract practice - and from there, there will only ever be one winner.

Read "How Unit Tests Protect Creativity and Speed Up Development" for how to write that first unit test.

What do you think? Let me know by getting in touch on Mastodon or Bluesky.