Testing Through Different Lenses

08 Jun 2012 · 4 min read

Testing isn't a single hurdle to clear - it's a series of lenses, each revealing different truths about your app. From the microscopic focus of unit tests to the broad perspective of acceptance testing, each level builds confidence differently, and together they form the ecosystem of trust that lets us ship with confidence.

A photo of a lens focusing on one particular point off into the distance

This post will explore each level - its scope, its purpose, the risks it mitigates and who's responsible for it.

Bringing the Testing Levels into Focus

Unit Testing

Scope: Individual pieces of functionality.
Purpose: Verify the smallest pieces behave correctly.
Risk Mitigated: Catches regressions early, prevents logic errors from spreading.
Who's responsible: Developers.

Unit testing verifies that, given a particular app state, a specific and isolated part of your code behaves as expected. A unit can be defined in many ways, but the smaller the better. In iOS, a single method is often treated as the unit. Each test acts as a written contract: the unit must satisfy its conditions for the test to pass. If the code contains multiple branches (e.g., if/else), you'll need multiple tests to cover each path.
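As a minimal sketch of that contract idea, here's a hypothetical `DiscountCalculator` with an if/else branch, and one XCTest per path (the type, values, and test names are illustrative, not from any real codebase):

```swift
import XCTest

// Hypothetical unit under test: a calculator with two branches.
struct DiscountCalculator {
    func discount(for total: Double) -> Double {
        // Orders of £100 or more get 10% off; smaller orders get nothing.
        total >= 100 ? total * 0.1 : 0
    }
}

final class DiscountCalculatorTests: XCTestCase {
    // One test per branch, so each path through the if/else is covered.
    func testDiscountIsAppliedForLargeOrders() {
        let sut = DiscountCalculator()
        XCTAssertEqual(sut.discount(for: 200), 20)
    }

    func testNoDiscountForSmallOrders() {
        let sut = DiscountCalculator()
        XCTAssertEqual(sut.discount(for: 50), 0)
    }
}
```

If either test fails, the contract is broken for that branch, and the failure points you straight at the offending path.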

Unit tests are organised into regression suites within a dedicated test target. These suites can be grouped by focus - for example, all networking tests in one suite - so you can run them on demand to confirm that new changes haven't broken existing functionality.

Each unit test should run in complete isolation. It must set up its own prerequisites - such as test data, test doubles, or user permissions - and clean up afterwards so no shared state leaks into other tests. This discipline ensures that unit tests are repeatable, reliable, and provide fast feedback when something goes wrong.
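One way to sketch that discipline in XCTest is with `setUp`/`tearDown`, assuming a hypothetical `FavouritesStore` backed by `UserDefaults`; the throwaway suite name keeps test data out of the app's real defaults:

```swift
import XCTest

// Hypothetical unit under test: a small store backed by UserDefaults.
final class FavouritesStore {
    private let defaults: UserDefaults
    init(defaults: UserDefaults) { self.defaults = defaults }

    func add(_ item: String) {
        var items = defaults.stringArray(forKey: "favourites") ?? []
        items.append(item)
        defaults.set(items, forKey: "favourites")
    }

    func contains(_ item: String) -> Bool {
        (defaults.stringArray(forKey: "favourites") ?? []).contains(item)
    }
}

final class FavouritesStoreTests: XCTestCase {
    private var defaults: UserDefaults!
    private var sut: FavouritesStore!

    override func setUp() {
        super.setUp()
        // A dedicated suite isolates this test's data from the real app.
        defaults = UserDefaults(suiteName: "FavouritesStoreTests")
        sut = FavouritesStore(defaults: defaults)
    }

    override func tearDown() {
        // Remove the suite so no state leaks into the next test.
        defaults.removePersistentDomain(forName: "FavouritesStoreTests")
        defaults = nil
        sut = nil
        super.tearDown()
    }

    func testAddingAFavouritePersistsIt() {
        sut.add("espresso")
        XCTAssertTrue(sut.contains("espresso"))
    }
}
```

Because every prerequisite is created in `setUp` and destroyed in `tearDown`, the test passes or fails for the same reason every time it runs.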

Now let's look at the next level up.

Integration Testing

Scope: Multiple units.
Purpose: Ensure components work together.
Risk Mitigated: Prevents integration drift, surfaces mismatched assumptions between modules.
Who's responsible: Developers.

Integration testing verifies that multiple units of functionality work together as intended. While the structure of an integration test often resembles a unit test, the key difference is that the interaction between components is real - test doubles are used sparingly, if at all. The focus is not on the internal logic of each unit (that's the job of unit tests), but on whether their collaboration produces the expected outcome. In other words, integration tests assume each unit is correct and concentrate on the seams between them.

Like unit tests, integration tests are organised into regression suites within a dedicated test target. It's tempting to mix unit and integration tests since they look similar, but keeping them separate avoids confusion and ensures you can run the right level of checks when needed.

If you want to share test doubles between Unit and Integration test targets, check out Hitting the Target with TestHelpers for details on how to do so.

Like unit tests, each integration test should still run in isolation. It must set up its own prerequisites - such as test data, limited use of test doubles, or user permissions - and clean up afterwards so no shared state leaks into other tests. This discipline makes integration tests repeatable and reliable, even though they operate at a broader scope than unit tests.
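To make the "seams" idea concrete, here's a hypothetical sketch where a real loader and a real cache collaborate, with a test double only at the network boundary (all the types here are invented for illustration):

```swift
import XCTest

// Hypothetical units: a real loader and a real cache, with a
// test double only at the network boundary.
struct Article: Equatable, Decodable {
    let id: Int
    let title: String
}

protocol HTTPClient { func fetch() throws -> Data }

struct StubHTTPClient: HTTPClient {
    let data: Data
    func fetch() throws -> Data { data }
}

final class InMemoryArticleCache {
    private(set) var cached: [Article] = []
    func save(_ articles: [Article]) { cached = articles }
}

// The unit whose collaboration with the cache is under test.
struct ArticleLoader {
    let client: HTTPClient
    let cache: InMemoryArticleCache

    func load() throws -> [Article] {
        let articles = try JSONDecoder().decode([Article].self, from: client.fetch())
        cache.save(articles)
        return articles
    }
}

final class ArticleLoaderIntegrationTests: XCTestCase {
    func testLoaderParsesAndCachesArticles() throws {
        let json = Data(#"[{"id":1,"title":"Lenses"}]"#.utf8)
        let cache = InMemoryArticleCache()
        let sut = ArticleLoader(client: StubHTTPClient(data: json), cache: cache)

        let articles = try sut.load()

        // The seam worked: the parsed output reached the real cache.
        XCTAssertEqual(articles, [Article(id: 1, title: "Lenses")])
        XCTAssertEqual(cache.cached, articles)
    }
}
```

Notice that neither the decoding logic nor the cache internals are probed directly - the assertions only care that the two units hand data to each other correctly.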

Now let's look at the next level up.

System Testing

Scope: Whole app.
Purpose: Validate end-to-end functionality.
Risk Mitigated: Identifies environment issues, ensures workflows behave as expected.
Who's responsible: Testers (with developers in support).

System testing verifies that the app works as a complete system, not just as a collection of parts. It focuses on end‑to‑end functionality and non‑functional requirements such as performance, reliability, and usability. System tests often mirror real business processes - for example, creating an account, making a purchase, or completing a workflow from start to finish. These tests are typically carried out by dedicated testers with minimal input from developers. System testing is most often associated with manual testing, though automated end-to-end tests are increasingly used to complement it.

System testing is often the first time code from different teams meets in a shared environment, so the environment itself must also be tested, not just the expected interactions. As such, the test environment should closely mirror what production will look like. If it doesn't, you risk the classic problem of “it worked on the tester's device but not for end-users.”

System tests are longer-lived than unit or integration tests, but each should still run independently. They must set up their own prerequisites - such as dedicated test accounts - and clean up where possible. However, because system tests often run in shared environments, strict isolation isn't always achievable. This looser control can make results less deterministic, so failures at this level require deeper investigation to distinguish between code issues and environment issues.
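An automated system test at this level might look like the XCUITest sketch below - a login flow driven through the real UI. The accessibility identifiers, launch argument, and credentials are all assumptions for illustration, not a prescription:

```swift
import XCTest

// A sketch of an automated end-to-end check. The identifiers
// ("email", "password", "Log In", "Home") and the launch argument
// are hypothetical - they assume the app exposes them.
final class LoginFlowUITests: XCTestCase {
    func testUserCanLogInAndReachHome() {
        let app = XCUIApplication()
        // A flag the app could read to point at a dedicated test account.
        app.launchArguments = ["-useTestAccount"]
        app.launch()

        app.textFields["email"].tap()
        app.textFields["email"].typeText("tester@example.com")
        app.secureTextFields["password"].tap()
        app.secureTextFields["password"].typeText("not-a-real-password")
        app.buttons["Log In"].tap()

        // Waiting (rather than asserting immediately) absorbs the
        // variable latency of a shared backend environment.
        XCTAssertTrue(app.staticTexts["Home"].waitForExistence(timeout: 5))
    }
}
```

Even with a passing suite like this, a red run still needs triage: the `waitForExistence` timeout failing could mean broken code or a slow, misconfigured environment.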

Now let's look at the next level up.

Acceptance Testing

Scope: Whole app.
Purpose: Confirm the product meets requirements and is ready for release.
Risk Mitigated: Avoids misaligned expectations, ensures the product is fit for acceptance.
Who's responsible: Stakeholders.

Acceptance testing validates that the app meets the agreed requirements and can be accepted as functionally complete by its stakeholders. While system testing checks the product against specifications, acceptance testing checks it against stakeholder expectations. It often overlaps with system testing, but the emphasis here is on whether the product is good enough to release.

Acceptance tests are usually carried out by an independent test team, business stakeholders, or selected end‑users. The testing environment should mirror production as closely as possible, often with imported live data. In some cases, acceptance testing may even be performed in the production environment itself.

Two common forms of acceptance testing are Alpha and Beta:

Alpha

Alpha testing takes place in a controlled environment, often at the development site. End‑users are invited to perform real tasks with the app. This helps uncover hidden assumptions, misinterpretations of requirements, or unexpected usage paths.

Beta

Beta testing is the most widely recognised form of acceptance testing. The app is released to a sample group of end‑users in their own environments, where they use it as part of their normal workflow. This provides valuable feedback on real‑world performance and usability before a full release.

The Full Focus

From unit to acceptance, each level is a different lens. No single lens shows the whole picture, but together they form the ecosystem of trust that lets us ship with confidence.

What do you think? Let me know by getting in touch on Mastodon or Bluesky.