A Systematic Approach to Unit Test Identification
The critical part of unit testing isn't how you write the test - it's how you identify what needs testing in the first place. Too often, developers rely on intuition to decide what deserves a test and what doesn't. This intuitive approach leads to inconsistent coverage, missed edge cases, and bugs slipping through where they should have been caught.
Identifying test cases isn't an art form that requires years of experience to master - it's a skill.
In this post, we'll replace intuition with a systematic process. Through an outside-in approach, you'll learn exactly where to look for test cases and how to extract them methodically - turning guesswork into confidence. Once we've identified the test cases, we will be ready to turn them into unit tests.
If you are unsure what a unit test is, read Testing Through Different Lenses first, then come back here and continue.
Start from the outside and move inwards
1. Read the specification
If you have a specification for the functionality you want to test, it is your best source for identifying which test cases you will need to write. What makes the specification such a powerful source of test cases is that it exists outside of the code. When writing unit tests, there is a danger of writing them for how the code is rather than for what the code should be. Basing your test cases on the specification removes that danger altogether.
If you don't have a specification, this section is still useful as the techniques described below can be applied directly to code as well.
Let's look at an example specification for validating a username during registration:
Username validation rules:
a. Usernames must be between 2 and 24 characters in length, inclusive.
b. Usernames must start with a letter.
c. Usernames must only contain alphanumeric characters.
d. Usernames are case-insensitive.
e. Usernames must be unique across all registered usernames.
If you have a specification that isn't presented as a bullet-point list, I recommend turning what you have into one. Lists are much easier to scan than a block of prose.
That's a pretty good specification. Let's work through each rule in order and identify what test cases are needed for each.
A. Usernames must be between 2 and 24 characters in length, inclusive
This rule is to control the acceptable length of a username.
As this rule is about the acceptable and unacceptable length of input, we can employ Equivalence Partitioning and Boundary Value Analysis to root out those test cases.
Equivalence Partitioning is a software testing technique that divides input data into logical groups, or partitions, where the system is expected to behave similarly. Instead of testing every possible input, you test one representative value from each group - because if one passes, the rest are likely to pass too.
Why it's useful:
- Reduces test cases while maintaining coverage.
- Helps identify missing validation logic.
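To make the idea concrete, here is a language-agnostic sketch in Python (the validator later in this post is Objective-C); the `length_partition` helper is purely illustrative, not part of any library:

```python
def length_partition(username):
    """Classify a username into one of the three length partitions
    for the rule '2 to 24 characters in length, inclusive'."""
    if len(username) < 2:
        return "invalid-too-low"
    if len(username) > 24:
        return "invalid-too-high"
    return "valid"

# One representative value per partition is enough to cover its group.
assert length_partition("t") == "invalid-too-low"        # 1 character
assert length_partition("testusername") == "valid"       # 12 characters
assert length_partition("t" * 25) == "invalid-too-high"  # 25 characters
```

If one value in a partition passes, the others in that partition are expected to as well - which is what lets us shrink the input space to one representative per group.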
For the above rule, we can identify three logical groups:
Group | Range | Representative Value |
---|---|---|
Valid | 2...24 | testusername |
Invalid (Too Low) | < 2 | t |
Invalid (Too High) | > 24 | testusernaaaaaaaaaaaaaame |
Boundary Value Analysis is a software testing technique that focuses on the edges, or boundaries, of input domains where errors are most likely to occur. Instead of testing every possible input, Boundary Value Analysis targets the values just inside, on, and just outside the boundaries - because that's where systems often break.
Why it's useful:
- Reveals off-by-one errors.
- Exposes incorrect assumptions about inclusive/exclusive ranges.
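Generating the boundary lengths for an inclusive range is mechanical, as this illustrative Python sketch shows; `boundary_values` is a hypothetical helper name, not a library function:

```python
def boundary_values(minimum, maximum):
    """Lengths worth testing for an inclusive [minimum, maximum] range:
    just outside, on, and just inside each boundary."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

lengths = boundary_values(2, 24)
# -> [1, 2, 3, 23, 24, 25]

# Build a username of each boundary length, starting with a letter
# so that only the length rule is exercised.
usernames = ["t" * length for length in lengths]
```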
For the above rule, after Equivalence Partitioning was applied, we identified three logical groups; the boundaries of each group are:
Boundaries | Value |
---|---|
Just outside valid minimum | t |
Valid minimum | te |
Just inside valid minimum | tes |
Just inside valid maximum | testusernaaaaaaaaaaaame |
Valid maximum | testusernaaaaaaaaaaaaame |
Just outside valid maximum | testusernaaaaaaaaaaaaaame |
Combining Equivalence Partitioning with Boundary Value Analysis has resulted in us identifying our first 6 test cases. Well done!
For more details on Equivalence Partitioning and Boundary Value Analysis, read this post.
B. Usernames must start with a letter
This rule controls the first character of a username.
As this rule is about the acceptable and unacceptable groups of input, we can again employ Equivalence Partitioning. Boundary Value Analysis isn't so useful on this one, so we will skip it.
Group | Input | Representative Value |
---|---|---|
Valid | Beginning with letter | testusername |
Invalid | Beginning with number | 1testusername |
We can add two more test cases to our total - 8.
You might be thinking that there is some overlap between the tests that we've identified, and you'd be right. With some of the logical groups, we are providing the same input. This overlap isn't a problem at the identification stage - it's better to capture everything first, then consolidate intelligently. We'll address this after we've examined the complete picture.
C. Usernames must only contain alphanumeric characters
This rule is to control the acceptable characters that can be present in a valid username.
There is a little ambiguity in this rule about whether a username will always need to be a mix of letters and numbers or if it can consist of only letters. Ambiguity is the nemesis of good testing. So where you see ambiguity, be brave and call it out. The specification should be clear about the intended behaviour. In this case, it was confirmed that a username can contain a mixture of both letters and numbers, or only letters.
As this rule is about the acceptable and unacceptable groups of input, we can again employ Equivalence Partitioning. Again, Boundary Value Analysis isn't so useful on this one, so we will skip it.
For the above rule, we can identify four logical groups:
Group | Input | Representative Value |
---|---|---|
Valid | Only letters | testusername |
Valid | Mixed alphanumeric | testusername1234 |
Invalid | Only numbers | 123456 |
Invalid | Contains non-alphanumerical | testusername@1234 |
We can add four more test cases to our total - 12.
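As a quick illustrative sketch (again in Python rather than this post's Objective-C), these groups map onto simple character checks; `alnum_partition` is a made-up name for illustration only:

```python
def alnum_partition(username):
    """Classify a username against rule C's logical groups."""
    if username.isdigit():
        return "invalid-only-numbers"
    if not username.isalnum():
        return "invalid-non-alphanumeric"
    return "valid"

assert alnum_partition("testusername") == "valid"
assert alnum_partition("testusername1234") == "valid"
assert alnum_partition("123456") == "invalid-only-numbers"
assert alnum_partition("testusername@1234") == "invalid-non-alphanumeric"
```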
The more eagle-eyed among you will have been shouting at your screen that the Only numbers group overlaps with the Beginning with number group that was identified in B. Usernames must start with a letter. You are right, they do overlap. However, at this stage, we want to include both test cases in our identified list. When we come to write the unit tests, we will need to decide how to handle this overlap - merge the tests into one or write identical tests with different names? There is no need for you to do anything at the moment, as these test cases are all valid.
D. Usernames are case-insensitive
This rule means we should treat usernames as the same if they contain the same alphanumeric characters in the same order, regardless of case.
Up until now, our rules have been applied to the username directly; however, this rule implies the username will be passed to something else. As we are not writing the code yet, we don't know what that something else might be. And that's fine, we don't need to know that detail, so park what something else might be. Instead, focus on what's expected to be passed to that something - a username that is case-insensitive. In order to achieve a case-insensitive username, we can convert all usernames to lowercase and pass that lowercased username to something else.
We don't really need to use any identification techniques here, as the logic is pretty simple, e.g. testusername and TestUsername should be treated as the same username. So we need to add a test case for an input of TestUsername, resulting in testusername being the username output.
We can add one more test case to our total - 13.
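That expected input/output pair can be captured in one line - an illustrative Python sketch (the real implementation later in this post is Objective-C):

```python
def normalise(username):
    # Case-insensitivity is achieved by lowercasing the username
    # before passing it anywhere else.
    return username.lower()

assert normalise("TestUsername") == "testusername"
assert normalise("TestUsername") == normalise("testusername")
```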
E. Usernames must be unique across all registered usernames
This rule means the same username cannot be associated with multiple users.
Similar to D. Usernames are case-insensitive, this rule implies the username will be passed to something else. And again, like before, we will choose to ignore what that something else might be.
As this rule is about the acceptable and unacceptable groups of input, we can again employ Equivalence Partitioning. Again, Boundary Value Analysis isn't so useful on this one, so we will skip it.
Group | Input | Representative Value |
---|---|---|
Valid | Unknown username | testusername |
Invalid | Known username | testusername |
We can add two more test cases to our total - 15.
Now that we've worked our way through those rules, we have identified a total of 15 test cases. Notice how we haven't written any code yet. These test cases came purely from reading the specification and thinking about how our input data falls into valid and invalid logical groups.
With the specification exhausted, we now shift our attention inwards. Let's see what the method signature can reveal.
2. Method signatures reveal hidden requirements
A method's signature is more than just a name and some types - it's a contract. And like any contract, it carries hidden obligations that specs often gloss over. By scanning the signature, you can uncover whole categories of test cases:
- Parameter constraints - Can this parameter be nil? What happens if it is?
- Return behaviour - Does the method complete immediately, or does it hand work off asynchronously?
- Error propagation - How are failures communicated: via NSError, thrown exceptions, or silent returns?
Let's examine what we have:
@interface WBUsernameValidator : NSObject
- (void)isValidUsername:(NSString *)username
usernameCheckerService:(id<WBUsernameCheckerService>)usernameCheckerService
completionHandler:(void (^)(BOOL success, NSError * _Nullable error))completionHandler;
@end
Good news! We now know what something else is - a type that conforms to the WBUsernameCheckerService protocol.
From this method signature, we can identify one additional test case: the username can be passed in as nil - 16.
As well as that additional test case, we now know that all valid inputs should result in success being YES, and all invalid inputs should result in success being NO with error set to a non-nil value.
Now that we have examined the method signature, it's time to look at the implementation itself.
3. Implementation details matter for completeness
@implementation WBUsernameValidator
- (void)isValidUsername:(NSString *)username
usernameCheckerService:(id<WBUsernameCheckerService>)usernameCheckerService
completionHandler:(void (^)(BOOL success, NSError * _Nullable error))completionHandler
{
// Normalise case: Rule D: case-insensitive
NSString *normalised = [username lowercaseString];
// Rule A: Length between 2 and 24
if (username.length < 2 || username.length > 24) {
NSError *error = [NSError errorWithDomain:@"com.williamboles.validation"
code:100
userInfo:@{NSLocalizedDescriptionKey:
@"Username must be between 2 and 24 characters in length"}];
completionHandler(NO, error);
return;
}
// Rule C: Only alphanumeric characters
NSCharacterSet *nonAlphanumeric = [[NSCharacterSet alphanumericCharacterSet] invertedSet];
if ([normalised rangeOfCharacterFromSet:nonAlphanumeric].location != NSNotFound)
{
NSError *error = [NSError errorWithDomain:@"com.williamboles.validation"
code:101
userInfo:@{NSLocalizedDescriptionKey:
@"Username must contain only letters and numbers"}];
completionHandler(NO, error);
return;
}
// Rule B: Must start with a letter
unichar firstChar = [normalised characterAtIndex:0];
if (![[NSCharacterSet letterCharacterSet] characterIsMember:firstChar])
{
NSError *error = [NSError errorWithDomain:@"com.williamboles.validation"
code:102
userInfo:@{NSLocalizedDescriptionKey:
@"Username must start with a letter"}];
completionHandler(NO, error);
return;
}
// Rule E: Must be unique (delegate to service)
[usernameCheckerService isUniqueUsername:normalised
completionHandler:^(BOOL success, NSError * _Nullable error) {
if (success)
{
completionHandler(YES, nil);
}
else
{
// Pass through uniqueness failure or service error
NSError *finalError = error ?: [NSError errorWithDomain:@"com.williamboles.validation"
code:103
userInfo:@{NSLocalizedDescriptionKey:
@"Username is already taken"}];
completionHandler(NO, finalError);
}
}];
}
@end
Looking through isValidUsername:usernameCheckerService:completionHandler:, you might at first think that the implementation doesn't have any hidden test cases. But look again, because there is one hiding in there. Hidden implementation test cases are usually found where there are interactions with collaborators.
Looking at where we call usernameCheckerService, we can see that it can return an error. When it does, that error is passed back rather than the 103 - Username is already taken error that would usually be returned for an isUniqueUsername:completionHandler: failure. This behaviour needs a test case - 17.
Don't confuse identifying test cases with testing implementation details. The goal is to spot new inputs/branches, not to lock tests to private logic.
With the implementation examined, our list of identified test cases is complete. Before we write any unit tests, though, that list needs a tidy-up.
4. Consolidation prevents test bloat
Having gone through the specification, method signature and method body, we have identified 17 test cases; however, not all those test cases will become unit tests. Some of the test cases we have identified overlap with other ones - where this occurs, we will merge those test cases into one.
When consolidating, we should look for:
- Same logical group input.
- Same outcome.
- Same execution path.
If all three criteria match, we have a candidate for consolidation. When merging, I keep the test case with the more descriptive name - the one that tells future developers the most about what this functionality does.
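To see the criteria in action, here is an illustrative Python sketch (the records and names are invented for this example, not taken from the post) that buckets cases by input group, outcome, and execution path, then flags any bucket holding more than one case as a consolidation candidate:

```python
from collections import defaultdict

# Hypothetical records: (name, input group, outcome, execution path)
# for four of the valid-username test cases identified earlier.
cases = [
    ("minimum boundary", "valid letters", "success", "service-called"),
    ("starts with a letter", "valid letters", "success", "service-called"),
    ("only letters", "valid letters", "success", "service-called"),
    ("mix of letters and numbers", "valid mixed", "success", "service-called"),
]

buckets = defaultdict(list)
for name, group, outcome, path in cases:
    buckets[(group, outcome, path)].append(name)

# Any bucket with more than one case is a consolidation candidate.
candidates = [names for names in buckets.values() if len(names) > 1]
# -> [['minimum boundary', 'starts with a letter', 'only letters']]
```

Which case survives from each candidate bucket remains a judgement call - the mechanical part is only spotting the overlap.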
Test Case | Keep | Notes |
---|---|---|
1. Username is below the minimum length | ✅ | Unique error. |
2. Username is on the minimum boundary | ✅ | First happy-path outcome. |
3. Username is just inside the minimum boundary | ✅ | Unique input. |
4. Username is just inside the maximum boundary | ✅ | Unique input. |
5. Username is on the maximum boundary | ✅ | Unique input. |
6. Username is above the maximum length | ✅ | Unique error. |
7. Username begins with a letter | ❌ | Consolidate with test case 2. |
8. Username begins with a number | ✅ | Unique error. |
9. Username contains only letters | ❌ | Consolidate with test case 2. |
10. Username is a mix of letters and numbers | ✅ | Input data is unique. |
11. Username contains only numbers | ❌ | Consolidate with test case 8. |
12. Username contains non-alphanumeric characters | ✅ | Unique error. |
13. Usernames are case-insensitive | ✅ | Too important a detail to be hidden in input data. |
14. Username is unique | ❌ | Consolidate with test case 2. |
15. Username is already taken | ✅ | Unique error. |
16. Username is nil | ✅ | Unique input. |
17. Checker service returns an error | ✅ | Unique error. |
There is, of course, some subjectivity in the above list. For example, why does 2. Username is on the minimum boundary share the same input with 9. Username contains only letters rather than with 10. Username is a mix of letters and numbers? I feel that keeping 2. Username is on the minimum boundary and 10. Username is a mix of letters and numbers distinct will inform future developers more about the validation functionality than keeping 9. Username contains only letters around.
After consolidation, we have whittled our test cases down from 17 identified test cases to 13 actionable test cases. It's time to write some test code.
5. Writing unit tests for what's left
First up, we need to write a test double of WBUsernameCheckerService so that we can control its behaviour in our unit tests. Test doubles come in many shapes and sizes. We are going to write a Stub/Spy hybrid that will allow us to provide canned responses to any calls to isUniqueUsername:completionHandler: while also recording that the call was made:
@interface WBStubWBUsernameCheckerService : NSObject <WBUsernameCheckerService>
@property (nonatomic, assign) BOOL stubbedSuccess;
@property (nonatomic, strong, nullable) NSError *stubbedError;
// Tracking
@property (nonatomic, assign, readonly) BOOL wasCalled;
@property (nonatomic, copy, readonly, nullable) NSString *lastUsername;
@end
@interface WBStubWBUsernameCheckerService ()
@property (nonatomic, assign, readwrite) BOOL wasCalled;
@property (nonatomic, copy, readwrite, nullable) NSString *lastUsername;
@end
@implementation WBStubWBUsernameCheckerService
- (void)isUniqueUsername:(NSString *)username
completionHandler:(void (^)(BOOL success, NSError * _Nullable error))completionHandler
{
self.wasCalled = YES;
self.lastUsername = username;
if (completionHandler)
{
completionHandler(self.stubbedSuccess, self.stubbedError);
}
}
@end
In the above test double, we can:
- Set the response for any calls to isUniqueUsername:completionHandler: via the stubbedSuccess and stubbedError properties.
- Check whether isUniqueUsername:completionHandler: was called via the wasCalled property.
- Check what username value was passed into isUniqueUsername:completionHandler: via the lastUsername property. This property will be used for checking that the username is normalised.
Yes, we're writing plumbing here, but trust me, it pays off when your unit tests stop flaking.
Speaking of which, it's time to write the first test. We'll start with the test lifecycle and shared properties:
@interface WBUsernameValidatorTests : XCTestCase
// 1
@property (nonatomic, strong) WBUsernameValidator *sut;
@property (nonatomic, strong) WBStubWBUsernameCheckerService *stubService;
@end
@implementation WBUsernameValidatorTests
#pragma mark - Lifecycle
- (void)setUp
{
[super setUp];
// 2
self.sut = [WBUsernameValidator new];
self.stubService = [WBStubWBUsernameCheckerService new];
}
- (void)tearDown
{
// 3
self.sut = nil;
self.stubService = nil;
[super tearDown];
}
@end
Let's break down what the above code is doing:
- Two properties to hold our common instances between each test.
- Setting up those common instances to simplify the tests.
- Destroying the common instances between each test execution.
When writing unit tests, I normally split my tests into two sections:
- Happy-paths - where we end up with a positive result, i.e. username is valid.
- Unhappy-paths - where we end up with a negative result, i.e. username is not valid.
13. Usernames are case-insensitive can be considered happy or unhappy - it's up to you!
Happy-path tests
Let's get on with writing some unit tests, starting with the happy-path tests:
@implementation WBUsernameValidatorTests
// Omitted other code
#pragma mark - Tests
- (void)test_givenAValidUsernameWithMinimumLength_whenValidatorIsCalled_thenSuccessfulResponseIsReturned
{
// 1
self.stubService.stubbedSuccess = YES;
self.stubService.stubbedError = nil;
// 2
XCTestExpectation *expectation = [self expectationWithDescription:@"Completion called"];
// 3
[self.sut isValidUsername:@"te"
usernameCheckerService:self.stubService
completionHandler:^(BOOL success, NSError *error) {
// 4
XCTAssertTrue(success, @"Validation should succeed for a valid, unique username");
XCTAssertNil(error, @"No error should be returned for valid username");
XCTAssertTrue(self.stubService.wasCalled, @"Service should be invoked for valid input");
XCTAssertEqualObjects(self.stubService.lastUsername, @"te", @"Username should have been normalised");
[expectation fulfill];
}];
[self waitForExpectationsWithTimeout:1
handler:nil];
}
@end
Let's break down what the above test is doing:
- We set up our test double with the responses needed for this scenario.
- As isValidUsername:usernameCheckerService:completionHandler: is an asynchronous method, we need to use expectations to ensure that we wait for the completionHandler block to be called before exiting this test.
- Trigger the validation check.
- Assert that the outcome of the validation check matches what we expected.
The rest of the happy-path unit tests follow the same pattern; I won't include them here for the sake of post length. I'm sure you can imagine the variations.
Unhappy-path tests
With our happy-path tests written, let's move on to the unhappy-path tests:
@implementation WBUsernameValidatorTests
// Omitted other code
- (void)test_givenAUsernameWithLengthLessThanTheMinimumLength_whenValidatorIsCalled_thenFailureResponseIsReturned {
// 1
XCTestExpectation *expectation = [self expectationWithDescription:@"Completion called"];
[self.sut isValidUsername:@"a"
usernameCheckerService:self.stubService
completionHandler:^(BOOL success, NSError *error) {
XCTAssertFalse(success, @"Validation should fail for a username that is too short");
// 2
XCTAssertEqual(error.code, 100, @"Too short error should have been returned");
XCTAssertFalse(self.stubService.wasCalled, @"Validation should fail before reaching the service");
[expectation fulfill];
}];
[self waitForExpectationsWithTimeout:1
handler:nil];
}
@end
The above method is similar to the happy-paths, but with a few important differences:
- As this test shouldn't hit the WBStubWBUsernameCheckerService instance, there is no need to set it up with a canned response.
- The error code is checked to assert that the right validation rule caused this failure.
The rest of the unhappy-path unit tests follow the same pattern; I won't include them here for the sake of post length. I'm sure you can imagine the variations.
That's all our unit tests written - let's see an overview of what we've achieved.
For a more in-depth look at how to write unit tests, read How Unit Tests Protect Creativity and Speeds Up Development.
Our systematic approach
Congratulations on making it this far 🥳 - this post was on the rather large side of things.
Through our systematic approach to identifying test cases for username validation, we've demonstrated several key principles that apply to any testing scenario:
- Read the specification - The specification is your most valuable source of test cases because it represents what the code should do, not what it currently does. This outside-in approach prevents you from writing tests that validate existing bugs or implementation quirks.
- Method signatures reveal hidden requirements - Don't overlook what the method signature tells you. Nullable parameters, return types, and async patterns all suggest additional test scenarios that might not be obvious from the specification alone.
- Implementation details matter for completeness - While you shouldn't test implementation details, examining the code helps you spot hidden dependencies, edge cases, and error paths that need coverage. The goal is to identify what needs testing, not to couple your tests to specific implementations.
- Consolidation prevents test bloat - Raw test case identification often produces overlapping scenarios. Smart consolidation creates a maintainable test suite that still provides comprehensive coverage.
- Writing unit tests for what's left - Turn your test cases into unit tests. All that hard work is paying off with full coverage.
By following this outside-in approach, you move from ad-hoc test writing to systematic test design. The result is more comprehensive coverage, fewer missed edge cases, and tests that remain valuable as your code evolves.