The best thing tests give us is "feedback". Feedback as to whether our designs are good, whether there are bugs, and whether we're making progress. If part of what it means to be a good developer is to write testable code, the path of least resistance is to use a test-first approach. In this series about Test-Driven Development, we seek to master the art of testing, regardless of which side of the stack you're on (front-end or back-end).
Over the past year, I've been paying more attention to how traditional tradespeople work. I've been looking for parallels between how they work and how we work as software developers.
When I most recently took my car in to get an oil change, I noticed that the technicians seemed to consistently, reliably, and confidently get the job done, and get it done right. One client after the next. They weren't making messes. They weren't missing their estimates. They weren't discovering that things wouldn't work at the last minute. In fact, I've even seen them come out two minutes into the job to let someone know that there's more work to be done than they initially expected.
It appears, at least to me, as if these are principled workers, and — forgive the pun, but it seems like they're not reinventing the wheel.
Part of what it means to be an Agile software craftsperson is to consistently deliver value to the customer. That's hard. But that's what we're constantly trying to do here.
The goal of this blog is to discover the best techniques for writing testable, flexible, maintainable code, and to teach others how to do it too. Today, the software quality attribute we're most interested in is testability.
And in my experience:
The best way to write testable code is to write the test first
Introduction
This is the first post in a series on Test-Driven Development (TDD): a test-first technique for developing software 🧪.
In this introductory post, you'll build a beginner foundation for TDD. We'll learn about what TDD is, what makes it important, and how developers are using it to consistently deliver value on real-life projects. We'll also discuss what makes testing so challenging to get right, and in the end, we'll wrap up with a demonstration of the Classic TDD process, using TDD to construct a palindrome checker.
You'll learn everything you need to know to get started practicing the TDD Red-Green-Refactor process. This is just the starting point. We need to get this foundation down first before we can learn how to apply advanced TDD techniques in the real world.
Why tests?
There are lots of reasons why tests are helpful, but the two best reasons, in my opinion, are confidence and feedback.
They say that change is a constant in software development.
To find the confidence to safely add, remove, or refactor code without the fear of introducing bugs and regressions, we need tests.
This is especially important later on in a project when the codebase is much larger than it was at the beginning, and there's way more code than one human being can mentally account for anymore.
I can tell you from experience that this is not a fun situation to be in: thousands of lines of code into a project with no tests and no safety net. It's a great way to turn a promising codebase into an unstable mess.
In my opinion, the most important reason for tests is feedback.
Tests give us feedback to let us know:
- When regressions have been introduced
- What our progress towards implementing a feature looks like
- That we actually understand the customer requirements
- And if our designs are feasible
Depending on how you think about it, confidence may actually come from the feedback we get from tests.
[Principle]: Listen to your tests — Tests that are hard to write typically signal a deficiency in design. Use the immediate feedback you get from feeling out how hard it is to write a particular test to reconsider the design.
Feedback is so important that Kent Beck lists it as one of the primary values of Extreme Programming: the influential Agile software development methodology.
More reasons to write tests
- Measure progress
- Sculpt out public APIs
- Understand requirements
- Keep a feature in scope
- Documentation for other developers
Why testing is hard
I didn't start writing tests until later in my career. When I first started out, I knew that we should probably have them, but as for writing them? Yeah, right. Like many new developers, I was lost as to how to even get started. After surveying the landscape, here's why I think testing is hard.
Testing isn't regularly taught in schools
This may not be the case for some readers, but the majority of my college/university/bootcamp-going peers weren't exposed to how to properly test code until a mentor sat down and showed them how to do it.
As an industry, we struggle to agree on testing terminology
Ask two developers what they believe an integration test is. As Fowler writes, there are two completely different notions of what this means, and we still haven't converged on a standard.
This is true for many of the other test types as well (unit, acceptance, E2E, contract, etc).
It also appears that your organization's style and your role (front-end, back-end, full-stack) shape how you see certain types of tests as well.
Understanding what to test and how to test it takes practice
Knowing what to test is hard. And this is especially misleading for newer developers, since a lot of libraries and frameworks tell you how to test code written with their tools, but don't actually give you best practices for testing your own code.
The magic rule here is to test against behavior. And if you understand the requirements, you can pretty much turn those user stories or customer requirements directly into tests.
[Principle]: Prefer tests against behavior, not implementation — Seek to test "behavior" using the language of the domain to write the tests. There will be times when the tests you need to write are against more technical concepts (i.e. integration tests), but you can still aim to test behavior, not implementation.
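To make this concrete, here's a minimal sketch of a behavioral test. The domain object and its names are hypothetical, and the Jest-style syntax is just an assumption to illustrate what "testing behavior in the language of the domain" can look like:

```ts
// Hypothetical domain object, for illustration only.
class ShoppingCart {
  private items: { name: string; price: number }[] = [];

  addItem(name: string, price: number): void {
    this.items.push({ name, price });
  }

  get total(): number {
    return this.items.reduce((sum, item) => sum + item.price, 0);
  }
}

// Behavioral test: it asserts on what the customer cares about (the total),
// not on how the cart stores its items internally.
describe('shopping cart', () => {
  it('calculates the total of the items added to it', () => {
    const cart = new ShoppingCart();

    cart.addItem('Extreme Programming Explained', 30);
    cart.addItem('Growing Object-Oriented Software', 40);

    expect(cart.total).toBe(70);
  });
});
```

Notice that nothing in the test mentions arrays, reducers, or any other implementation detail. If we later swapped the internal array for a different data structure, the test would still pass.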
As for knowing how to test behavior— that's a different story.
Most of the time, the code we write is more complex than plain ol' vanilla JavaScript or TypeScript. Typically, we're writing code that relies on dependencies like web servers, caches, databases, and even front-end library code like React or Vue.js 3.
It takes some up-front planning and foresight to figure out how we're going to test our code in most of these scenarios.
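As a rough sketch of what that planning can look like, one common approach is to describe a dependency (a database, in this hypothetical example) behind an interface, so a test can substitute an in-memory fake for the real thing. The names below are made up, and the test again assumes Jest-style syntax:

```ts
// An interface describing only what we need from the database,
// so the real implementation can be swapped for a fake in tests.
interface UserRepository {
  findByEmail(email: string): Promise<{ email: string } | null>;
}

class RegisterUser {
  constructor(private readonly users: UserRepository) {}

  async execute(email: string): Promise<'registered' | 'email-taken'> {
    const existing = await this.users.findByEmail(email);
    return existing ? 'email-taken' : 'registered';
  }
}

// The test runs without a web server, cache, or database;
// an in-memory fake stands in for the real repository.
it('rejects a registration when the email is already taken', async () => {
  const fakeRepo: UserRepository = {
    findByEmail: async () => ({ email: 'khalil@example.com' }),
  };

  const result = await new RegisterUser(fakeRepo).execute('khalil@example.com');

  expect(result).toBe('email-taken');
});
```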
We leave testing until the very end
I also think that testing is hard because developers often write tests after the production code has been written. While this approach sometimes works, I don't think it's the best way to go about things.
Writing all the tests at the end isn't really acting as if we value feedback, because we're leaving all the uncertainty of whether we can even test this thing, whether it was designed well, and whether it even works to the very end.
Testing is a part of architecture
This has been a massive realization for me. And I hope it will be for you as well. Testing is a part of architecture.
Ralph Johnson, co-author of the famous Design Patterns book, said this of architecture:
"... it is the decisions you wish you could get right early in a project”
Architecture is about the expensive, hard to change stuff like choosing a tech stack (React, Apollo, GraphQL, Mongo), an architectural style (Reactive, Event-Driven, Transaction Script), or in this case — our testing approach.
The trouble with not thinking about how we're going to test early on is that we leave a lot of room for uncertainty later in the project.
We're not sure whether there are edge cases we're missing, whether there are structural problems with the way we've written our code, or whether we're even going to be able to test the thing.
Your stress level over time when you realize code is hard to deploy and test towards the end of the project.
[Principle]: Expose uncertainty early — Decide on how you're going to test and deploy your application at the start of the project. Get your test architecture set up in Sprint 0.
As Agile software developers, we should value feedback, exposing bad design and uncertainty as early as possible.
"I'm not a great programmer, I'm just a good programmer with great habits" — Kent Beck, the creator of Extreme Programming
At this point, I hope you're sold on what tests can do for us and why we'd want them.
Now let's talk about the TDD process.
The Test-Driven Development (TDD) Process
TDD (Test-Driven Development) is a technique — a process for developing software. The goal is to keep code quality high and keep you productive, even as projects grow to be really large and complex.
Red-Green-Refactor
The TDD process works by following the Red-Green-Refactor loop. It goes:
- Red — Write a failing test
- Green — Write just enough code to pass the failing test
- Refactor — Criticize the design and refactor the code, keeping the tests intact
We should like this process because it keeps tight feedback loops. It gives us the ability to produce cleaner, simpler designs and helps us introduce abstractions only when they are absolutely necessary (see YAGNI — You Aren't Gonna Need It).
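To give the loop a little more shape before the full palindrome checker walkthrough at the end of this post, here's a rough, hypothetical sketch of a single pass. The function name and test cases are placeholders, and the syntax assumes a Jest-style test runner:

```ts
// Red: write the failing tests first. At this point isPalindrome
// doesn't exist yet (or returns the wrong thing), so these fail.
describe('palindrome checker', () => {
  it('recognizes "racecar" as a palindrome', () => {
    expect(isPalindrome('racecar')).toBe(true);
  });

  it('recognizes "banana" as not a palindrome', () => {
    expect(isPalindrome('banana')).toBe(false);
  });
});

// Green: write just enough code to make the tests pass.
function isPalindrome(value: string): boolean {
  return value === value.split('').reverse().join('');
}

// Refactor: with the tests green, we're free to criticize the design
// (naming, handling of casing and punctuation, etc.) while the tests
// keep us honest.
```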
Should I always follow the TDD loop?
I know what you're thinking. Khalil, you can't expect me to completely follow this rule. You probably don't even do this. You're right. I don't always follow it. Rules are a great way to get started, but sometimes I break them. When I'm driving, I'll sometimes perform a rolling stop instead of coming to a complete stop. When I'm crossing the street, sometimes I'll take a quick gander to my left and right before jaywalking. I'm of the mindset that if you master this technique, you'll have the skill to decide when to break it, and to do so with confidence.
Types of tests (unit, integration, E2E, acceptance)
There are different types of tests, and you can apply the Red-Green-Refactor process to each of them. As we mentioned earlier, the scopes of these tests are up for interpretation, and they may be slightly different if you're a front-end or back-end developer, but in general, they are:
- Unit ⭐ — Test an individual, isolated component
- Integration ⭐ ⭐ ⭐ ⭐ — Test that multiple units work together, OR "tests that confirm our code works against code we don't own" (like external APIs, databases, caches, etc)
- End-to-End ⭐ ⭐ — Tests that act as a user actually using the application; they exercise the entire stack from top to bottom
- Acceptance ⭐ ⭐ ⭐ ⭐ ⭐ — Domain-driven tests that verify a user story (also comparable to a use case, customer test, command/query, feature, or vertical slice) works as expected

The big picture: How TDD works on real-life projects
Allow me to give you the big picture so you can see where we're going with TDD and why I find it so incredibly powerful.
In the real-world, a good testing architecture typically involves more than one type of test.
In Extreme Programming, tests are a mandatory part of planning and feedback loops.
As written about in the influential "Extreme Programming Explained" and "Growing Object-Oriented Software Guided By Tests" books, the test we start with is the acceptance test: that is — the test that most closely represents the feature we want to build. From here, we build out the internals of the feature using other tests including unit tests.
Starting with the acceptance test
Starting with the acceptance test, we convert the user story into a behavioral test written using the exact same language from the domain. This means our tests should read like plain English and represent the exact user story that we're about to build.
Acceptance tests read like domain-driven customer requirements. These are the user stories, commands/queries, use cases, and so on.
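For example, a user story like "As a visitor, I can subscribe to the newsletter" might be sketched as the acceptance test below. Everything here (the Customer class, the helper, the Jest-style syntax) is hypothetical; the point is that the test mirrors the story's Given/When/Then language:

```ts
// Hypothetical domain code, just enough to make the sketch self-contained.
class Customer {
  private subscribed = false;

  async subscribeToNewsletter(email: string): Promise<{ isSubscribed: boolean }> {
    if (email.includes('@')) this.subscribed = true;
    return { isSubscribed: this.subscribed };
  }
}

const givenACustomerWhoIsNotSubscribed = async () => new Customer();

// The acceptance test reads like the user story itself.
describe('Feature: subscribing to the newsletter', () => {
  it('subscribes a new customer with a valid email address', async () => {
    // Given a customer who is not yet subscribed
    const customer = await givenACustomerWhoIsNotSubscribed();

    // When they subscribe with a valid email address
    const result = await customer.subscribeToNewsletter('khalil@example.com');

    // Then they should be on the mailing list
    expect(result.isSubscribed).toBe(true);
  });
});
```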
From here, we identify the objects and their public APIs (properties, methods, etc) needed to realize the feature, staying at a single layer of abstraction. When we need to, we step a layer deeper into the objects we come up with and write more specific tests.
This technique is called Double Loop TDD and it's how we sculpt our solution to fit our tests.
Double Loop TDD
The idea of Double Loop TDD is to maintain two TDD loops.
- The outer loop is the acceptance test loop — it catches regressions
- The inner loop is the unit test loop — it measures progress towards implementing a feature

When we're writing tests for the outer acceptance test loop, we're usually coding from the outside-in. When we're writing tests for the inner unit test loop, we're coding inside-out.
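In miniature, and with entirely made-up names, the two loops might look something like this: the outer test exercises the whole feature from the outside, while the inner test drives out one of the objects we discovered along the way (again assuming Jest-style syntax):

```ts
// A small object "discovered" while driving the feature from the outside in.
class PasswordPolicy {
  isValid(password: string): boolean {
    return password.length >= 8;
  }
}

class SignUp {
  constructor(private readonly policy: PasswordPolicy) {}

  register(email: string, password: string): 'success' | 'weak-password' {
    return this.policy.isValid(password) ? 'success' : 'weak-password';
  }
}

// Outer loop: an acceptance test for the user story. It catches regressions.
describe('Feature: signing up', () => {
  it('rejects a sign-up with a weak password', () => {
    const result = new SignUp(new PasswordPolicy()).register('khalil@example.com', '123');
    expect(result).toBe('weak-password');
  });
});

// Inner loop: a unit test that drives out the PasswordPolicy object itself.
// It measures our progress towards implementing the feature.
describe('PasswordPolicy', () => {
  it('requires at least 8 characters', () => {
    expect(new PasswordPolicy().isValid('short')).toBe(false);
    expect(new PasswordPolicy().isValid('long enough')).toBe(true);
  });
});
```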
Inside-Out and Outside-In are also the names of two schools of thought for TDD.