Test-Driven Development as a Best Practice

Some best practices, like keeping your shoes on during meetings, are well known. Others, like test-driven development (TDD), require a different mindset to implement and maintain correctly. Before diving deeper into TDD, however, let’s start with some basic testing concepts.

A test compares an expected result with the one obtained when running a certain piece of functionality in the software. If the expected result matches the obtained one, the test passes; otherwise, it fails. Since developers love writing code, code is also what they use to write software tests. These tests can then be run automatically, which makes them a great asset in any CI/CD pipeline.
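
As a minimal sketch, here is what such a test can look like in Python; the add function and the file name are hypothetical stand-ins for real production code, and the test is runnable with a test runner such as pytest.

```python
# test_calculator.py -- a minimal, hypothetical example of a software test:
# it runs a piece of functionality and compares the obtained result with
# the expected one.

def add(a, b):
    """Stand-in for the real functionality under test."""
    return a + b

def test_add_returns_the_expected_sum():
    expected = 5
    obtained = add(2, 3)
    # The test passes if the obtained result matches the expected one,
    # and fails otherwise.
    assert obtained == expected
```

Run automatically (for example, by invoking pytest in a CI/CD pipeline), each test reports a pass or a failure with no human intervention needed.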

Unit Test Everything

A unit test is a test performed in isolation, abstracting the code under test from its dependencies on other components. Unit tests let you test a single concern or logical concept in the system, whether that is business logic or a cross-cutting concern such as data validation. Small tests that focus on individual code units have a huge advantage: when one fails, it is immediately obvious where the problem lies.
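
A rough illustration, assuming a hypothetical validate_email function as the single concern under test:

```python
# A unit test exercises one concern of one code unit, with no external
# dependencies involved. The validation function below is illustrative.
import re

def validate_email(value: str) -> bool:
    """Hypothetical cross-cutting validation logic: the unit under test."""
    return bool(re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", value))

def test_accepts_a_well_formed_address():
    assert validate_email("ada@example.com") is True

def test_rejects_an_address_without_a_domain():
    # If this fails, the problem can only be in validate_email itself.
    assert validate_email("ada@") is False
```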

A common way of isolating code units from their dependencies is to use test doubles. Mocks, fake objects, dummy objects, stubs and spies all fall into this category. While the differences between them are beyond the scope of this article, a test double essentially replaces a real object with a simulated one for testing purposes. Test doubles can usually be configured to return a desired, predictable output, and some can even run assertions on how the code under test interacted with them.
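
As a minimal sketch, assuming a hypothetical OrderService that depends on an external payment gateway, Python’s unittest.mock shows both ideas: configuring a predictable output and asserting on behavior.

```python
# A test double replaces a real dependency (here, a payment gateway)
# with a simulated object that behaves predictably.
from unittest.mock import Mock

class OrderService:
    """Hypothetical unit under test; it depends on an external payment gateway."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        return "confirmed" if self.payment_gateway.charge(amount) else "rejected"

def test_order_is_confirmed_when_the_payment_succeeds():
    gateway = Mock()
    gateway.charge.return_value = True         # configure the desired, predictable output
    service = OrderService(gateway)

    assert service.place_order(100) == "confirmed"
    gateway.charge.assert_called_once_with(100)   # assert on the double's interactions
```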

Since the code runs in isolation, unit tests are typically fast to execute. That makes your test suites convenient to run frequently, which in turn lets you detect issues in your code much earlier.

Everything works on its own. Does it all work together?

Unit testing determines whether each module works correctly on its own. Integration testing is when you connect those modules and check whether the separately developed pieces (often created by different teams) work together correctly.

The purpose of this level of testing is to find defects in how these modules interact when they work together. Even though each module has been tested in isolation, defects may still surface for various reasons once the modules are combined.
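
As a sketch of the idea (the UserRepository below and the in-memory SQLite database are illustrative assumptions, not components from any particular project), an integration test wires real pieces together instead of replacing them with doubles:

```python
# An integration test exercises real components working together.
# Here a hypothetical repository runs against a real (in-memory) SQLite database.
import sqlite3

class UserRepository:
    """Hypothetical module, perhaps developed by another team."""
    def __init__(self, connection):
        self.connection = connection
        self.connection.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.connection.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.connection.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def test_repository_and_database_work_together():
    repository = UserRepository(sqlite3.connect(":memory:"))

    repository.add("Ada")

    # Defects in the SQL, the schema or the wiring only show up here,
    # even if each piece passed its own unit tests.
    assert repository.count() == 1
```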

Integration tests obviously provide less isolation than unit tests and are, as such, slower to execute.

Tests that require your complete application to be running, rather than just the code responsible for the interaction between components, are usually referred to as end-to-end tests or system tests. These are obviously the slowest to run and usually require a substantial test environment that is as close as possible to the production one. Automated user interface tests, in which a script executes a series of actions against the application UI, are part of these system tests.
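
A sketch of such a UI-driven end-to-end test, here using Selenium WebDriver as one possible tool; the URL, element ids and credentials are placeholders for a real test environment:

```python
# A UI end-to-end test drives the complete, running application the way a user would.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_log_in():
    driver = webdriver.Chrome()
    try:
        driver.get("https://test-environment.example.com/login")   # placeholder URL
        driver.find_element(By.ID, "username").send_keys("demo")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # The whole stack (UI, backend, database) has to cooperate for this to pass.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```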

Automated Testing to Help You Sleep at Night

So now you have lots of unit tests for all your code units, many integration tests for your components working together and end-to-end tests on all your critical flows, and all your test case suites are run automatically. Great, so your software has no bugs, right?

Unfortunately not. No complex software is free of bugs. But you can safely assume that one with no tests has plenty of them.

A solid test case suite will reduce the number of bugs that make it to production, while allowing you to detect them much earlier. And bugs detected during development are obviously much, much cheaper than the ones detected in a live production application.

Although writing test cases is an investment of development time, it reduces the overall time you spend developing, troubleshooting, bug-fixing and maintaining your application over the project’s lifecycle. And you end up with a better-quality product.

TDD for You and Me

So, where does TDD come in? TDD was first introduced by Kent Beck, author of the seminal book “Test-Driven Development: By Example.” Using TDD on a daily basis requires a shift in perspective. The idea behind TDD is that you first write the test for the functionality you want to add. If you run the test now, it will fail, since you haven’t written the functional code yet. Then you write just enough code to make the test pass. Finally, with the passing test as a guarantee, you refactor and improve that code; if the refactoring went well, the tests still pass after the changes. These steps are often referred to as the red-green-refactor cycle.
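
A miniature pass through the red-green-refactor cycle, with an illustrative is_leap_year function (all names are hypothetical):

```python
# Red: write the tests first. They fail, because is_leap_year does not exist yet.
def test_2024_is_a_leap_year():
    assert is_leap_year(2024) is True

def test_1900_is_not_a_leap_year():
    assert is_leap_year(1900) is False

# Green: write just enough code to make the tests pass, e.g.
#     def is_leap_year(year):
#         return year % 4 == 0 and year != 1900
#
# Refactor: with the passing tests as a safety net, replace the crude version
# with a proper one. The tests must still pass after the change.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```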

TDD allows for safe refactoring. Developers can (and should!) improve their code with confidence, because their test suite gives them a safety net that confirms they did not break anything in the process.

Is TDD a must, you may ask? Not really: you will still get all the benefits of a strong test suite by writing the tests after the functional code, provided you always do. But TDD brings two key advantages. It forces developers to think about the interfaces, or “contracts”, between objects, that is, how components will talk to each other, before thinking about the implementation. And it ensures you have tests for all your functional code, since you only ever write code to make a test pass. Tests can no longer be an afterthought or something you drop as soon as the release deadline tightens.

TDD has become popular because it makes code easier to maintain. Developers can change working code without the fear of introducing new bugs every time they add a feature or fix a defect. Test-driven development builds debugging in by design, so a major benefit is less time spent fixing problems. As more time goes into new features, project costs decrease and the return on investment goes up.