ASP.NET Security Consultant

I'm an author, speaker, and generally a the-way-we've-always-done-it-sucks security guy who specializes in web technologies and ASP.NET.


How just about everyone gets unit testing wrong

Published on: 2015-05-10

One of the most effective ways a development team can improve is to use unit testing correctly. Most teams either don't use unit testing at all or use it far too much; it's tough to find the "sweet spot" where the tests increase quality without hindering productivity. But if you can strike that balance, you should be able to enjoy higher-quality software at a lower cost of creation.

What is unit testing?

Before I go much further, I should explain what "unit testing" actually is, because the term is misused quite frequently. Unit testing is the act of testing a small component, or unit, of your software application. Because the scope of each individual unit test is so limited, the only way to achieve it is to write code that tests your code, usually using a framework like NUnit or MSTest (Microsoft's testing framework). A detailed description of how these frameworks work is beyond the scope of today's post, but in a nutshell, unit testing is when a developer writes a test method that calls "real" product code and reports when the actual results don't match the expected results.
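To make that concrete: my examples are usually NUnit/.NET territory, but the idea is language-agnostic, so here's a minimal sketch using Python's built-in unittest module (the `apply_discount` function and its values are hypothetical, invented just for illustration):

```python
import unittest

# The "real" product code: the unit under test.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # Each test method calls the real code and compares the actual
    # result against the expected one; the framework reports any
    # mismatch as a failure.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.00, 10), 180.00)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the test run.
    unittest.main(exit=False)
```

NUnit works the same way: attribute-decorated test methods call your product code and assert on the results.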

Confusingly, many developers who are unfamiliar with these testing frameworks refer to the manual testing they do as "unit testing." That isn't "unit testing"; that's just "testing".

Why in the world would I write code to test code?

To someone who isn't a software developer, the idea of writing code to test code may seem rather silly. But for those of us who actually do it, the benefits are easy to see:

  1. During a typical manual test of a system, you have to log in and perform a specific set of actions in order to exercise particular functionality, which is inefficient and time-consuming. Unit testing lets the developer perform specific, targeted testing on the area in question.
  2. When something does go wrong, the development team doesn't need to search the entire system for the source of the bug. They can run all of the previously created unit tests and narrow down their search.
  3. Finally, as I mentioned last week, rewriting/refactoring code periodically is vitally important for the long-term health of your system. Rerunning all of the unit tests is a great way to help ensure that you didn't break anything in the rewrite.
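Point 3 is worth a quick sketch. Continuing in Python for self-containment (the function and numbers here are hypothetical), the same assertions that documented the original behavior get rerun against the rewritten version:

```python
# The unit under test: originally a straightforward loop.
def total_order_value(quantities, unit_price):
    total = 0
    for q in quantities:
        total += q * unit_price
    return total

# Tests written against the original behavior.
def test_total_order_value():
    assert total_order_value([1, 2, 3], 10) == 60
    assert total_order_value([], 10) == 0      # edge case: empty order

# Later, someone refactors the loop into a one-liner...
def total_order_value_refactored(quantities, unit_price):
    return sum(quantities) * unit_price

# ...and rerunning the same assertions against the new version
# confirms the rewrite preserved the old behavior.
def test_refactored_version():
    assert total_order_value_refactored([1, 2, 3], 10) == 60
    assert total_order_value_refactored([], 10) == 0

test_total_order_value()
test_refactored_version()
```

If the refactoring had mishandled the empty-order edge case, the second test run would have caught it immediately.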

When unit testing can be taken too far

In my experience, software developers tend to think of things in terms of right or wrong. If it's right to write unit tests, then you must write unit tests for everything you do, right? Here are two unit testing beliefs that can do your project more harm than good.

Test Driven Development (TDD)

The idea behind Test Driven Development is that you write your unit test before you write your product code. You then write product code to make the test pass. If you need to add or change the functionality, you change the tests first and continue making fixes until all of your tests pass. This is a nice idea, but a good chunk of the typical developer's code just doesn't need to be unit tested. Complex business logic absolutely needs to have corresponding unit tests. But writing unit tests for simple logic will require the developer to spend more time writing tests than delivering value to the business.
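The TDD cycle itself is simple to sketch. Here's the red-green-refactor rhythm in miniature, again in Python (the `slugify` function is a hypothetical stand-in for whatever feature you're building):

```python
# Step 1 (red): write the test first, before any product code exists.
# At this point the test fails, because slugify isn't defined yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  ASP.NET  Rocks ") == "asp.net-rocks"

# Step 2 (green): write just enough product code to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Step 3 (refactor): clean up while keeping the test green, then
# repeat the whole cycle for the next piece of functionality.
test_slugify()
```

For complex business logic, that rhythm pays for itself. For a one-line formatter like this, the ceremony can easily cost more than it protects.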

100% Code Coverage

One common metric that software teams track is code coverage, i.e. the percentage of the product's code that is exercised by unit tests. Many software development managers believe that 100% code coverage is necessary to ensure that the code is tested adequately. What that belief misses is that heavily tested code is also tough to change: if unit tests are used excessively, teams find themselves weighing the cost of updating the existing tests every time they change the code, and those costs can spiral out of control.
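Here's what a coverage-chasing test tends to look like in practice (the `Invoice` class is hypothetical, invented for illustration):

```python
class Invoice:
    def __init__(self, customer, amount):
        self.customer = customer
        self.amount = amount

# A test written purely to push the coverage number up: it exercises
# trivial assignment logic that can barely fail on its own...
def test_invoice_stores_its_fields():
    invoice = Invoice("Acme", 99.95)
    assert invoice.customer == "Acme"
    assert invoice.amount == 99.95

test_invoice_stores_its_fields()
```

...yet that test still has to be revisited every time the constructor changes (say, when a currency parameter is added), adding maintenance cost without adding meaningful protection.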

So what is the right balance?

Unfortunately, there are no hard-and-fast rules for deciding which unit tests should be written, but here are some guidelines that I follow.

Consider writing unit tests when:

  • The logic behind the method is complex enough that you feel you need to test extensively to verify that it works.
  • A particular piece of code breaks and it takes longer than a minute or so to fix it.
  • It takes less time to write a unit test to verify that the code works than to start up the system, log in, recreate your scenario, etc.

Consider avoiding unit tests when:

  • Elaborate scaffolding, such as mock objects or a dependency injection container, must be created or installed just to get the tests to work.
  • The tests are applied to code that, if broken, has little bearing on overall software quality.
  • The cost of maintaining the set of tests is higher than the cost of maintaining the actual product code.

To summarize, unit tests are intended to help development teams reduce costs by cutting testing time, reducing the need for manual regression testing, and making much-needed maintenance easier. Writing unit tests is absolutely the right thing to do if you want your software project to be a success. However, development teams that find themselves maintaining large libraries of tests are causing many of the very problems that unit testing was meant to solve.

This article was originally posted here and may have been edited for clarity.