Sunday, 29 April 2018

# Unit test coverage and why it matters

Good unit test coverage can help you to improve your code in ways you might not expect. I'm not talking about just chasing some mythical value, like the agile team which subscribes to a contract of "85% coverage over the project", though chasing a number (the theoretical 100% coverage, which I've only ever achieved in one project) can lead you to some interesting discoveries.

Obviously, we can have bogus coverage:

```csharp
using System.Net; // for WebException
using NUnit.Framework;
using static NExpect.Expectations;
using NExpect;

[TestFixture]
public class TestTheThing
{
  [Test]
  public void ShouldDoTheStuff()
  {
    // Arrange
    var sut = new TheThing();
    // Act
    var result = sut.DidTheStuff();
    // Assert
    Expect(result).To.Be.True();
  }
}

public class TheThing
{
  public bool DidTheStuff()
  {
    try
    {
      WriteOutSomeFile();
      MakeSomeWebCall();
      return true;
    }
    catch (WebException)
    {
      return false;
    }
  }
  // ... let's imagine that WriteOutSomeFile 
  // and MakeSomeWebCall are defined below...
}
```

The above test doesn't actually check that the correct file was written out -- or even that the correct web request happened. That one test technically covers nearly the whole class (everything except the `catch` block, assuming WriteOutSomeFile and MakeSomeWebCall have no branches), but it's a bit anemic in that the coverage doesn't tell us much.
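For contrast, here's a sketch of what a less anemic test might look like, assuming the class were refactored to take injectable collaborators. The `IFileWriter` / `IWebCaller` interfaces, the `RefactoredThing` class, and the NSubstitute mocks are all illustrative additions, not part of the code above:

```csharp
using System.Net;
using NSubstitute;
using NUnit.Framework;
using static NExpect.Expectations;
using NExpect;

// Hypothetical seams -- not in the original class above:
public interface IFileWriter { void WriteOutSomeFile(); }
public interface IWebCaller { void MakeSomeWebCall(); }

// A refactored TheThing which takes its collaborators via the constructor
public class RefactoredThing
{
  private readonly IFileWriter _fileWriter;
  private readonly IWebCaller _webCaller;

  public RefactoredThing(IFileWriter fileWriter, IWebCaller webCaller)
  {
    _fileWriter = fileWriter;
    _webCaller = webCaller;
  }

  public bool DidTheStuff()
  {
    try
    {
      _fileWriter.WriteOutSomeFile();
      _webCaller.MakeSomeWebCall();
      return true;
    }
    catch (WebException)
    {
      return false;
    }
  }
}

[TestFixture]
public class TestRefactoredThing
{
  [Test]
  public void ShouldWriteTheFileAndMakeTheWebCall()
  {
    // Arrange
    var fileWriter = Substitute.For<IFileWriter>();
    var webCaller = Substitute.For<IWebCaller>();
    var sut = new RefactoredThing(fileWriter, webCaller);
    // Act
    var result = sut.DidTheStuff();
    // Assert
    Expect(result).To.Be.True();
    fileWriter.Received(1).WriteOutSomeFile();
    webCaller.Received(1).MakeSomeWebCall();
  }
}
```

Now the same green coverage comes with assertions that the interesting side effects actually happened.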

So coverage is not a number which definitively tells you that your code (or your tests) is good. However, examining coverage reports (particularly the line-by-line analysis) has helped me to discover at least three classes of error:

## I found bugs I didn't know I had

When I was still at [Chillisoft](http://www.chillisoft.co.za), I'd just finished a unit of code (TDD, of course) to my satisfaction and decided to run coverage on it for interest's sake. I was convinced that I'd done a good job, writing a test before each required line of production code. To my dismay, I found that there was one line which _wasn't_ covered. _Shame on me_, I thought, and went back to the test fixture, _where I found a test describing the exact situation that line should be handling_. OK, so I had this test, it named the situation -- but there was no coverage on the line?

_Remember: coverage reports **can** be faulty. It's rare, but it's worthwhile re-running your reports just to make double-sure that what you're seeing is correct._

I re-ran my coverage, but that one line remained red. And the more I looked at it, the more it looked like it should actually be causing a test to fail. So I re-examined the test which was supposed to be covering it, only to find that I'd made a mistake in that test: it wasn't actually running through the branch with the uncovered line.

The fault here most likely comes down to one faulty TDD cycle where I hadn't gotten a good "red" before my "green". Still, examining the coverage report made me find and fix the error before the code got anywhere near production. This experience is why I advocate running coverage whenever you reach a point where you expect the current unit of work to be complete -- to find any holes in testing, or defects in logic hidden in those holes. It's also why I (convincingly) argued for all Chillisoft programmers to be granted a ReSharper Ultimate license, which has dotCover built right into the test runner. We had coverage reports running on the CI server, but I wanted every developer to be able to check coverage quickly, so that they could also discover flaws in their code before that code got to production -- or even to another developer's machine!

## I found dead code

Just recently, I finally got the `gulp` build tasks for `NExpect` to include coverage by default when running `npm test`. And to my dismay, `NExpect` only had about 78% coverage. Which I thought was odd, because `NExpect` was built very much test-first: indeed, the general method of operation was to write out a test with the desired expression and then provide the code to make that expression happen. So, for example:

```csharp
Expect(someCollection).To.Contain
  .Exactly(1).Deep.Equal.To(search);
```

would have started out with most of those words red (thanks to ReSharper), and they would unhighlight as I built up the class/interface linkage to make the words flow as desired. I expected coverage to be closer to 90% (I did expect some places to have been missed, never having scrutinised coverage reports for `NExpect` before), but 78%? I had some work to do.

In addition to finding a few minor bugs that I didn't know I had (particularly with the wording of failure messages in a few cases), I found that I had bits of code which theoretically should have been under test, but which weren't covered -- especially stuff like:

```csharp
[Test]
public void ComparingFloats()
{
  Expect(1.5f).To.Be.Greater.Than(1.4f)
    .And.Less.Than(1.6f);
}
```

which works as expected, but never hits the `Than` extension methods for `float` continuations.

The answer became obvious upon hovering over the usages -- each `Than` was expecting to operate on a `double` as the _subject (actual) value_ (i.e. the value being tested; in this case, `1.5f`). This is because `Expect` upcasts certain numeric types (floats to doubles, ints to longs, etc.) so that the comparison code doesn't require casting from the consumer (since `NExpect` continuations hold the type of the subject all the way through, instead of downcasting to `object` and hoping for the user to provide reasonable values). There's nothing wrong (that I can tell) with this approach -- and it works well -- but it _does_ mean that the `Than` extension methods expecting to operate on `float` and `int` subjects could never be reached. They were dead code! So I could safely remove them. One of the ways to make code better is to remove the unnecessary bits (:
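To make the dead-code mechanics concrete, here's a minimal, self-contained sketch of the upcasting idea. The `Expectation<T>` / `GreaterThan` names are hypothetical stand-ins, not `NExpect`'s actual internals:

```csharp
using System;

// A hypothetical continuation type, standing in for NExpect's internals
public class Expectation<T>
{
  public T Actual { get; }
  public Expectation(T actual) => Actual = actual;
}

public static class UpcastSketch
{
  // The float entry point upcasts, so every float subject
  // becomes a double continuation...
  public static Expectation<double> Expect(float actual)
    => new Expectation<double>(actual);

  public static Expectation<double> Expect(double actual)
    => new Expectation<double>(actual);

  // ...meaning only this double-flavoured extension is ever reachable;
  // an equivalent extension on Expectation<float> would be dead code.
  public static void GreaterThan(
    this Expectation<double> continuation, double other)
  {
    if (continuation.Actual <= other)
      throw new Exception(
        $"Expected {continuation.Actual} to be greater than {other}");
  }
}
```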

## I found holes in my API

This was, again, while working in `NExpect`: upon checking coverage for one variant of syntax, I would find that I hadn't implemented another. For example, `NExpect` has no opinion on which of these is better:

```csharp
Expect(1).To.Not.Equal(2);
Expect(1).Not.To.Equal(2);
```

All `NExpect` does is prevent silliness like:

```csharp
Expect(1).Not.To.Not.Equal(1);
```

`NExpect` is designed with user-extensibility as one of its primary goals. As such, there are some "dangling" words, like `A`, `An`, and `Have`, so that the user can provide her own expressive extensions:

```csharp
var dog = animalFactory.CreateDog();
Expect(dog).Not.To.Be.A.Cat();
Expect(dog).To.Be.A.Dog();
Expect(dog).To.Be.A.Mammal();
```

Here the user can use matchers or composition to provide the logic for the `Cat`, `Dog`, and `Mammal` extension-method assertions. `NExpect` doesn't actually provide extensions on these "danglers" -- they're literally just there for custom extension.
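For example, a `Cat` matcher might look something like the sketch below. The `Animal` type is hypothetical, and the `IA<T>` / `AddMatcher` / `MatcherResult` plumbing is as I understand `NExpect`'s extension API -- check the library's documentation for the authoritative shape:

```csharp
using NExpect.Interfaces;
using NExpect.MatcherLogic;

// Hypothetical domain types for the example:
public enum AnimalKinds { Cat, Dog }
public class Animal
{
  public AnimalKinds Kind { get; set; }
}

public static class AnimalMatchers
{
  // Hangs off the dangling "A" word: Expect(animal).To.Be.A.Cat();
  public static void Cat(this IA<Animal> a)
  {
    a.AddMatcher(actual =>
    {
      var passed = actual?.Kind == AnimalKinds.Cat;
      return new MatcherResult(
        passed,
        $"Expected {actual} {(passed ? "not " : "")}to be a cat");
    });
  }
}
```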

Whilst running coverage, I found that one variant of `Contain` wasn't covered, and when I wrote a test to go through all three variants (positive, negative, and the alternative negative), I found that there were missing implementations -- which I naturally provided (:
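The kind of three-variant test I mean is sketched below -- the values are illustrative, and the specific `Contain` overload that was missing isn't recorded here:

```csharp
[Test]
public void ContainVariants()
{
  // Arrange
  var collection = new[] { 1, 2, 3 };
  // Assert: positive, negative, and alternative-negative forms
  Expect(collection).To.Contain.Exactly(1).Equal.To(2);
  Expect(collection).Not.To.Contain.Exactly(1).Equal.To(5);
  Expect(collection).To.Not.Contain.Exactly(1).Equal.To(5);
}
```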

## Using coverage to make your code better

Coverage reports like those generated by `dotCover`, or by the combination of `OpenCover` and `ReportGenerator`, can not only give you confidence in your code (and a fuzzy feeling inside at a number which shows that you care about automated testing) -- they can also help you to make your code (and tests) better. And they can make _you_ better going forward, because you learn more about the mistakes you make along the way.

If you want to get started relatively easily and you're in the .net world, you can use [gulp-tasks](https://github.com/fluffynuts/gulp-tasks) as a submodule in your git repo. Follow the instructions in the `start` folder and get to a point where you can run `npm run gulp cover-dotnet` (or make this your `test` script in `package.json`). This should:
- build your project
- run your tests, using `OpenCover` and `NUnit`
- generate html reports using `ReportGenerator`, under a `buildreports` folder

You can always check out [NExpect](https://github.com/fluffynuts/NExpect) to see how I get it done there (:
