Sunday, 8 October 2017

Everything sucks. And that's OK.

There is no perfect code, no perfect language, no perfect framework or methodology. Everything is, in some way, flawed.

This realisation can be liberating -- when you accept that everything is flawed in some way, it puts aside petty arguments about the best language, editor, IDE, framework, database -- whatever some zealot is peddling as the ultimate solution to your problems. It also illuminates the need for all of the different options out there -- new programming languages are born sometimes on a lark, but often because the creator wanted to express themselves and their logical intent better than they could with the languages that they knew. Or perhaps they wanted to remove some of the complexity associated with common tasks (such as memory allocation and freeing, or asynchronous logic). Whatever the reason, there is (usually) a valid one.

The same can be said for frameworks, databases, libraries -- you name it. Yes, even the precious pearls that you've hand-crafted into existence.

In fact, especially those.

We can be blind to the imperfections in our own creations. Sometimes it's just ego in the way; sometimes it's a blind spot. Sometimes it's because we tie our own self-worth to the things we create.

"You are not your job, you're not how much money you have in the bank. You are not the car you drive. You're not the contents of your wallet. You are not your fucking khakis."

For the few that didn't recognise the above, it was uttered by Tyler Durden from the cult classic Fight Club. There are many highlights one could pick from that movie, but this one has been on my mind recently. It refers to how people define themselves by their possessions, finding their identity in material items which are, ultimately, transient.

Much like how we're transient, ultimately unimportant in the grand scale of space and time that surrounds us. Like our possessions, we're just star stuff, on loan from the universe whilst we experience a tiny fraction of it.

I'd like to add another item to the list above:


You are not the code you have written.



This may be difficult to digest. It may stick in your throat, but some freedom comes in accepting this.

As a developer, you have to be continually learning, continually improving, just to remain marginally relevant in the vast expanse of languages, technologies, frameworks, companies and problems in the virtual sea in which our tiny domains float. If you're not learning, you're left behind the curve. Stagnation is a slow death which can be escaped by shaking up your entire world (eg forcing learning by hopping to a new company) or embraced by simply accepting it as your fate as you move into management, doomed to observe others actually living out their passions as they create new things and you report on it. From creator to observer. You may be able to balance this for a while, you may even have the satisfaction of moving the pieces on the board, but you've given up something of yourself, some part of your creator spirit.

Or at least that's how it looks from the outside. And that's pretty-much how I've heard it described from the inside. I wouldn't know, personally. I'm too afraid to try. I like making things.

This journey of constant learning and improvement probably applies to other professions too, especially those requiring some level of creativity and craftsmanship from their practitioners. You either evolve to continue creating, or you fade away into obscurity.

And if you are passionate about what you're doing, if you are continually learning, continually hungry to be better at what you do, continually looking for ways to evolve your thought processes and code, then inevitably, you have to look back on your creations of the past and feel...

Displeased.

Often I can add other emotions to the mix: embarrassed, appalled, even loathing. But at the very least, looking back on something you've created, you should be able to see how much better you'd be able to do it now. This doesn't mean that your past creations have no value -- especially if they are actually useful and in use. It just means that a natural part of continual evolution is the realisation that everything you've ever done, everything you ever will do, at some point, upon reflection, sucks.

It starts when you recognise that code you wrote a decade ago sucks. It grows as you realise that code you wrote 5 years, even 2 years ago sucks. It crescendos as you realise that the code you wrote 6 months ago sucks -- indeed, even the code you wrote a fortnight ago could be done better with what you've learned in the last sprint. 

It's not a bad thing. The realisation allows you to divorce your self-worth from your creations. If anything, you could glean some of your identity from the improvements you've been able to make throughout your career. Because if you realise that your past creations all, in some way or another, suck, and that this truth will come to pass for your current and future creations too, then you must also come to the conclusion that spotting the deficiencies in your past accomplishments highlights your own personal evolution.

If you look back over your past code and you don't feel some kind of disappointment, if you can't point out the flaws in your prior works, then I'd have to conclude that you're either stagnating or you're deluding yourself -- probably out of pride, perhaps because you've attached your self-worth to the creations you've made. Neither is a good place to be -- you're either becoming irrelevant or you're unteachable and you will become irrelevant.

If you have come to accept this truth, it also means that you can treat well-reasoned criticism as an opportunity to learn instead of an attack on your character.

I watched a video where the speaker posits that code has two audiences: the compiler and your co-workers. The compiler doesn't care about style or readability. The compiler simply cares about syntactical correctness. I've tended to take this a little further with the mantra:


Code is for co-workers, not compilers.



I can't claim to be the original author -- but I also can't remember where I read it. In this case, it does boil down to a simple truth: if your code is difficult for someone else to extend and maintain, it may not be the sparkling gem you think it is. If a junior tasked with updating the code gets lost, calls on a senior for help, and that senior can't grok your code -- it's not great. If that senior points that out, then they are doing their job as the primary audience of that code. This is not a place for conflict -- it is a place for learning. Yes, sometimes code is just complex. Sometimes language or framework features are not obvious to an outside programmer looking in. But it's unusual to find complex code which can't be made understandable. Einstein reputedly said:

“If you can't explain it to a six year old, you don't understand it yourself.” 

And I'd say this extends to your code -- if your co-workers can't extend it, learn about the (potentially complex) domain, or work with the code you've made, then you're missing a fundamental reason for the code to exist: explaining the domain to others by modelling it and solving problems within it.

Working with people who haven't accepted this is difficult -- you can't point out flaws that need to be fixed or, in extreme cases, even fix them without inciting their wrath. You end up having to guerilla-code to resolve code issues or just bite your tongue as you quietly sweep up after them. Or worse -- working around the deficiencies in their code because they insist you depend on it whilst simultaneously denying you the facility to better it.

People like this get easily offended and may use whatever power is available to them to trip you up -- after all, anything you've said about their code, or done to improve it, is, from their viewpoint, a personal attack. Expect them to get personal with you too, perhaps even pulling your character down in front of your team. At some point, you begin to fear having to actually work with them, or on something they've worked on, because you just know that the drama will come.

There's not a lot you can do about it, and the only solace you can find is knowing that they are fading away into irrelevance -- and hopefully you aren't. Also, at some point, we've probably all felt the pang when someone pointed out a flaw in our code. Hopefully, as we get older and wiser, this falls away. Personally, I think that divorcing your self-image from your creations -- allowing yourself to be critical of the things you've made -- is one of the marks of maturity that defines the difference between a senior developer and a junior developer. That's not to say that a junior can't master this already -- more that I question the "seniority" of a senior who can't do this. It's just one of the skills you need to progress, like typing or learning new languages.

All of this isn't to say that you can't take pride in your work or that there's no point trying to do your best. We're on this roundabout trying to learn more, to evolve, to be better. You can only get better from a place where you were worse. You can also feel pride in the good parts of what you've created, as long as that is tempered by a realistic, open view on the not-so-good parts. 

You may even like something you've made, for a while, at least. I'm currently in a bit of a honeymoon phase with NExpect (GitHub, Nuget): it's letting me express myself better in tests, and it's providing value to a handful of other people -- but this too shall pass. At some point, I'm going to look at the code and wonder what I was thinking. I'm going to see a more elegant solution, and I'm going to see the inadequacies of the code. Indeed, I've already experienced this in part -- but it's been overshadowed by the positive stuff, so I'm not quite at the loathing stage yet.

You are not your fucking code. When its faults become obvious, have the grace to learn instead of being offended.

Thursday, 21 September 2017

NExpect level 3: you are the key component

In previous posts, I've examined how to do simple and collection-based assertions with NExpect.

These have enabled two of the design goals of NExpect:
  • Expect(NExpect).To.Be.Readable();
    • Because code is for co-workers, not compilers. And your tests are part of your documentation.
  • Expect(NExpect).To.Be.Expressive();
    • Because the intent of a test should be easy to understand. The reader can delve into the details when she cares to.

Now, we come on to the third goal, inspired by Jasmine: easy user extension of the testing framework to facilitate expressive testing of more complex concepts.

Most of the "words" in NExpect can be "attached to" with extension methods. So the first question you have to ask is "how do I want to phrase my assertion?". You could use the already-covered .To or .To.Be:

internal static class Matchers
{
  internal static void Odd(
    this IBe<int> be
  )
  {
    be.AddMatcher(actual =>
    {
      var passed = actual % 2 == 1;
      var message = passed
                    ? $"Expected {actual} not to be odd"
                    : $"Expected {actual} to be odd";
      return new MatcherResult(
        passed,
        message
      );
    });
  }
}

The above extension enables the following test:

  [Test]
  public void ILikeOddNumbers()
  {
    // Assert
    Expect(1).To.Be.Odd();
    Expect(2).Not.To.Be.Odd();
  }

There are a few concepts to unpack:

.AddMatcher()

This is how you add a "matcher" (a term borrowed from Jasmine... sorry, I couldn't come up with a better name, so it stuck) to a grammar continuation like .To or .Be. Note that we just create an extension method on IBe<T>, where T is the type you'd like to test against, and the internals of that extension basically just add a matcher. .AddMatcher() takes a Func<T, IMatcherResult>, so your matcher needs to return an IMatcherResult, which is really just a flag indicating whether or not the test passed, and the message to display if the test failed.
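Conceptually -- and this is just a sketch of the idea described above, not the actual NExpect source -- the result contract amounts to little more than:

```csharp
// Sketch only: a matcher result, as described above, is just a
// pass/fail flag plus the message to show when the expectation fails.
public interface IMatcherResult
{
    bool Passed { get; }
    string Message { get; }
}

public class MatcherResult : IMatcherResult
{
    public bool Passed { get; }
    public string Message { get; }

    public MatcherResult(bool passed, string message)
    {
        Passed = passed;
        Message = message;
    }
}
```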

Pass or fail?

This is the heart of your assertion and can be as tricky as you like. Obviously, your matcher could also have multiple exit strategies with specific messages about each possible failure. But the bit that takes a little getting used to is that you're writing a matcher which could be used with .Not in the grammar chain, so you should cater for that eventuality.

Meaningful messages

There's a simple strategy here: get the passed value as if you're doing a positive assertion (ie, as if there is no .Not in the chain) and set your message as follows:
  • If you've "passed", the message will only be shown if the expectation was negated (ie, there's a .Not in the chain), so you need to negate your message (see the first message above)
  • If you've "failed", the message will only be show if the message was not negated, so you need to show the relevant message for that scenario.
It turns out that (mostly), we can write messages like so:

internal static class Matchers
{
  internal static void Odd(
    this IBe<int> be
  )
  {
    be.AddMatcher(actual =>
    {
      var passed = actual % 2 == 1;
      var message = 
        $"Expected {actual} {passed ? "not " : ""}to be odd";
      return new MatcherResult(
        passed,
        message
      );
    });
  }
}

Doing this is tedious enough that NExpect offers a .AsNot() extension for booleans:

internal static class Matchers
{
  internal static void Odd(
    this IBe be
  )
  {
    be.AddMatcher(actual =>
    {
      var passed = actual % 2 == 1;
      var message = 
        $"Expected {actual} {passed.AsNot()}to be odd";
      return new MatcherResult(
        passed,
        message
      );
    });
  }
}
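(If you're curious, .AsNot() presumably amounts to something like the following -- again a sketch of the idea, not the actual NExpect source:)

```csharp
public static class BooleanExtensions
{
    // Maps the "passed" flag to the message fragment used for negation:
    // "not " when the positive assertion passed, empty otherwise.
    public static string AsNot(this bool passed)
    {
        return passed ? "not " : "";
    }
}
```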

Also, NExpect surfaces a convenience extension method for printing values: .Stringify(), which will:
  • print strings with quotes
  • print null values as "null"
  • print objects and collections in a "JSON-like" format
Use it as follows:

internal static class NumberMatchers
{
  internal static void Odd(
    this IBe<int> be
  )
  {
    be.AddMatcher(actual =>
    {
      var passed = actual % 2 == 1;
      var message = 
        $"Expected {actual.Stringify()} {passed.AsNot()}to be odd";
      return new MatcherResult(
        passed,
        message
      );
    });
  }
}

You'll have to think (a little) about your first matcher, but it starts getting easier the more you write (:
Now you can write more meaningful tests like those in the demo project.

The above is fine if, like me, you can see a pattern you'd like to test for really easily (if you have a kind of "matcher-first" mindset), but it does present a minor barrier to entry for those who prefer to write a few tests first and then refactor out common assertions.

Not to worry: NExpect has you covered with .Compose():

internal static class PersonMatchers
{
  internal static void Jane(
    this IA<Person> a
  )
  {
     a.Compose(actual =>
     {
        Expect(actual.Id).To.Equal(1);
        Expect(actual.Name).To.Equal("Jane");
     });
  }
}

// .... and, in a test fixture:
  [Test]
  public void TestJane()
  {
    // Arrange
    var person = new Person() { Id = 1, Name = "Jane", Alive = true };

    // Assert
    Expect(person).To.Be.A.Jane();
  }

.Compose uses [CallerMemberName] to determine the name of your matcher and attempts to throw a useful UnmetExpectationException when one of your sub-expectations fails. You may also provide a Func to .Compose to generate a more meaningful message.

These are some rather simple examples -- I'm sure you can get much more creative! I know I have (:

Some parts of NExpect are simply there to make fluent extension easier, for example:
  • To.Be.A
  • To.Be.An
  • To.Be.For
NExpect will be extended with more "grammar parts" as the need arises. If NExpect is missing a "grammar dangler" that you'd like, open an issue on GitHub.
 

NExpect level 2: testing collections

In a prior post, I covered simple value testing with NExpect. In this post, I'd like to delve into collection assertions, since they are fairly common.

First, the simplest: asserting that a collection contains a desired value:


  [Test]
  public void SimpleContains()
  {
    // Arrange
    var collection = new[] { "a", "b", "c" };
  
    // Assert
    Expect(collection).To.Contain("a");
  }

This is what you would expect from any other assertions framework.

Something has always bothered me about this kind of testing though. In particular, the test above passes just as well as this one:


  [Test]
  public void MultiContains()
  {
    // Arrange
    var collection = new[] { "a", "b", "c", "a" };
  
    // Assert
    Expect(collection).To.Contain("a");
  }

And yet, from where I stand, they are not functionally equivalent -- which makes the test feel a little flaky to me. This is why NExpect didn't even have the above assertion at first. Instead, I was interested in being more specific:

  [Test]
  public void SpecificContains()
  {
    // Arrange
    var collection = new[] { "a", "b", "c", "a" };
  
    // Assert
    Expect(collection).To.Contain.Exactly(1).Equal.To("b");
    Expect(collection).To.Contain.At.Least(1).Equal.To("c");
    Expect(collection).To.Contain.At.Most(2).Equal.To("a");
  }

Now my tests are speaking specifically about what they expect.

Sometimes you just want to test the size of a collection, without caring whether it's an IEnumerable<T>, a List<T> or an array. Other testing frameworks may let you down here, requiring you to write a test against the Count or Length property. That means that when your implementation changes from returning, eg, a List<T> to an array (which may be smart: List<T> is not only a heavier construct, but also implies that you can add to the collection), your tests fail for no really good reason -- your implementation still returns 2 FrobNozzles, so who cares whether the correct property to check is Length or Count? I know that I don't.

That's OK: NExpect takes away that nuance and allows you to spell out what you actually mean:

  [Test]
  public void SizeTest()
  {
    // Arrange
    var collection = new[] { "a", "b", "c" };
    var lonely = new[] { 1 };
    var none = new bool[0];

    // Assert
    Expect(collection).To.Contain.Exactly(3).Items();
    Expect(lonely).To.Contain.Exactly(1).Item();

    Expect(none).To.Contain.No().Items();
    Expect(none).Not.To.Contain.Any().Items();
    Expect(none).To.Be.Empty();
  }

Note that the last three are functionally equivalent. They are just different ways to say the same thing. NExpect is designed to help you express your intent in your tests, and, as such, there may be more than one way to achieve the same goal:

  [Test]
  public void AnyWayYouLikeIt()
  {
    // Assert
    Expect(1).Not.To.Equal(2);
    // ... is exactly equivalent to
    Expect(1).To.Not.Equal(2);

    Expect(3).To.Equal(3);
    // ... is exactly equivalent to
    Expect(3).To.Be.Equal.To(3);
  }

There are bound to be other examples. The point is that NExpect attempts to provide you with the language to write your assertions in a readable manner without enforcing a specific grammar.

Anyway, on with collection testing!

You can test for equality, meaning items match at the same point in the collection (this is not reference equality testing on the collection itself, but equates to reference equality testing on items of class type, or value equality testing on items of struct type):

  [Test]
  public void CollectionEquality()
  {
    // Assert
    Expect(new[] { 1, 2, 3 })
      .To.Be.Equal.To(new[] { 1, 2, 3 });
  }

You can also test out-of-order:

  [Test]
  public void CollectionEquivalence()
  {
    // Assert
    Expect(new[] { 3, 1, 2 })
      .To.Be.Equivalent.To(new[] { 1, 2, 3 });
  }

Which is all nice and dandy if you're testing value types or can do reference equality testing (or at least testing where each object has a .Equals override which does the comparison for you). It doesn't help when you have more complex objects -- but NExpect hasn't forgotten you there: you can do deep equality testing on collections too:

  [Test]
  public void CollectionDeepEquality()
  {
    var input = new[] {
      new Person() { Id = 1, Name = "Jane", Alive = true },
      new Person() { Id = 2, Name = "Bob", Alive = false }
    };
    // Assert
    Expect(input.AsObjects())
      .To.Be.Deep.Equal.To(new[] 
        {
          new { Id = 1, Name = "Jane", Alive = true },
          new { Id = 2, Name = "Bob", Alive = false }
        });
  }

Note that, much like the points on "Whose Line Is It Anyway?", the types don't matter. This is deep equality testing (: However, we did need to "dumb down" the input collection to a collection of objects with the provided .AsObjects() extension method so that the test would compile; otherwise there's a type mismatch at the other end. Still, this is, imo, more convenient than the alternative: item-for-item testing, property-by-property.

The above is incomplete without equivalence, of course:

  [Test]
  public void CollectionDeepEquivalence()
  {
    var input = new[] {
      new Person() { Id = 1, Name = "Jane", Alive = true },
      new Person() { Id = 2, Name = "Bob", Alive = false }
    };
    // Assert
    Expect(input.AsObjects())
      .To.Be.Deep.Equivalent.To(new[] {
        new { Id = 2, Name = "Bob", Alive = false },
        new { Id = 1, Name = "Jane", Alive = true }
      });
  }

And intersections are thrown in for good measure:

  [Test]
  public void CollectionIntersections()
  {
    var input = new[] {
      new Person() { Id = 1, Name = "Jane", Alive = true },
      new Person() { Id = 2, Name = "Bob", Alive = false }
    };
    // Assert
    Expect(input.AsObjects())
      .To.Be.Intersection.Equivalent.To(new[] {
        new { Id = 2, Name = "Bob" },
        new { Id = 1, Name = "Jane" }
      });
    Expect(input.AsObjects())
      .To.Be.Intersection.Equivalent.To(new[] {
        new { Id = 1, Name = "Jane" },
        new { Id = 2, Name = "Bob" }
      });
  }

You can also test with a custom IEqualityComparer<T>:

  [Test]
  public void ContainsWithCustomComparer()
  {
    var input = new[] {
      new Person() { Id = 1, Name = "Jane", Alive = true },
      new Person() { Id = 2, Name = "Bob", Alive = false }
    };
    // Assert
    Expect(input)
      .To.Contain.Exactly(1).Equal.To(
        new Person() { Id = 2, Name = "Bob" }, 
        new PersonEqualityComparer()
    );
  }

or with a quick-and-dirty Func<T, bool>:

  [Test]
  public void ContainsMatchedBy()
  {
    var input = new[] {
      new Person() { Id = 1, Name = "Jane", Alive = true },
      new Person() { Id = 2, Name = "Bob", Alive = false }
    };
    // Assert
    Expect(input)
      .To.Contain.Exactly(1).Matched.By(
        p => p.Id == 1 && p.Name == "Jane"
      );
  }
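The PersonEqualityComparer used above isn't shown in the post; a minimal, hypothetical implementation (comparing by Id and Name only, and assuming the Person type from the earlier examples) might look like:

```csharp
using System.Collections.Generic;

// Assumed shape of Person, matching the examples in this post
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool Alive { get; set; }
}

// Hypothetical comparer: two Persons are "equal" when Id and Name match
public class PersonEqualityComparer : IEqualityComparer<Person>
{
    public bool Equals(Person x, Person y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.Id == y.Id && x.Name == y.Name;
    }

    public int GetHashCode(Person obj)
    {
        return obj.Id.GetHashCode();
    }
}
```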

And all of this is really just the start. The real expressive power of NExpect comes in how you extend it.

But more on that in the next episode (:

NExpect level 1: testing objects and values

I recently introduced NExpect as an alternative assertions library. I thought it might be nice to go through usage, from zero to hero.

NExpect is available for .NET Framework 4.5.2 and above as well as anything which can target .NET Standard 1.6 (tested with .NET Core 2.0)

So here goes, level 1: testing objects and values.

NExpect facilitates assertions (or, as I like to call them: expectations) against basic value types in a fairly unsurprising way:


  [Test]
  public void SimplePositiveExpectations()
  {
    // Arrange
    object obj = null;
    var intValue = 1;
    var trueValue = true;
    var falseValue = false;

    // Assert
    Expect(obj).To.Be.Null();
    Expect(intValue).To.Equal(1);
    Expect(trueValue).To.Be.True();
    Expect(falseValue).To.Be.False();
  }

So far, nothing too exciting or unexpected there. NExpect also caters for negative expectations:

  [Test]
  public void SimpleNegativeExpectations()
  {
    // Arrange
    object obj = new object();
    var intValue = 1;
    var trueValue = true;
    var falseValue = false;

    // Assert
    Expect(obj).Not.To.Be.Null();
    Expect(intValue).Not.To.Equal(2);
    Expect(trueValue).Not.To.Be.False();
    Expect(falseValue).Not.To.Be.True();

    Expect(intValue).To.Be.Greater.Than(0);
    Expect(intValue).To.Be.Less.Than(10);
  }

(Though, in the above, I'm sure we all agree that the boolean expectations are neater without the .Not).

Expectations carry type forward, so you won't be able to, for example:

  [Test]
  public void ExpectationsCarryType()
  {
    Expect(1).To.Equal("a");  // does not compile!
  }

However, expectations around numeric values perform upcasts in much the same way that you'd expect in live code, such that you can:


  [Test]
  public void ShouldUpcastAsRequired()
  {
    // Arrange
    int a = 1;
    byte b = 2;
    uint c = 3;
    long d = 4;

    // Assert 
    Expect(b).To.Be.Greater.Than(a);
    Expect(c).To.Be.Greater.Than(b);
    Expect(d).To.Be.Greater.Than(a);
  }

All good and well, but often we need to check that a more complex object has a bunch of expected properties.

.Equal is obviously going to do reference-equality testing for class types and value equality testing for struct types. We could:

  [Test]
  public void TestingPropertiesOneByOne()
  {
    // Arrange
    var person = new {
      Id = 1,
      Name = "Jane",
      Alive = true
    };

    // Assert
    Expect(person.Id).To.Equal(1);
    Expect(person.Name).To.Equal("Jane");
    Expect(person.Alive).To.Be.True();
  }

But that kind of test, whilst perfectly accurate, comes with a cognitive overhead for the reader. OK, perhaps not much overhead in this example, but imagine if that person had come from another method:

  [Test]
  public void TestingPropertiesOneByOne()
  {
    // Arrange
    var sut = new PersonRepository();
    
    // Act
    var person = sut.FindById(1);

    // Assert
    Expect(person).Not.To.Be.Null();
    Expect(person.Id).To.Equal(1);
    Expect(person.Name).To.Equal("Jane");
    Expect(person.Alive).To.Be.True();
  }

In this case, we'd expect the result to also have a defined type, not some anonymous type. It would be super-convenient if we could do deep equality testing. Which we can:

  [Test]
  public void DeepEqualityTesting()
  {
    // Arrange
    var sut = new PersonRepository();
    
    // Act
    var person = sut.FindById(1);

    // Assert
    Expect(person).To.Deep.Equal(new {
      Id = 1,
      Name = "Jane",
      Alive = true
    });
  }

This exposes our test for what it's really doing: when searching for the person with the Id of 1, we should get back an object which describes Jane in our system. Our test is speaking about intent, not just confirming value equality. Notice that the type of the object used for comparison doesn't matter, and this holds for properties too.

Note that I omitted the test for null in the second variant. You don't need it because the deep equality tester will deal with that just fine. However, you are obviously still free to include it for the sake of clarity.

NExpect gets this "for free" by depending on a git submodule of PeanutButter and importing only the bits it needs. In this way, I can re-use well-tested code and consumers don't have to depend on another Nuget package. Seems like a win to me.

What if we didn't care about all of the properties? What if we only cared about, for example, Name and Id. A dead Jane is still a Jane, right?

NExpect has you covered:

  [Test]
  public void IntersectionEqualityTesting()
  {
    // Arrange
    var sut = new PersonRepository();
    
    // Act
    var person = sut.FindById(1);

    // Assert
    Expect(person).To.Intersection.Equal(new {
      Id = 1,
      Name = "Jane"
    });
  }


We can also test the type of a returned object:

  [Test]
  public void TypeTesting()
  {
    // Arrange
    var sut = new PersonRepository();

    // Act
    var person = sut.FindById(1);

    // Assert
    Expect(person).To.Be.An.Instance.Of<Person>();
    Expect(person).To.Be.An.Instance.Of<IPerson>();
    Expect(person).To.Be.An.Instance.Of<BaseEntity>();
  }

We can test for the exact type (Person), implemented interfaces (IPerson) and base types (BaseEntity).

Next, we'll explore simple collection testing. Tune in for Level 2!

Monday, 18 September 2017

Fluent, descriptive testing with NExpect

About a year or so ago, I discovered AssertionHelper, a base class provided by NUnit which allowed for a more familiar style of testing when one has to bounce back and forth between (Java|Type)Script and C#. Basically, it allows one to use the Expect keyword to start an assertion, eg:

[TestFixture]
public class TestSystem: AssertionHelper
{
  [Test]
  public void TestSomething()
  {
    Expect(true, Is.True);
  }
}

And, for a while, that sufficed. But there were some aspects of this which bothered me:
  1. This was accomplished via inheritance, which just seems like "not the right way to do it".
    There have been times that I've used inheritance with test fixtures -- specifically for testing different implementations of the same interface, and also when I wrote a base class which made EF-based testing more convenient.
    Having to inherit from AssertionHelper means that I have had to push that inheritance further down, or lose the syntax.
  2. The tenses are wrong: Expect is future-tense, and Is.True is present-tense. Now, I happen to like the future-tensed Expect syntax -- it really falls in line with writing your test first:
    1. I write code to set up the test
    2. I write code to run the system-under-test
    3. I expect some results
    4. I run the test
    5. It fails!
    6. I write the code
    7. I re-run the test
    8. It passes! (and if not, I go back to #7 and practice my sad-panda face)
    9. I refactor
A few months after I started using it, a bigger bother arrived: the NUnit team was deprecating AssertionHelper because they didn't think that it was used enough in the community to warrant maintenance. A healthy discussion ensued, wherein I offered to maintain AssertionHelper and, whilst no-one objected, the discussion seemed to be moth-balled a little (things may have changed by now). Nevertheless, my code still spewed warnings and I hate warnings. I suppressed them for a while with R# comments and #pragma, but I couldn't deal -- I kept seeing them creep back in again with new test fixtures.

This led me to the first-pass: NUnit.StaticExpect where I'd realised that the existing AssertionHelper syntax could be accomplished via
  1. A very thin wrapper around Assert.That using a static class with static methods
  2. C#'s using static
This meant that the code above could become:

using static NUnit.StaticExpect.Expectations;

[TestFixture]
public class TestSystem
{
  [Test]
  public void TestSomething()
  {
    Expect(true, Is.True);
  }
}
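Under the hood, such a wrapper needs little more than the following -- a sketch of the idea, not the actual NUnit.StaticExpect source:

```csharp
using NUnit.Framework;
using NUnit.Framework.Constraints;

// Sketch: static Expect methods which simply delegate to Assert.That,
// so that `using static` gives you the Expect(...) syntax for free.
public static class Expectations
{
    public static void Expect(object actual, IResolveConstraint expression)
    {
        Assert.That(actual, expression);
    }

    public static void Expect(bool condition)
    {
        Assert.That(condition, Is.True);
    }
}
```

This depends on the NUnit package, of course, since it is wrapping NUnit's own constraint model.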

Which was better in that:
  1. I didn't have the warning about using the Obsoleted AssertionHelper
  2. I didn't have to inherit from AssertionHelper
But there was still that odd future-present tense mix. So I started hacking about on NExpect.

NExpect states as its primary goals that it wants to be:
  • Readable
    • Tests are easier to digest when they read like a story. Come to think of it, most code is. Code has two target audiences: the compiler and your co-workers (which includes you). The compiler is better at discerning meaning in a glob of logic than a human being is, which is why we try to write expressive code. It's why so many programming languages have evolved as people have sought to express their intent better.
      Your tests also form part of your documentation -- for that one target audience: developers.
  • Expressive
    • Because the intent of a test should be easy to understand. The reader can delve into the details when she cares to.
      Tests should express their intention. A block of assertions proving that a bunch of fields on one object match those on another may be technically correct and useful, but it obscures the intent. Are we trying to prove that the object is a valid FrobNozzle? It would be nice if the test could say so.
  • Extensible
    • Because whilst pulling out a method like AssertIsAFrobNozzle is a good start, I was enamoured with the Jasmine way, along the lines of:

      expect(result).toBeAFrobNozzle();

      Which also negated well:

      expect(result).not.toBeAFrobNozzle();


In NExpect, you can write an extension method FrobNozzle(), dangling off of IA<T>, and write something like:
Expect(result).To.Be.A.FrobNozzle();
// or, negated
Expect(result).Not.To.Be.A.FrobNozzle();
// or, negated alternative
Expect(result).To.Not.Be.A.FrobNozzle();

The result is something which is still evolving, but is already quite powerful and useful -- and trivial to extend. I suggest checking out the demo project I made showing the evolution:
  1. from "olde" testing (Assert.AreEqual), 
  2. through the better, new Assert.That syntax of NUnit (which is quite expressive, but I really, really want to Expect and I want to be able to extend my assertions language, two features I can't get with Assert.That)
  3. through expression via AssertionHelper
  4. then NUnit.StaticExpect and finally 
  5. NExpect, including some examples of basic "matchers" (language borrowed from Jasmine): extension methods which make the tests read easier and are easy to create and re-use.
For the best effect, clone the project, reset back to the first commit and "play through" the commits.
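To make the extensibility point concrete, here's a sketch of what such a matcher might look like. This is a hedged example, not copied from the demo project: it assumes NExpect's AddMatcher helper and MatcherResult type (check the repository for the current signatures), and FrobNozzle is a hypothetical domain class invented for illustration.

using NExpect.Interfaces;
using NExpect.MatcherLogic;

// Hypothetical domain type, purely for illustration
public class FrobNozzle { }

public static class FrobNozzleMatchers
{
    // Dangles off IA<object> so it composes with To.Be.A / Not.To.Be.A
    public static void FrobNozzle(this IA<object> a)
    {
        a.AddMatcher(actual =>
        {
            var passed = actual is FrobNozzle;
            return new MatcherResult(
                passed,
                $"Expected {actual} {(passed ? "not " : "")}to be a FrobNozzle");
        });
    }
}

Because the matcher hangs off the continuation interface, both Expect(result).To.Be.A.FrobNozzle() and the negated forms come along for free.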

NExpect has extensibility inspired by Jasmine and a syntax inspired by Chai (which is a little more "dotty" than Jasmine).

I've also had some great contributions from Cobus Smit, a co-worker at Chillisoft, who has helped not only by extending the NExpect language, but also through trial-by-fire usage in his project.


This week in PeanutButter

Nothing major, really -- two bugfixes, which may or may not be of interest:

  1. PropertyAssert.AreEqual allows for comparison of nullable and non-nullable values of the same underlying type -- which especially makes sense when the actual value being tested (eg nullable int) is being compared with some concrete value (eg int). 
  2. Fix for the object extension DeepClone() -- some production code showed that Enum values weren't being copied correctly. So that's fixed now.
    If you're wondering what this is, DeepClone() is an extension method on all objects to provide a copy of that object, deep-cloned (so all reference types are new instances and all value types are copied), much like underscore's _.cloneDeep() for JavaScript. This can be useful for comparing a "before" and "after" from a bunch of mutations, especially using the DeepEqualityTester from PeanutButter.Utils or the object extension DeepEquals(), which does deep equality testing, much like you'd expect.
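As a rough sketch of the "before and after" pattern described above -- assuming the DeepClone() and DeepEquals() extensions from PeanutButter.Utils behave as described, with Person being a hypothetical type for illustration:

using PeanutButter.Utils;

public class Person
{
    public string Name { get; set; }
    public int? Age { get; set; }
}

public class SnapshotExample
{
    public bool HasChanged()
    {
        var original = new Person { Name = "Mary", Age = 42 };
        var snapshot = original.DeepClone();   // independent deep copy

        original.Age = 43;                     // mutate the original

        // The snapshot is unaffected; DeepEquals walks all properties,
        // so this reports whether the mutation actually changed anything
        return !original.DeepEquals(snapshot);
    }
}

The same shape works for asserting that a unit of work left an object untouched: clone first, act, then deep-compare.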

There's also been some assertion upgrading -- PeanutButter consumes, and helps to drive, NExpect, an assertions library modelled after Chai for syntax and Jasmine for user-space extensibility. Head on over to GitHub to check it out -- though it's probably time I wrote something here about it (:

Friday, 4 August 2017

This week in PeanutButter

Ok, so I'm going to give this a go: (semi-)regularly blogging about updates to PeanutButter in the hopes that perhaps someone sees something useful that might help out in their daily code. Also so I can just say "read my blog" instead of telling everyone manually ^_^

So this week in PeanutButter, some things have happened:

  • DeepEqualityTester fuzzes a little on numerics -- so you can compare numerics of equal value and different type correctly (ie, (int)2 == (decimal)2). This affects the {object}.DeepEquals() and PropertyAssert.AreDeepEqual() methods.
  • DeepEqualityTester can compare fields now too. PropertyAssert.AreDeepEqual will not use this feature (hey, the name is PropertyAssert!), but {object}.DeepEquals() will, by default -- though you can disable this.
  • DuckTyper could duck Dictionaries to interfaces and objects to interfaces -- but now will also duck objects with Dictionary properties to their respective interfaces where possible.
  • String extensions for convenience:
    • ToKebabCase()
    • ToPascalCase()
    • ToSnakeCase()
    • ToCamelCase()
  • DefaultDictionary<TKey, TValue> - much like Python's defaultdict, this provides a dictionary where you give a strategy for what to return when a key is not present. So a DefaultDictionary<string, bool> could have a default value of true or false instead of throwing exceptions on unknown keys.
  • MergeDictionary<TKey, TValue> - provides a read-only "view" on a collection of similarly typed IDictionary<TKey, TValue> objects, resolving values from the first dictionary they are found in. Coupled with DefaultDictionary<TKey, TValue>, you can create layered configurations with fallback values.
  • DuckTyping can duck from string values to enums -- and I'm quite sure the reverse (enum to string) will come for cheap-or-free. So there's that (:
You might use this when you have, for example, a Web.Config with a config property "Priority" and you would like to end up with an interface like:

public enum Priorities
{
  Low,
  Medium,
  High
}
public interface IConfig
{
  Priorities DefaultPriority { get; }
}


And a Web.Config line like:
<appSettings>
  <add key="DefaultPriority" value="Medium" />
</appSettings>


Then you could, somewhere in your code (perhaps in your IOC bootstrapper) do:
 var config = WebConfigurationManager.AppSettings.DuckAs<IConfig>();

(This already works for string values, but enums are nearly there (:). You can also use FuzzyDuckAs<T>(), which will allow type mismatching (to a degree; eg a string-backed field can be surfaced as an int) and will also give you freedom with your key names: whitespace and casing don't matter (along with punctuation). (Fuzzy)DuckAs<T>() also has options for key prefixing (so you can have "sections" of settings, with a prefix, like "web.{setting}" and "database.{setting}"). But all of that isn't really from this week -- it's just useful for lazy devs like me (:
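Tying the two dictionary types together, here's a hedged sketch of the layered-configuration idea mentioned above. The constructor shapes are assumptions (check the PeanutButter source for the actual signatures): DefaultDictionary taking a default-value strategy, and MergeDictionary resolving each key from the first dictionary that contains it.

using System.Collections.Generic;
using PeanutButter.Utils.Dictionaries; // assumed namespace

public static class LayeredConfigExample
{
    public static string ResolveTimeout()
    {
        // Highest-priority layer: explicit overrides
        var overrides = new Dictionary<string, string>
        {
            ["timeout"] = "30"
        };

        // Fallback layer: unknown keys yield "(unset)" instead of throwing
        var defaults = new DefaultDictionary<string, string>(() => "(unset)");
        defaults["timeout"] = "10";
        defaults["retries"] = "3";

        // Read-only view; earlier dictionaries win
        var config = new MergeDictionary<string, string>(overrides, defaults);
        return config["timeout"]; // resolved from the overrides layer
    }
}

Lookups for "retries" would fall through to the defaults layer, and anything else would hit the DefaultDictionary strategy instead of throwing a KeyNotFoundException.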
