Saturday, 25 August 2018

PeanutButter.RandomValueGen: the builder pattern & random generation for testing purposes

If the post doesn't load properly, you can check it out here: https://github.com/fluffynuts/blog/blob/master/20180825.md
Thursday, 12 April 2018
# What's in `PeanutButter.Utils`, exactly?
`PeanutButter.Utils` is a package which pretty-much evolved as I had common problems that I was solving day-to-day. People joining a team that I was working on would be exposed to bits of it and, like a virus, those bits would propagate across other code-bases. Some people asked for documentation, which I answered with a middle-ground of `xmldoc`, which most agreed was good enough. People around me got to know of the more useful bits in `PeanutButter.Utils` or would ask me questions like "Does PeanutButter.Utils have something which can do [X]?". I kind of took the ubiquity amongst my team-mates for granted.
Fast-forward a little bit, and I've moved on to another company, where people don't know anything about the time-savers in `PeanutButter.Utils` -- and it occurs to me that the same probably applies to most people -- so I thought it might be worthwhile to have some kind of primer on what you can expect to find in there. An introduction, if you will. I think there's enough to break the content down into sections, so we can start with:
## Disposables
One of the patterns I like most in the .net world is that of `IDisposable`. It's a neat way to ensure that something happens at the end of a block of code irrespective of what goes on _inside_ that code. The code could throw or return early -- it doesn't matter: whatever happens in the `Dispose` method of the `IDisposable` declared at the top of a `using` block will be run. Usually, we use this for clearing up managed resources (eg on database connections), but it struck me that there were some other convenient places to use it. Most generically, if you wanted to run something simple at the start of a block of code and run something else at the end (think of toggling something on for the duration of a block of code), you could use the convenient `AutoResetter` class:
```csharp
using (new AutoResetter(
    () => ToggleFeatureOn(),
    () => ToggleFeatureOff()))
{
    // code inside here has the feature toggled on
}
// code over here doesn't -- and the feature is
// toggled back off again even if the code
// above throws an exception.
```
It's very simple -- but it means that you can get the functionality of an `IDisposable` by writing two little lambda methods.
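Under the hood there isn't much magic to a class like this. A minimal sketch of the idea (not the actual PeanutButter source) might look like:

```csharp
using System;

// Minimal sketch: run one action on construction, another on disposal,
// so the second action runs even if the using-block throws.
public class SimpleResetter : IDisposable
{
    private readonly Action _onDispose;

    public SimpleResetter(Action onStart, Action onDispose)
    {
        _onDispose = onDispose;
        onStart();
    }

    public void Dispose()
    {
        _onDispose();
    }
}
```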
You can also have a variant where the result from the first lambda is fed into the second:
```csharp
using (new AutoResetter(
    () => GetCounterAndResetToZero(),
    originalCount => ResetCounterTo(originalCount)))
{
    // counter is zero here
}
// counter is reset to original value here
```
Cool.
Other common problems that can be solved with `IDisposable` are:
### Ensuring mutexes / semaphores are reset, even if an exception is encountered
For this, we can use `AutoLocker`:
```csharp
using (new AutoLocker(someMutex))
{
    // the mutex is held here and released on disposal,
    // even if this block throws
}

using (new AutoLocker(someSemaphore))
{
    // same idea for a Semaphore...
}

using (new AutoLocker(someSemaphoreLite))
{
    // ... and for a SemaphoreSlim
}
```
### Temporary files in tests
```csharp
using (var tempFile = new AutoTempFile())
{
    File.WriteAllBytes(
        tempFile.Path,
        Encoding.UTF8.GetBytes("moo, said the cow")
    );
    // we can run testing code against the file here
}
// file is gone here, like magick!
```
This uses the `Path.GetTempFileName()` system call by default -- so you don't have to care about where the file actually exists. Of course, there are constructor overloads to:
- create the file populated with data (string or bytes)
- create the file in a different location (not the system temp data location)
- create the file with a specific name
`AutoTempFile` also exposes the file's contents via properties:
- `StringData` for string contents
- `BinaryData` for the contents as a `byte[]`
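Putting those overloads and properties together, usage might look something like this (a sketch: I'm assuming the string-content constructor overload here, so check the xmldoc for the exact shapes):

```csharp
using (var tempFile = new AutoTempFile("moo, said the cow"))
{
    // read the contents back via the convenience properties
    Console.WriteLine(tempFile.StringData);
    var raw = tempFile.BinaryData; // the same content, as bytes
}
// the file is deleted again here
```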
There is also an `AutoTempFolder` if you want a scratch area to work in for a period of time. When it is disposed, it and all its contents are deleted.
Similarly, `AutoDeleter` is an `IDisposable` which can keep track of multiple files you'd like to delete when it is disposed:
```csharp
using (var deleter = new AutoDeleter())
{
    // some files are created, then we can do:
    deleter.Add(@"C:\Some\File");
    deleter.Add(@"C:\Some\Other\File");
}
// and here, those files are deleted. If they can't be
// deleted, (eg they are locked by some process),
// then the error is quietly suppressed.
// `AutoDeleter` works for folders too.
```
### Other disposables
As much as I love the `using` pattern, it can lead to some "arrow code", like this venerable ADO.NET code:
```csharp
using (var conn = CreateDbConnection())
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "select * from users";
        using (var reader = cmd.ExecuteReader())
        {
            // read from the reader here
        }
    }
}
```
Sure, many people don't use ADO.NET "raw" like this any more -- it's just an easy example which comes to mind. I've seen far worse "nest-denting" of `using` blocks too.
This can be flattened out a bit with `AutoDisposer`:
```csharp
using (var disposer = new AutoDisposer())
{
    var conn = disposer.Add(CreateDbConnection());
    conn.Open();
    var cmd = disposer.Add(conn.CreateCommand());
    cmd.CommandText = "select * from users";
    var reader = disposer.Add(cmd.ExecuteReader());
    // read from the db here
}
// reader is disposed
// cmd is disposed
// conn is disposed
```
`AutoDisposer` disposes of items in reverse-order in case of any disposing dependencies.
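The reverse-order behaviour is easy to picture with a minimal sketch (again, not the actual PeanutButter implementation):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch: collect disposables as they are created and
// dispose them in reverse order when the disposer is disposed.
public class SimpleDisposer : IDisposable
{
    private readonly List<IDisposable> _items = new List<IDisposable>();

    public T Add<T>(T item) where T : IDisposable
    {
        _items.Add(item);
        return item; // hand the item back so it can be assigned inline
    }

    public void Dispose()
    {
        // last-added (most dependent) items are disposed first
        for (var i = _items.Count - 1; i >= 0; i--)
        {
            _items[i].Dispose();
        }
    }
}
```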
So that's part 1 of "What's in `PeanutButter.Utils`?". There are other interesting bits, like:
- extension methods to:
  - make some operations more convenient
  - do conversions
  - work with `Stream` objects
  - provide `DateTime` utilities
  - facilitate more functional code (eg `.ForEach` for collections)
  - use async lambdas in your LINQ (`SelectAsync` and `WhereAsync`)
  - test and manipulate strings
- the `DeepEqualityTester`, which is at the heart of `NExpect`'s `.Deep` and `.Intersection` equality testing
- `MemberExpression` helpers
- Reflection tidbits
- reading and writing arbitrary metadata for any object you encounter (think of it like adding property data to any object)
- some pythonic methods (`Range` and a (imo) more useful `Zip` than the one bundled in LINQ)
- dictionaries:
  - `DictionaryWrappingObject` lets you treat any object like you would in Javascript, with string property indexes
  - `DefaultDictionary` returns default values for unknown keys
  - `MergeDictionary` allows layering multiple dictionaries into one "view"
  - `CaseWarpingDictionary` provides a decorator dictionary for when the dictionary you have does indexing with inconvenient case rules
I hope to tackle these in individual posts (:
Monday, 6 November 2017
What's new in PeanutButter
I realise that it's been a while (again) since I've posted an update about new things in PeanutButter (GitHub, Nuget). I've been slack!

Here are the more interesting changes:

- `PeanutButter.Utils` now has a `netstandard2.0` target in the package, so you can use those tasty, tasty utils from `netcore2.0`.
- This facilitated adding a `netstandard2.0` target for `PeanutButter.RandomGenerators` -- so now you can use the `GenericBuilder` and `RandomValueGen`, even on complex types. Yay! Spend more time writing interesting code and less time thinking of a name for your `Person` object (:
- Many fixes to `DeepClone()`, the extension method for objects, which gives you back a deep-cloned version of the original, including support for collections and cloning the object's actual type, not the type inferred by the generic usage -- so a downcast object is properly cloned now.
- `DeepEqualityTester` can be used to test shapes of objects, without caring about actual values -- simply set `OnlyCompareShape` to true. Combined with setting `FailOnMissingProperties` to false, a deep equality test between an instance of your model and an instance of an api model can let you know if your models have at least the required fields to satisfy the target api.
- Random object generation has always been able to use `NSubstitute` to generate instances of interfaces, but now also supports using `PeanutButter.DuckTyping`'s `Create.InstanceOf<T>()` to create your random instances. Both strategies are determined at runtime from available assemblies -- there are no hard dependencies on `NSubstitute` or `PeanutButter.DuckTyping`.
- More async IEnumerable extensions (which allow you to await on a Linq query with async lambdas -- but bear in mind that these won't be translated to, for example, Linq-to-Sql code to run at the server-side):
  - `SelectAsync`
  - `ToArrayAsync`
  - `WhereAsync`
  - `AggregateAsync`

So, yeah, stuff happened (:
Thursday, 21 September 2017
NExpect level 3: you are the key component
If the post doesn't load properly, you can check it out here:
https://raw.githubusercontent.com/fluffynuts/blog/master/20170917_NExpectLevel3.md
NExpect level 2: testing collections
If the post doesn't load properly, you can check it out here:
https://raw.githubusercontent.com/fluffynuts/blog/master/20170917_NExpectLevel2.md
NExpect level 1: testing objects and values
If the post doesn't load properly, you can check it out here:
https://raw.githubusercontent.com/fluffynuts/blog/master/20170917_NExpectLevel1.md
Monday, 18 September 2017
Fluent, descriptive testing with NExpect
If the post doesn't load properly, you can check it out here:
https://github.com/fluffynuts/blog/blob/master/20180918_IntroducingNExpect.md
This week in PeanutButter
Nothing major, really -- two bugfixes, which may or may not be of interest:

- `PropertyAssert.AreEqual` allows for comparison of nullable and non-nullable values of the same underlying type -- which especially makes sense when the actual value being tested (eg nullable int) is being compared with some concrete value (eg int).
- Fix for the object extension `DeepClone()` -- some production code showed that Enum values weren't being copied correctly. So that's fixed now.

If you're wondering what this is, `DeepClone()` is an extension method on all objects to provide a copy of that object, deep-cloned (so all reference types are new instances, all value types are copied), much like underscore's `_.cloneDeep()` for Javascript. This can be useful for comparing a "before" and "after" from a bunch of mutations, especially using the `DeepEqualityTester` from `PeanutButter.Utils` or the object extension `DeepEquals()`, which does deep equality testing, much like you'd expect.
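A quick sketch of how that combination might be used (the `Person` class here is hypothetical; `DeepClone()` and `DeepEquals()` come from `PeanutButter.Utils`):

```csharp
using System;
using System.Collections.Generic;
using PeanutButter.Utils;

public class Person
{
    public string Name { get; set; }
    public List<string> Pets { get; set; }
}

// take a snapshot, mutate the original, then compare
var original = new Person { Name = "Mary", Pets = new List<string> { "Rex" } };
var snapshot = original.DeepClone(); // fully independent copy
original.Pets.Add("Felix");          // does not affect the snapshot
var same = snapshot.DeepEquals(original); // false: the pet lists now differ
```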
There's also been some assertion upgrading -- PeanutButter consumes, and helps to drive, NExpect, an assertions library modelled after Chai for syntax and Jasmine for user-space extensibility. Head on over to GitHub to check it out -- though it's probably time I wrote something here about it (:
Friday, 4 August 2017
This week in PeanutButter
Ok, so I'm going to give this a go: (semi-)regularly blogging about updates to PeanutButter in the hopes that perhaps someone sees something useful that might help out in their daily code. Also so I can just say "read my blog" instead of telling everyone manually ^_^
So this week in PeanutButter, some things have happened:
- `DeepEqualityTester` fuzzes a little on numerics -- so you can compare numerics of equal value and different type correctly (ie `(int)2 == (decimal)2`). This affects the `{object}.DeepEquals()` and `PropertyAssert.AreDeepEqual()` methods.
- `DeepEqualityTester` can compare fields now too. `PropertyAssert.AreDeepEqual()` will not use this feature (hey, the name is PropertyAssert!), but `{object}.DeepEquals()` will, by default -- though you can disable this.
- `DuckTyper` could duck Dictionaries to interfaces and objects to interfaces -- but now will also duck objects with Dictionary properties to their respective interfaces where possible.
- String extensions for convenience:
  - `ToKebabCase()`
  - `ToPascalCase()`
  - `ToSnakeCase()`
  - `ToCamelCase()`
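For example (a sketch; the exact edge-case handling, eg around digits, is best checked against the xmldoc):

```csharp
using PeanutButter.Utils;

// convert between naming conventions with the string extensions
var kebab = "MooCow".ToKebabCase();
var pascal = "moo_cow".ToPascalCase();
var snake = "MooCow".ToSnakeCase();
var camel = "MooCow".ToCamelCase();
```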
- `DefaultDictionary<TKey, TValue>` -- much like Python's `defaultdict`, this provides a dictionary where you give a strategy for what to return when a key is not present. So a `DefaultDictionary<string, bool>` could have a default value of true or false instead of throwing exceptions on unknown keys.
- `MergeDictionary<TKey, TValue>` -- provides a read-only "view" over a collection of similarly-typed `IDictionary<TKey, TValue>` objects, resolving values from the first dictionary they are found in. Coupled with `DefaultDictionary<TKey, TValue>`, you can create layered configurations with fallback values.
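Layered configuration might then look something like this (a sketch from memory -- the namespace and constructor shapes are worth verifying against the package):

```csharp
using System;
using System.Collections.Generic;
using PeanutButter.Utils.Dictionaries;

// user settings win; system settings fill in; the default layer
// answers for anything neither of the others knows about
var user = new Dictionary<string, string> { ["theme"] = "dark" };
var system = new Dictionary<string, string> { ["theme"] = "light", ["lang"] = "en" };
var fallback = new DefaultDictionary<string, string>(() => "");
var config = new MergeDictionary<string, string>(user, system, fallback);
Console.WriteLine(config["theme"]); // "dark" -- first layer wins
Console.WriteLine(config["lang"]);  // "en" -- resolved from the second layer
```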
- DuckTyping can duck from string values to enums. So, given an enum and a config interface like:

```csharp
public enum Priorities
{
    Low,
    Medium,
    High
}

public interface IConfig
{
    Priorities DefaultPriority { get; }
}
```
And a Web.Config line like:
```xml
<appSettings>
    <add key="DefaultPriority" value="Medium" />
</appSettings>
```
Then you could, somewhere in your code (perhaps in your IOC bootstrapper) do:
```csharp
var config = WebConfigurationManager.AppSettings.DuckAs<IConfig>();
```
(This already works for string values, but enums are nearly there (:). You can also use FuzzyDuckAs<T>(), which will allow type mismatching (to a degree; eg a string-backed field can be surfaced as an int) and will also give you freedom with your key names: whitespace, casing and punctuation don't matter. (Fuzzy)DuckAs<T>() also has options for key prefixing (so you can have "sections" of settings, with a prefix, like "web.{setting}" and "database.{setting}"). But all of that isn't really from this week -- it's just useful for lazy devs like me (:
Saturday, 25 March 2017
C# Evolution
(and why you should care)
I may be a bit of a programming language nut. I find different languages interesting not just because they are semantically different or because they offer different features or even because they just look pretty (I'm lookin' at you, Python), but because they can teach us new things about the languages we already use.
I'm of the opinion that no technology with any staying power has no value to offer. In other words, if the tech has managed to stay around for some time, there must be something there. You can hate on VB6 as much as you want, but there has to be something there to have made it dominate desktop programming for the years that it did. That's not the focus of this discussion though.
Similarly, when new languages emerge, instead of just rolling my eyes and uttering something along the lines of "Another programming language? Why? And who honestly cares?", I prefer to take a step back and have a good long look. Creating a new language and the ecosystem required to interpret or compile that language is a non-trivial task. It takes a lot of time and effort, so even when a language seems completely impractical, I like to consider why someone might spend the time creating it. Sure, there are outliers where the only reason is "because I could" (I'm sure Shakespeare has to be one of those) or "because I hate everyone else" (Brainfuck, Whitespace and my favorite to troll on, Perl -- I jest, because Perl is a powerhouse; I just haven't ever seen a program written in Perl which didn't make me cringe, though Larry promises that is supposed to change with Perl 6).
Most languages are born because someone found the language they were dealing with was getting in the way of getting things done. A great example is Go, which was dreamed up by a bunch of smart programmers who had been doing this programming thing for a while and really just wanted to make an ecosystem which would help them to get stuff done in a multi-core world without having to watch out for silly shit like deadlocks and parallel memory management. Not that you can't hurt yourself in even the most well-designed language (indeed, if you definitely can't hurt yourself in the language you're using, you're probably bound and gagged by it -- but I'm not here to judge what you're into).
Along the same lines, it's interesting to watch the evolution of languages, especially as languages evolve out of fairly mundane, suit-and-tie beginnings. I feel like C# has done that -- and will continue to do so. Yes, a lot of cool features aren't unique or even original -- deconstruction in C#7 has been around for ages in Python, for example -- but that doesn't make them any less valuable.
I'm going to skip some of the earlier iterations and start at where I think it's interesting: C#5. Please note that this post is (obviously) opinion. Whilst I'll try to cover all of the feature changes that I can dig up, I'm most likely going to focus on the ones which I think provide the programmer the most benefit.
C#5
C#5 brought async features and caller information. Let's examine the latter before I offer up what will probably be an unpopular opinion on the former.
Caller information allowed providing attributes on optional functional parameters such that if the caller didn't provide a value, the called code could glean
- Caller file path
- Caller line number
- Caller member name
This is a bonus for logging and debugging of complex systems, but also allowed tricks whereby the name of a property could be automagically passed into, for example, an MVVM framework method call in your WPF app. This makes refactor-renaming easier and removes some magic strings, not to mention making debug logging way, way simpler. Not ground-breaking, but certainly [CallerMemberName] became a friend of the WPF developer. A better solution for the exact problem of property names and framework calls came with nameof in C#6, but the CallerMember* attributes are still a good thing.
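The classic use in a WPF view-model looks something like this:

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

public class ViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            OnPropertyChanged(); // the compiler supplies "Name" here
        }
    }

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```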
C# 5 also brought `async/await`. On the surface, this seems like a great idea. If only C#'s `async/await` worked anything like the async/await in Typescript, where, under the hood, there's just a regular old promise and no hidden bullshit. C#'s `async/await` looks like it's just going to be Tasks and some compiler magic under the hood, but there's that attached context which comes back to bite asses more often than a mosquito with a scat fetish. There are some times when things will do exactly what you expect and other times when they won't, just because you don't know enough about the underlying system.

That's the biggest problem with `async/await`, in my opinion: it looks like syntactic sugar to take away the pain of parallel computing from the unwashed masses, but ends up being trickier than having to learn how to do actual multi-threading. There's also the fickle task scheduler, which may decide 8 cores doesn't mean 8 parallel tasks -- but that's OK: you can swap that out for your own task scheduler... as long as you understand enough of the underlying system, again (as this test code demonstrates). Like many problems that arise out of async / parallel programming, tracking down the cause of sporadic issues in code is non-trivial. I had a bit of code which would always fail the first time, and then work like a charm -- until I figured out it was the context that was causing issues, so I forced a null context and the problem stopped. The developer has to start learning about task continuation options and start caring about how external libraries do things. And many developers aren't even aware of when it's appropriate to use `async/await`, opting to use it everywhere and just add overhead to something which really didn't need to be async, like most web api controllers. `Async/await` makes a lot of sense in GUI applications though.

Having `async/await` around IO-bound stuff in web api calls may be good for high-volume web requests because it allows the worker thread to be re-assigned to handle another request, though I have yet to see an actual benchmark showing better web performance from simply switching to `async/await` for all calls. The time you probably most want to use it is for concurrent IO, to shorten the overall request time for a single request. Some thought has to go into this though -- handling too many requests concurrently may just end up with many requests timing out, instead of a few requests being given a quick 503, indicating that the application needs some help with scaling. In other words, simply peppering your code with `async/await` could result in no net gain, especially if the code being awaited is hitting the same resource (eg your database).

Which leads to the second part of C#'s `async/await` that I hate: the async zombie apocalypse. Because asking the result of a method marked async to just `.Wait()` is suicide (I hope you know it is; please, please, please don't do that), `async/await` patterns tend to propagate throughout code until everything is awaiting some async function. It's the Walking Dead, traipsing through your code, leaving little `async/await` turds wherever they go. You can use `ConfigureAwait()` to get around deadlocking on the context selected for your async code -- but you must remember to apply it to all async results if you're trying to "un-async" a block of code. You can also set the synchronization context (I suggest the useful value of null). Like the former workaround, it's hardly elegant.

As much as I hate (fear?) `async/await`, there are places where it makes the code clearer and easier to deal with. Mostly in desktop applications, under event-handling code. It's not all bad, but the concept has been over-hyped and over-used (and poorly at that -- like EF's `SaveChangesAsync()`, which you might think, being async, is thread-safe, and you'd be DEAD WRONG).
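For reference, the `ConfigureAwait()` workaround mentioned above looks like this (a sketch; the method and URL are placeholders):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class Fetcher
{
    public static async Task<string> FetchAsync(string url)
    {
        using (var client = new HttpClient())
        {
            // ConfigureAwait(false): don't capture and resume on the
            // calling context, which side-steps the classic deadlock
            // when a caller blocks on this task with .Result or .Wait()
            var response = await client.GetAsync(url).ConfigureAwait(false);
            return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
        }
    }
}
```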
Let's leave it here: use `async/await` when it provides obvious value. Question its use at every turn. For every person who loves `async/await`, there's a virtual alphabet soup of blogs explaining how to get around some esoteric failure brought on by the feature. As with multi-threading in C/C++: "use with caution".

C#6
Where C#5 brought low numbers of feature changes (but one of the most damaging), C#6 brought a smorgasbord of incremental changes. There was something there for everyone:
Read-only auto properties made programming read-only properties just a little quicker, especially when the property values came from the constructor. So code like:
```csharp
public class VideoGamePlumber
{
    public string Name { get { return _name; } }
    private string _name;

    public VideoGamePlumber(string name)
    {
        _name = name;
    }
}
```
became:
```csharp
public class VideoGamePlumber
{
    public string Name { get; private set; }

    public VideoGamePlumber(string name)
    {
        Name = name;
    }
}
```
but that still leaves the Name property open for change within the VideoGamePlumber class, so better would be the C#6 variant:
```csharp
public class VideoGamePlumber
{
    public string Name { get; }

    public VideoGamePlumber(string name)
    {
        Name = name;
    }
}
```
The Name property can only be set from within the constructor. Unless, of course, you resort to reflection, since the immutability of Name is enforced by the compiler, not the runtime. But I didn't tell you that.
Auto-property initializers seem quite convenient, but I'll admit that I rarely use them -- I think primarily because the times that I want to initialize outside of the constructor, I generally want a private (or protected) field, and when I want to set a property at construction time, it's probably getting its value from a constructor parameter. I don't hate the feature (at all), I just don't use it much. Still, if you wanted to:
```csharp
public class Doggie
{
    public string Name { get; set; }

    public Doggie()
    {
        Name = "Rex"; // set default dog name
    }
}
```
becomes:
```csharp
public class Doggie
{
    public string Name { get; set; } = "Rex";
}
```
You can combine this with the read-only property syntax if you like:
```csharp
public class Doggie
{
    public string Name { get; } = "Rex";
}
```
but then all doggies are called Rex (which is quite presumptuous) and you really should have just used a constant, which you can't modify through reflection.
Expression-bodied function members can provide a succinct syntax for a read-only, calculated property. However, I use them sparingly: anything beyond a very simple "calculation" starts to get long and more difficult to read -- though that argument is simply countered by moving the logic into a method and having a property like:
```csharp
public class Person
{
    public string FirstName { get; }
    public string LastName { get; }
    public string FullName => $"{FirstName} {LastName}";

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }
}
```
```csharp
public class Business
{
    public string StreetAddress { get; }
    public string Suburb { get; }
    public string Town { get; }
    public string PostalCode { get; }
    public string PostalAddress => GetPostalAddress();

    public Business(string streetAddress, string suburb, string town, string postalCode)
    {
        StreetAddress = streetAddress;
        Suburb = suburb;
        Town = town;
        PostalCode = postalCode;
    }

    private string GetPostalAddress()
    {
        return string.Join("\n", new[]
        {
            StreetAddress,
            Suburb,
            Town,
            PostalCode
        });
    }
}
```
Where the logic for generating the full address from the many bits of involved data is tucked away in a more readable method and the property itself becomes syntactic sugar, looking less clunky than just exposing the GetPostalAddress method.
Index initializers provide a neater syntax for initializing Dictionaries, for example:
```csharp
var webErrors = new Dictionary<int, string>()
{
    { 404, "Page Not Found" },
    { 302, "Page moved, but left a forwarding address" },
    { 500, "The web server can't come out to play today" }
};
```
Can be written as:
```csharp
var webErrors = new Dictionary<int, string>()
{
    [404] = "Page not Found",
    [302] = "Page moved, but left a forwarding address.",
    [500] = "The web server can't come out to play today."
};
```
It's not ground-breaking, but I find it a little more pleasing on the eye.
Other stuff that I hardly use includes:
Extension Add methods for collection initializers allow your custom collections to be initialized like standard ones. Not a feature I've ever used because I haven't had the need to write a custom collection.
Improved overload resolution reduced the number of times I shook my fist at the compiler.
Exception filters made exception handling more expressive and easier to read.
await in catch and finally blocks allows the async/await zombies to stumble into your exception handling. Yay.
On to the good bits (that I regularly use) though:
using static made using static functions so much neater -- as if they were part of the class you were currently working in. I don't push static functions in general because using them means that testing anything which uses them has to test them too, but there are places where they make sense. One is in RandomValueGen from PeanutButter.RandomGenerators, a class which provides functions to generate random data for testing purposes. A static import means you no longer have to mention the RandomValueGen class throughout your test code:
```csharp
using NUnit.Framework;
using PeanutButter.RandomGenerators;

namespace Bovine.Tests
{
    [TestFixture]
    public class TestCows
    {
        [Test]
        public void Moo_ShouldBeAnnoying()
        {
            // Arrange
            var cow = new Cow()
            {
                Name = RandomValueGen.GetRandomString(),
                Gender = RandomValueGen.GetRandom(),
                Age = RandomValueGen.GetRandomInt()
            };
            // ...
        }
    }
}
```
Can become:
```csharp
using NUnit.Framework;
using static PeanutButter.RandomGenerators.RandomValueGen;

namespace Bovine.Tests
{
    [TestFixture]
    public class TestCows
    {
        [Test]
        public void Moo_ShouldBeAnnoying()
        {
            // Arrange
            var cow = new Cow()
            {
                Name = GetRandomString(),
                Gender = GetRandom(),
                Age = GetRandomInt()
            };
            // ...
        }
    }
}
```
Which is way more readable, simply because there's less unnecessary cruft in there. At the point of reading (and writing) the test, the source library and class for random values is not only irrelevant and unnecessary -- it's just plain noisy and ugly.
Null conditional operators. Transforming fugly multi-step checks for null into neat code:
```csharp
if (thing != null &&
    thing.OtherThing != null &&
    thing.OtherThing.FavoriteChild != null &&
    // ... and so on, and so forth, turtles all the way down
    // until, eventually
    thing.OtherThing.FavoriteChild.Dog.Collar.Spike.Metal.Manufacturer.Name != null)
{
    return thing.OtherThing.FavoriteChild.Dog.Collar.Spike.Metal.Manufacturer.Name;
}
return "Unknown manufacturer";
```
becomes:
```csharp
return thing
    ?.OtherThing
    ?.FavoriteChild
    ?.Dog
    ?.Collar
    ?.Spike
    ?.Metal
    ?.Manufacturer
    ?.Name
    ?? "Unknown manufacturer";
```
and kitties everywhere rejoiced.
String interpolation helps you to turn disaster-prone code like this:
```csharp
public void PrintHello(string salutation, string firstName, string lastName)
{
    Console.WriteLine("Hello, " + salutation + " " + firstName + " of the house " + lastName);
}
```
or even the less disaster-prone, more efficient, but not that amazing to read:
```csharp
public void PrintHello(string salutation, string firstName, string lastName)
{
    Console.WriteLine(string.Join(" ", new[]
    {
        "Hello,",
        salutation,
        firstName,
        "of the house",
        lastName
    }));
}
```
Into the safe, readable:
```csharp
public void PrintHello(string salutation, string firstName, string lastName)
{
    Console.WriteLine($"Hello, {salutation} {firstName} of the house {lastName}");
}
```
nameof is also pretty cool, not just for making your constructor null-checks impervious to refactoring:
```csharp
public class Person
{
    public Person(string name)
    {
        if (name == null) throw new ArgumentNullException(nameof(name));
    }
}
```
(if you're into constructor-time null-checks) but also for using test case sources in NUnit:
```csharp
[Test, TestCaseSource(nameof(DivideCases))]
public void DivideTest(int n, int d, int q)
{
    Assert.AreEqual(q, n / d);
}

static object[] DivideCases =
{
    new object[] { 12, 3, 4 },
    new object[] { 12, 2, 6 },
    new object[] { 12, 4, 3 }
};
```
C#7
This iteration of the language brings some neat features for making code more succinct and easier to grok at a glance.
Inline declaration of out variables makes using methods with out variables a little prettier. This is not a reason to start using out parameters: I still think that there's normally a better way to do whatever it is that you're trying to achieve with them and use of out and ref parameters is, for me, a warning signal in the code of a place where something unexpected could happen. In particular, using out parameters for methods can make the methods really clunky because you have to set them before returning, making quick returns less elegant. Part of me would have liked them to be set to the default value of the type instead, but I understand the rationale behind the compiler not permitting a return before setting an out parameter: it's far too easy to forget to set it and end up with strange behavior.
I think I can count on one hand the number of times I've written a method with an out or ref parameter and I couldn't even point you at any of them. I totally see the point of ref parameters for high-performance code where it makes sense (like manipulating sound or image data). I just really think that when you use out or ref, you should always ask yourself "is there another way to do this?". Anyway, my opinions on the subject aside, there are times when you have to interact with code not under your control and that code uses out params, for example:
public void PrintNumeric(string input)
{
int result;
if (int.TryParse(input, out result))
{
Console.WriteLine($"Your number is: {input}");
}
else
{
Console.WriteLine($"'{input}' is not a number )':");
}
}
becomes:
public void PrintNumeric(string input)
{
if (int.TryParse(input, out int result))
{
Console.WriteLine($"Your number is: {input}");
}
else
{
Console.WriteLine($"'{input}' is not a number )':");
}
}
It's a subtle change, but if you have to use out parameters enough, it becomes a lot more convenient, less noisy.
Similarly, ref locals and returns, where refs are appropriate, can make code much cleaner. The general use-case for these is to return a reference to a non-reference type for performance reasons, for example when you have a large set of ints, bytes, or structs and would like to pass off to another method to find the element you're interested in before modifying it. Instead of the finder returning an index and the outer call re-referencing into the array, the finder method can simply return a reference to the found element so the outer call can do whatever manipulations it needs to. I can see the use case for performance reasons in audio and image processing as well as large sets of structs of data. The example linked above is quite comprehensive and the usage is for more advanced code, so I'm not going to rehash it here.
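To make the idea concrete, though, here's a minimal sketch (the `Find` method and all names here are mine, invented for illustration, not from the linked example): the finder hands back a reference into the array, so the caller can write through it without re-indexing:

```csharp
using System;

public static class RefReturnDemo
{
    // Returns a reference to the first matching element, so the caller
    // can write straight through into the backing array
    public static ref int Find(int[] haystack, int needle)
    {
        for (var i = 0; i < haystack.Length; i++)
        {
            if (haystack[i] == needle)
                return ref haystack[i];
        }
        throw new InvalidOperationException("not found");
    }

    public static void Main()
    {
        var data = new[] { 1, 2, 3 };
        ref var found = ref Find(data, 2);
        found = 42; // modifies data[1] directly -- no index bookkeeping
        Console.WriteLine(string.Join(", ", data)); // 1, 42, 3
    }
}
```

Note the `ref` keyword appearing at the call site too (`ref var found = ref Find(...)`) -- without it, you'd get a plain copy of the value instead of a reference.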
Tuples have been available in .NET for a long time, but they've been unfortunately cumbersome. The new syntax in C#7 is changing that. Python tuples have always been elegant and now a similar elegance comes to C#:
public static class TippleTuple
{
public static void Main()
{
// good
(int first, int second) = MakeAnIntsTuple();
Console.WriteLine($"first: {first}, second: {second}");
// better
var (flag, message) = MakeAMixedTuple();
Console.WriteLine($"first: {flag}, second: {message}");
}
public static (int first, int second) MakeAnIntsTuple()
{
return (1, 2);
}
public static (bool flag, string name) MakeAMixedTuple()
{
return (true, "Moo");
}
}
Some notes though: the release notes say that you need to install the
System.ValueTuple nuget package to support this feature in VS2015 and lower, but I found that I needed to install the package on VS2017 too. ReSharper still doesn't have a clue about the feature as of 2016.3.2, so usages within the interpolated strings above are highlighted as errors. Still, the program above compiles and is way more elegant than using Tuple<> generic types. It's very clever that language features can be delivered by nuget packages, though.
Local functions provide a mechanism for keeping little bits of re-used code local within a method. In the past, I've used a
Func variable where I've had a little piece of re-usable logic:
private NameValueCollection GetSomeNamedNumericStrings()
{
var result = new NameValueCollection();
Func<string> randInt = () => RandomValueGen.GetRandomInt().ToString();
result["MaxSendAttempts"] = randInt();
result["BackoffIntervalInMinutes"] = randInt();
result["BackoffMultiplier"] = randInt();
result["PurgeMessageWithAgeInDays"] = randInt();
return result;
}
which can become:
public static NameValueCollection GetSomeNamedNumericStrings()
{
var result = new NameValueCollection();
result["MaxSendAttempts"] = RandInt();
result["BackoffIntervalInMinutes"] = RandInt();
result["BackoffMultiplier"] = RandInt();
result["PurgeMessageWithAgeInDays"] = RandInt();
return result;
string RandInt () => GetRandomInt().ToString();
}
Which, I think, is neater. There's also possibly a performance benefit: every time the first implementation of
GetSomeNamedNumericStrings is called, the randInt Func is instantiated, where the second implementation has the function compiled at compile-time, baked into the resultant assembly. Whilst I wouldn't put it past the compiler or JIT to do clever optimisations for the first implementation, I wouldn't expect it.
Throw expressions also offer a neatening to your code:
public int ThrowIfNullResult(Func<int?> source)
{
var result = source();
if (result == null)
throw new InvalidOperationException("Null result is not allowed");
return result.Value;
}
can become:
public int ThrowIfNullResult(Func<int?> source)
{
return source() ??
throw new InvalidOperationException("Null result is not allowed");
}
So now you can actually write a program where every method has one return statement and nothing else.

Time for the big one: pattern matching. This is a language feature native to F#, and anyone who has done some C#-F# crossover has whined about how it's missing from C#, including me. Pattern matching not only elevates the rather bland C# switch statement from constant-only cases to non-constant ones:
public static string InterpretInput(string input)
{
switch (input)
{
case string now when now == DateTime.Now.ToString("yyyy/MM/dd"):
return "Today's date";
default:
return "Unknown input";
}
}
it also allows type matching:
public static void Main()
{
var animals = new List<Animal>()
{
new Dog() {Name = "Rover"},
new Cat() {Name = "Grumplestiltskin"},
new Lizard(),
new MoleRat()
};
foreach (var animal in animals)
PrintAnimalName(animal);
}
public static void PrintAnimalName(Animal animal)
{
switch (animal)
{
case Dog dog:
Console.WriteLine($"{dog.Name} is a {dog.GetType().Name}");
break;
case Cat cat:
Console.WriteLine($"{cat.Name} is a {cat.GetType().Name}");
break;
case Lizard _:
Console.WriteLine("Lizards have no name");
break;
default:
Console.WriteLine($"{animal.GetType().Name} is mystery meat");
break;
}
}
Other new features include generalized async return types, numeric literal syntax improvements and more expression bodied members.
Updates in C# 7.1 and 7.2
The full changes are here (7.1) and here (7.2). Whilst the changes are perhaps not mind-blowing, there are some nice ones:
- Async `Main()` method now supported
- Default literals, eg: `Func<int> foo = default;`
- Inferred tuple element names, so you can construct tuples like you would with anonymous objects
- Reference semantics with value types allow specifying modifiers:
  - `in`, which specifies the argument is passed in by reference but may not be modified by the called code
  - `ref readonly`, which specifies the reverse: that the caller may not modify the result set by the callee
  - `readonly struct`, which makes the struct readonly and passed in by ref
  - `ref struct`, which specifies that a struct type accesses managed memory directly and must always be allocated on the stack
- More allowance for `_` in numeric literals
- The `private protected` modifier, which is like `protected internal` except that it restricts usage to derived types only
Wrapping it up
I guess the point of this illustrated journey is that you really should keep up to date with language features as they emerge:
- Many features save you time, replacing tedious code with shorter, yet more expressive code
- Some features provide a level of safety (eg string interpolations)
- Some features give you more power
So if you're writing C# like it's still .net 2.0, please take a moment to evolve as the language already has.
Monday, 9 January 2017
EF-based testing, with PeanutButter: Shared databases
The PeanutButter.TestUtils.Entity Nuget package provides a few utilities for testing EntityFramework-based code, backed by TempDb instances so you can test that your EF code works as in production instead of relying on (in my experience) flaky substitutions.
One is the EntityPersistenceTester, which provides a fluent syntax around proving that your data can flow into and out of a database. I'm not about to discuss that in-depth here, but it does allow (neat, imo) code like the following to test POCO persistence:
// snarfed from EmailSpooler tests
[Test]
public void EmailAttachment_ShouldBeAbleToPersistAndRecall()
{
EntityPersistenceTester.CreateFor<EmailAttachment>()
.WithContext<EmailContext>()
.WithDbMigrator(MigratorFactory)
.WithSharedDatabase(_sharedTempDb)
.WithAllowedDateTimePropertyDelta(_oneSecond)
.ShouldPersistAndRecall();
}
which prove that an EmailAttachment POCO can be put into, and successfully retrieved from a database, allowing DateTime properties to drift by a second. All very interesting, and Soon To Be Documented™, but not the focus of this entry.
I'd like to introduce a new feature, but to do so, I have to introduce where it can be used. A base class
TestFixtureWithTempDb<T> exists within PeanutButter.TestUtils.Entity. It provides some of the scaffolding required to do more complex testing than just "Can I put a POCO in there?". The generic argument is some implementation of DbContext and it's most useful when testing a repository, as it provides a protected GetContext() method which provides a spun-up context of type T, with an underlying temporary database. By default, this is a new, clean database every test, but you can invoke the protected DisableDatabaseRegeneration() method in your test fixture's [OneTimeSetUp]-decorated method (or constructor, if you prefer) to make this database live for the lifetime of your test fixture. The base class takes care of disposing of the temporary database when appropriate so you can focus on the interesting stuff: getting your tests (and then code) to work. A full (but simple) example of usage can be found here: https://github.com/fluffynuts/PeanutButter/blob/master/source/Utils/PeanutButter.FluentMigrator.Tests/TestDbMigrationsRunner.cs

My focus today is on a feature which can help to eliminate a pain-point I (and others) have experienced with EF testing backed onto a TempDb instance: time to run tests. EF takes a second or two to generate internal information about a database the first time some kind of activity (read/write) to that database is done via an EF DbContext. Not too bad on application startup, but quite annoying if it happens at every test.
DisableDatabaseRegeneration() helps, but still means that each test fixture has a spin-up delay, meaning that when there are a few test fixtures, other developers on your team become less likely to run the entire test suite -- which is bad for everyone.

However, after some nudging in the right direction by co-worker Mark Whitfeld, I'd like to announce the availability of the
UseSharedTempDb attribute in PeanutButter.TestUtils.Entity as of version 1.2.120, released today.
To use, decorate your test fixture:
[TestFixture]
[UseSharedTempDb("SomeSharedTempDbIdentifier")]
public class TestCrossFixtureTempDbLifetimeWithInheritence_Part1
: TestFixtureWithTempDb
{
// .. actual tests go here ..
}
And run your tests. And see an exception:
PeanutButter.TestUtils.Entity.SharedTempDbFeatureRequiresAssemblyAttributeException :
The UseSharedTempDb class attribute on TestSomeStuff
requires that assembly SomeProject.Tests have the attribute AllowSharedTempDbInstances.
Try adding the following to the top of a class file:
[assembly: PeanutButter.TestUtils.Entity.Attributes.AllowSharedTempDbInstances]
So, follow the instructions and add the assembly attribute to the top of your test fixture source file and re-run your tests. Congratulations, you're using a shared instance of a TempDb which will be cleaned up when NUnit is finished, providing, of course, that you don't interrupt the test run yourself (:
Tuesday, 6 September 2016
Legacy database testing with PeanutButter
Preamble
Recently, I've been asked about a strategy for testing code which works against an existing database. It's not the first time I've been asked about this -- and it probably won't be the last. I should have listened to Scott Hanselman's advice long ago and blogged it. Better late than never, I suppose. Let's get to it!
The problem
You have some code which hits a MSSQL database to do something. It could be hitting a stored procedure for a report. It could be inserting or updating rows. Whatever the use-case, you'd like to have that code under test, if not just for your own sanity, then because you're about to extend that code and are just plain scared that you're about to break something which already exists. This is a valid reason -- and it drove me to the strategy I'll outline below.

You may also simply wish to write your code test-first, but have this great existing legacy mass which you have to work with (and around) and you're just struggling to get the first test out.
Please note: this strategy outlines more integration-style testing than true unit testing. However, I'd rather have an integration test than no test any day. This kind of testing also leads to tests which take a few seconds to run (instead of the preferred milliseconds) -- but I'd rather have slow tests than no tests.
Note that I'm tackling MSSQL because:
- It's a common production database
- If you were dealing with simpler databases like SQLite or SQLCE, you may already have a strategy to deal with this (though PB can still make it easier, so read on)
- I haven't found (yet) a nice way to do temporary in-process MySQL or PostgreSQL. You could use this strategy with Firebird since server and embedded can even use the same file (but not concurrently, of course) -- though currently PeanutButter.TempDb has no baked-in Firebird provider. I guess I should fix that!
So, let's really get to it!
The general idea
Ideally, I'd like to have some kind of test framework which would spin up a temporary database, create all the structures (tables, views) that I need, perhaps the programmability (procedures, functions) I'd like to test (if applicable) and also provide a mechanism for loading in data to test against so that I can write "when {condition} then {expectation}"-style tests.

I'd also like that temporary database to die by fire when my tests are done. I don't want to have to clean anything up manually.
Traditionally, running instances of database services have been used for this style of testing -- but that leaves you with a few sticky bits:
- Your development and CI environments have to be set up the same, with credentials baked into the test suite. Or perhaps you can use environment overrides -- still, authorization to the test database has to be a concern
- Test isolation is less than trivial and as the test suite grows, the tests start interacting with each other, even if there is cleanup at some point along the way.
- Getting a new developer on the team is more effort than it really should be, mainly because of (1) above. For the same reasons, developing on a "loaner" or laptop is also more effort than it should be. You can't just "check out and go".
- A shared development/test database which accumulates cruft and potentially causes strange test behaviour when unexpected data is matched by systems under test
- Swapping out the production database layer for something like SQLite. Whilst I really like SQLite, the problem boils down to differences in functionality between SQLite and whatever production database you're using. I've come across far too many recently, in a project where tests are run against SQLite and production code runs against PostgreSQL. I've seen similar issues with testing code targeting SQL Server on SQLCE. Even if you have a fairly beefy ORM layer (EF, NHibernate, etc) to abstract a lot of the database layer away from you, you're going to hit issues. I can think of too many to put them out here -- if you really want a list of the issues I've hit in this kind of scenario, feel free to ask. I've learned enough to feel fear when someone suggests testing on a database other than the engine you're going to deploy on. Sometimes you have tests which pass when production code fails. Sometimes your tests simply can't test what you want to do because the test database engine is "differently capable".
- For similar reasons to (2) above, even if you're testing down to the ORM layer, mocked / substituted database contexts (EF) can provide you with tests which work when your production code is going to fail.
PeanutButter to the rescue (:
The strategy that emerged
- Create a temporary database (PeanutButter.TempDb.(some flavor))
- Fortunately, when production code is going to be talking to SQL Server, we can use a LocalDb database for testing -- all the functionality of SQL Server (well, pretty-much all of it, enough for application code -- you'll be missing full text search for example, but the engine is basically the same).
- Create the database structures required (via migrations or scripts)
- Run the tests on the parts of the system to be tested
- Dispose of the temporary database when done, leaving no artifacts and no cruft for another test or test fixture.
[TestFixture]
public class TestSomeLegacySystem
{
[Test]
public void TestSomeLegacyMethod()
{
// Arrange
using (var db = new TempDbLocalDb())
{
using (var conn = db.CreateConnection())
{
// run in database schema script(s)
// insert any necessary data for the test
}
// Act
// write actual test action here
// Assert
using (var conn = db.CreateConnection())
{
// perform assertions on the database with the new connection
}
} // our TempDb is destroyed here -- nothing to clean up!
}
}
This is very raw, ADO-style code. Of course, it's not too difficult to extrapolate to using an EF context, since TempDb exposes both a connection string and a CreateConnection method: you could pass a new connection from CreateConnection to your context's constructor, which would call into DbContext's constructor which can take a DbConnection -- and you would set the second (boolean) parameter based on whether or not you'd like to dispose of the connection yourself.
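By way of illustration, here's a minimal sketch of that wiring -- the context name is hypothetical, and it assumes EF6's DbContext(DbConnection, bool) constructor, where the boolean tells EF whether the context owns (and should dispose) the connection:

```csharp
using System.Data.Common;
using System.Data.Entity;

// Hypothetical context for illustration: takes a live connection
// (eg from TempDb.CreateConnection()) and leaves disposal to the caller
public class LegacyContext : DbContext
{
    public LegacyContext(DbConnection connection)
        : base(connection, contextOwnsConnection: false)
    {
    }
    // DbSets for the legacy tables would go here
}

// usage in a test, sketched:
// using (var db = new TempDbLocalDb())
// using (var conn = db.CreateConnection())
// using (var ctx = new LegacyContext(conn))
// {
//     // act / assert against ctx
// }
```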
I did this often enough that it became boring and my laziness kicked in. Ok, so that happened at about the third test...
And so TestFixtureWithTempDb was born: this forms a base class for a test fixture requiring a temporary database of the provided type. It has a protected Configure() method which must be used to instruct the base class how to create the database (with an initial migrator to get the database up to speed), as well as providing hints; for example, by default, a new TempDb is spun up for every test, but if you're willing to take care of cleaning out crufty data after each test (perhaps with a [TearDown]-decorated method), then you can share the database between tests in the fixture for a little performance boost. The boost is more noticeable when you also have EF talking to the temporary database, as EF will cache model information per database on first access -- so even if you have the same structures in two different iterations, EF will go through the same mapping steps for each test, adding a few seconds (I find typically about 2-5) per test.
Indeed, if you have an EF context, you perhaps want to step one up the chain to EntityPersistenceTestFixtureBase, which is inherited with a generic type that is your DbContext. Your implementation must have a constructor which takes just a DbConnection for this base class to function. If you've created your context from EDMX, you'll have to create another partial class with the same name and the expected constructor signature, passing off to an intermediary static method which transforms a DbConnection into an EntityConnection; otherwise, just add a constructor.
And this is where my laziness kicks in again: the test fixture for EntityPersistenceTestFixtureBase provides a reasonable example for usage. Note the call to Configure() in the constructor -- you could also have this call in a [OneTimeSetup] method. If you forget it, PeanutButter will bleat at you -- but helpfully, instructing you to Configure() the test fixture before running tests (:
Some interesting points:
- Configure's first parameter is a boolean: when true, the configuration will create ASP.NET tables in the temporary database for you (since it's highly unlikely you'll have them in your own migrations). This is useful only if you're intending to test, for example, an MVC controller which will change behaviour based on the default ASP.NET authentication mechanisms. Mostly, you'll want this to be false.
- The second parameter is a factory function: it takes in a connection string and should emit something which implements IDBMigrationsRunner -- this is an instance of a class with a MigrateToLatest() method which performs whatever is necessary to build the required database structures. You could wrap a FluentMigrator instance, or you can use the handy DbSchemaImporter, given a dump of your database as scripts (without the use statement!) to run in your existing schema. When doing the latter, I simply import said script in a regular old .net resource -- when doing so, you'll get a property on that resource which is a string: the script to run (:
- You can configure a method to run before providing a new EF context -- when configured, this method will be given the EF context which is first created in a test so that it can, for example, clear out old data. Obviously, this only makes sense if you're going full-EF.
- If you have a hefty legacy database, expect some minor issues that you'll have to work through. I've found, for instance, procedures which compiled in SSMS, but not when running in the script for said procedure because it was missing a semi-colon. Don't despair: the effort will be worth it. You can also try only scripting out the bare minimum of the target database that is required for your tests.
Enough blathering!
Ok, this has been a post well worth a TL;DR. It's the kind of thing that would probably work better as a 15-minute presentation, but I suppose some blog post is better than no post (:

Questions? Comments? They're all welcome. If there's something you'd like me to go more in-depth with, shout out -- I can always re-visit this topic (:
Legacy database testing with PeanutButter
Preamble
Recently, I've been asked about a strategy for testing code which works against an existing database. It's not the first time I've been asked about this -- and it probably won't be the last. I should have listened to Scott Hanselman's advice long ago and blogged it.Better late than never, I suppose. Let's get to it!
The problem
You have some code which hits a MSSQL database to do something. It could be hitting a stored procedure for a report. It could be inserting or updating rows. Whatever the use-case, you'd like to have that code under test, if not just for your own sanity, then because you're about to extend that code and are just plain scared that you're about to break something which already exists. This is a valid reason -- and it drove me to the strategy I'll outline below.You may also simply wish to write your code test-first, but have this great existing legacy mass which you have to work with (and around) and you're just struggling to get the first test out.
Please note: this strategy outlines more integration-style testing than true unit testing. However, I'd rather have an integration test than no test any day. This kind of testing also leads to tests which take a few seconds to run (instead of the preferred milliseconds) -- but I'd rather have slow tests than no tests.
Note that I'm tackling MSSQL because:
- It's a common production database
- If you were dealing with simpler databases like SQLite or SQLCE, you may already have a strategy to deal with this (though PB can still make it easier, so read on)
- I haven't found (yet) a nice way to do temporary in-process MySQL or PostgreSQL. You could use this strategy with Firebird since server and embedded can even use the same file (but not concurrently, of course) -- though currently PeanutButter.TempDb has no baked-in Firebird provider. I guess I should fix that!
So, let's really get to it!
The general idea
Ideally, I'd like to have some kind of test framework which would spin up a temporary database, create all the structures (tables, views) that I need, perhaps the programmability (procedures, functions) I'd like to test (if applicable) and also provides a mechanism for loading in data to test against so that I can write "when {condition} then {expectation}"-style tests.I'd also like that temporary database to die by fire when my tests are done. I don't want to have to clean anything up manually.
Traditionally, running instances of database services have been used for this style of testing -- but that leaves you with a few sticky bits:
- Your development and CI environments have to be set up the same, with credentials baked into the test suite. Or perhaps you can use environment overrides -- still, authorization to the test database has to be a concern
- Test isolation is less than trivial and as the test suite grows, the tests start interacting with each other, even if there is cleanup at some point along the way.
- Getting a new developer on the team is more effort than it really should be, mainly because of (1) above. For the same reasons, developing on a "loaner" or laptop is also more effort than it should be. You can't just "check out and go".
- A shared development/test database which accumulates cruft and potentially causes strange test behaviour when unexpected data is matched by systems under test
- Swapping out the production database layer for something like SQLite. Whilst I really like SQLite, the problem boils down to differences in functionality between SQLite and whatever production database you're using. I've come across far to many recently, in a project where tests are run against SQLite and production code runs against PostgreSQL. I've seen similar issues with testing code targeting SQL Server on SQLCE. Even if you have a fairly beefy ORM layer (EF, NHibernate, etc) to abstract a lot of the database layer away from you, you're going to hit issues. I can think of too many to put them out here -- if you really want a list of the issues I've hit in this kind of scenario, feel free to ask. I've learned enough to feel fear when someone suggests testing on a database other than the engine you're going to deploy on.
Sometimes you have tests which work when production code fails. Sometimes your tests simply can't test what you want to do because the test database engine is "differently capable". - For similar reasons to (2) above, even if you're testing down to the ORM layer, mocked / substituted database contexts (EF) can provide you with tests which work when your production code is going to fail.
PeanutButter to the rescue (:
The strategy that emerged
- Create a temporary database (PeanutButter.TempDb.(some flavor))
- Fortunately, when production code is going to be talking to SQL Server, we can use a LocalDb database for testing -- all the functionality of SQL Server (well, pretty-much all of it, enough for application code -- you'll be missing full text search for example, but the engine is basically the same).
- Create the database structures required (via migrations or scripts)
- Run the tests on the parts of the system to be tested
- Dispose of the temporary database when done, leaving no artifacts and no cruft for another test or test fixture.
public void TestSomeLegacySystem
{
[Test]
public void TestSomeLegacyMethod()
{
// Arrange
using (var db = new TempDbLocalDb())
{
using (var conn = db.CreateConnection())
{
// run in database schema script(s)
// insert any necessary data for the test
}
// Act
// write actual test action here
// Assert
using (var conn = db.CreateConnection())
{
// perform assertions on the database with the new connection
}
} // our TempDb is destroyed here -- nothing to clean up!
}
}
This is very raw, ADO-style code. Of course, it's not too difficult to extrapolate to using an EF context since TempDb exposes both a connection string or you could pass a new connection from CreateConnection to your context's constructor, which would call into DbContext's constructor which can take a DbConnection -- and you would set the second (boolean) parameter based on whether or not you'd like to dispose of the connection yourself.
I did this often enough that it became boring and my laziness kicked in. Ok, so that happened at about the third test...
And so TestFixtureWithTempDb was born: this forms a base class for a test fixture requiring a temporary database of the provided type. It has a protected Configure() method which must be used to instruct the base class how to create the database (with an initial migrator to get the database up to speed), as well as providing hints; for example, by default, a new TempDb is spun up for every test, but if you're willing to take care of cleaning out crufty data after each test (perhaps with a [Teardown]-decorated method), then you can share the database between tests in the fixture for a little performance boost. The boost is more noticable when you also have EF talking to the temporary database as EF will cache model information per database on first access -- so even if you have the same structures in two different iterations, EF will go through the same mapping steps for each test, adding a few seconds (I find typically about 2-5) per test.
Indeed, if you have an EF context, you perhaps want to step one up the chain to EntityPersistenceTestFixtureBase, which is inherited with a generic type that is your DbContext. Your implementation must have a constructor which takes just a DbConnection for this base class to function. If you've created your context from EDMX, you'll have to create another partial class with the same name and the expected constructor signature; otherwise, just add a constructor.
And this is where my laziness kicks in again: the test fixture for EntityPersistenceTestFixtureBase provides a reasonable example for usage. Note the call to Configure() in the constructor -- you could also have this call in a [OneTimeSetup] method. If you forget it, PeanutButter will bleat at you -- but helpfully, instructing you to Configure() the test fixture before running tests (:
Some interesting points:
- Configure's first parameter is a boolean: when true, the configuration will create ASP.NET tables in the temporary database for you (since it's highly unlikely you'll have them in your own migrations). This is useful only if you're intending to test, for example, an MVC controller which will change behaviour based on the default ASP.NET authentication mechanisms. Mostly, you'll want this to be false.
- The second parameter is a factory function: it takes in a connection string and should emit something which implements IDBMigrationsRunner -- this is an instance of a class with a MigratoToLatest() method which performs whatever is necessary to build the required database structures. You could wrap a FluentMigrator instance, or you can use the handy DbSchemaImporter, given a dump of your database as scripts (without the use statement!) to run in your existing schema. When doing the latter, I simply import said script in a regular old .net resource -- when doing so, you'll get a property on that resource which is a string: the script to run (:
- You can configure a method to run before providing a new EF context -- when configured, this method will be given the EF context which is first created in a test so that it can, for example, clear out old data. Obviously, this only makes sense if you're going full-EF.
- If you have a hefty legacy database, expect some minor issues that you'll have to work through. I've found, for instance, procedures which compiled in SSMS but failed when run from the scripted-out dump because the script was missing a semi-colon. Don't despair: the effort will be worth it. You can also try scripting out only the bare minimum of the target database that your tests require.
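Pulling those points together, a Configure() call might look something like the following sketch (the fixture name, the `DatabaseScripts.Schema` resource and the name of the before-context hook are illustrative -- the xmldoc on Configure is the authority for your version):

```csharp
// Sketch: assumes a schema dump imported as a .net resource named
// DatabaseScripts, exposing the script as the string property Schema.
public class TestSupplierPriceLists
    : EntityPersistenceTestFixtureBase<DragonContext>
{
    public TestSupplierPriceLists()
    {
        Configure(
            false,  // no ASP.NET auth tables needed for these tests
            connectionString => new DbSchemaImporter(
                connectionString,
                DatabaseScripts.Schema  // scripted-out schema, sans "use"
            ));
        // hypothetical name for the run-before-providing-a-context hook:
        RunBeforeFirstGettingContext(ctx =>
            ctx.SupplierPriceLists.RemoveRange(ctx.SupplierPriceLists));
    }
}
```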
Enough blathering!
Ok, this has been a post well worth a TL;DR. It's the kind of thing that would probably work better as a 15-minute presentation, but I suppose some blog post is better than no post (:

Questions? Comments? They're all welcome. If there's something you'd like me to go more in-depth with, shout out -- I can always re-visit this topic (:
Thursday, 17 March 2016
Update PeanutButter - now with less pain!
(and possibly more fibre)
When I change one Nuget library, I update all Nuget packages to avoid any confusion about which version of what plays nicely with the others -- to the point where I've even made each PeanutButter package which depends on another PeanutButter package depend on that exact same version. I wanted an easy way to update all PeanutButter packages, since I release quite often -- pretty-much whenever I add any functionality.
An approach to this problem might be running `update-package` from the package manager console. Whilst I'm a fan of keeping libraries up to date, sometimes you can get unexpected consequences from this action, such as breaking a site depending on an older version of ASP.NET MVC. I have no control over those other packages -- but I do control PeanutButter, so what I need is something more like `update-peanutbutter`.

And now I (and anyone else using PeanutButter) have it.

Thanks to an excellent tutorial by Phil Haack on his blog You've Been Haacked, I've added the command via a module loaded from the PeanutButter.Utils Nuget package init script. PeanutButter.Utils is one of the "lowest-down" packages, so chances are very good that if you're using any of the others, you're using PeanutButter.Utils. The change is available from version 1.2.15 and the easiest way to take advantage of it is to update one of your projects to use the latest PeanutButter.Utils, then run `update-peanutbutter` from the package manager console to update all the other projects in your solution.

Happy hacking (:
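In practice, the package manager console session looks something like this (output elided; the command names are as described above):

```
PM> Update-Package PeanutButter.Utils
PM> update-peanutbutter
```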
Wednesday, 21 January 2015
EntityFramework: getting to the bottom of a murky error
In doing some work using EF which talks to existing databases on MSSQL in production and SQLCE for testing (check out https://github.com/fluffynuts/OrmNomNom.git for an example of using SQLCE and a transient database to do Entity testing -- there's also a bit in there about using NHibernate too), I used the Entity POCO generation tools to quickly spew out the classes that should work when talking to the existing database. I also wrote some FluentMigrator migrations so I could get a temporary database up to speed and set off to use that method to test some database interaction, when:
Boom! InvalidOperationException ("Sequence contains no elements").
Now the befuddling aspect of this exception is where it was thrown. I basically have some code like:
```csharp
using (var ctx = new DragonContext(db.CreateConnection()))
{
    var unit = new SupplierPriceList()
    {
        SupplierPriceListID = "test",
        SupplierID = "foo",
        MaterialID = "foo",
        CompanyCode = "foo",
        Date = DateTime.Now,
        SupplierPrice = 1.23
    };
    ctx.SupplierPriceLists.Add(unit);
    ctx.SaveChanges();
}
```
And the exception is thrown at the point of the call to Add(). Which doesn't seem to make a lot of sense: the temp database being used has no rows in any tables, so of course SupplierPriceLists is empty, and one might expect that to be the source of the exception, considering the text -- but it really shouldn't matter. Tables are empty all the time and we can add rows to them. Still, the error befuddles...

Looking at the stack trace, I see the last two frames are:
```
at System.Linq.Enumerable.Single[TSource](IEnumerable`1 source, Func`2 predicate)
at System.Data.Entity.Utilities.DbProviderManifestExtensions.GetStoreTypeFromName(DbProviderManifest providerManifest, String name)
```

Ok, so the last frame explains the raw reason for the exception: something is doing a Single() call on an empty collection. But what? The call one up looks interesting:
GetStoreTypeFromName seems to suggest that Entity is doing a lookup to find out how it should store the data for a property. So, on a hunch, I commented out all properties except the implied key field (SupplierPriceListID), and the Add() call works. Hm. Perhaps SQLCE doesn't like the `double?` fields? Nope -- uncomment them and things still work.
Then the culprit leaps out. The generator tool has annotated a `DateTime?` field with:

```csharp
[Column(TypeName = "date")]
```

Removing the annotation causes the error to subside -- and the SaveChanges() call on the context passes, on both MSSQL and SQLCE.

I thought I'd just make a record of this here as I found nothing particularly useful through much googling and eventually found and resolved the problem based on a hunch. Perhaps this post can save someone else a little mission. Of course the `DateTime?` value is truncated for the Date field, but hey, that's what the client originally intended...
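For reference, the offending property looked roughly like the "as generated" shape below, with the fix being simply to drop the attribute (the property name comes from the example above; the rest of the class is elided, and the second class name is purely illustrative to show both shapes side by side):

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;

public partial class SupplierPriceList
{
    // As generated: the TypeName hint sends EF off to look up a "date"
    // store type, which the SQLCE provider manifest doesn't appear to
    // have -- hence a Single() call over an empty sequence.
    [Column(TypeName = "date")]
    public DateTime? Date { get; set; }
}

public partial class SupplierPriceListWithoutAnnotation
{
    // Without the annotation, each provider picks its own storage type
    // and the insert succeeds on both MSSQL and SQLCE.
    public DateTime? Date { get; set; }
}
```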