Tuesday 6 September 2016

Legacy database testing with PeanutButter

Preamble

Recently, I've been asked about a strategy for testing code which works against an existing database. It's not the first time I've been asked about this -- and it probably won't be the last. I should have listened to Scott Hanselman's advice long ago and blogged it.

Better late than never, I suppose. Let's get to it!

The problem

You have some code which hits an MSSQL database to do something. It could be hitting a stored procedure for a report. It could be inserting or updating rows. Whatever the use-case, you'd like to have that code under test -- if not just for your own sanity, then because you're about to extend that code and are just plain scared that you're about to break something which already exists. This is a valid reason -- and it drove me to the strategy I'll outline below.

You may also simply wish to write your code test-first, but have this great existing legacy mass which you have to work with (and around) and you're just struggling to get the first test out.

Please note: this strategy yields integration-style tests rather than true unit tests. However, I'd rather have an integration test than no test, any day. This kind of testing also leads to tests which take a few seconds to run (instead of the preferred milliseconds) -- but I'd rather have slow tests than no tests.

Note that I'm tackling MSSQL because:
  1. It's a common production database
  2. If you were dealing with simpler databases like SQLite or SQLCE, you may already have a strategy to deal with this (though PB can still make it easier, so read on)
  3. I haven't yet found a nice way to do temporary in-process MySQL or PostgreSQL. You could use this strategy with Firebird, since server and embedded can even use the same file (though not concurrently, of course) -- but currently PeanutButter.TempDb has no baked-in Firebird provider. I guess I should fix that!

So, let's really get to it!

The general idea

Ideally, I'd like to have some kind of test framework which would spin up a temporary database, create all the structures (tables, views) that I need, perhaps the programmability (procedures, functions) I'd like to test (if applicable), and also provide a mechanism for loading in data to test against, so that I can write "when {condition} then {expectation}"-style tests.

I'd also like that temporary database to die by fire when my tests are done. I don't want to have to clean anything up manually.

Traditionally, running instances of database services have been used for this style of testing -- but that leaves you with a few sticky bits:
  1. Your development and CI environments have to be set up the same, with credentials baked into the test suite. Or perhaps you can use environment overrides -- still, authorization to the test database has to be a concern
  2. Test isolation is less than trivial and as the test suite grows, the tests start interacting with each other, even if there is cleanup at some point along the way.
  3. Getting a new developer on the team is more effort than it really should be, mainly because of (1) above. For the same reasons, developing on a "loaner" or laptop is also more effort than it should be. You can't just "check out and go".
Some prior strategies exist to cope with this, but they are unsatisfactory:
  1. A shared development/test database which accumulates cruft and potentially causes strange test behaviour when unexpected data is matched by systems under test
  2. Swapping out the production database layer for something like SQLite. Whilst I really like SQLite, the problem boils down to differences in functionality between SQLite and whatever production database you're using. I've come across far too many recently, in a project where tests are run against SQLite and production code runs against PostgreSQL. I've seen similar issues with testing code targeting SQL Server on SQLCE. Even if you have a fairly beefy ORM layer (EF, NHibernate, etc.) to abstract a lot of the database layer away from you, you're going to hit issues. I can think of too many to list them all here -- if you really want a list of the issues I've hit in this kind of scenario, feel free to ask. I've learned enough to feel fear when someone suggests testing on a database other than the engine you're going to deploy on.
    Sometimes you have tests which work when production code fails. Sometimes your tests simply can't test what you want to do because the test database engine is "differently capable".
  3. For similar reasons to (2) above, even if you're testing down to the ORM layer, mocked / substituted database contexts (EF) can provide you with tests which work when your production code is going to fail.
So we'd like to test against "real iron", but we'd like that iron to be transient.

PeanutButter to the rescue (:

The strategy that emerged

  1. Create a temporary database (PeanutButter.TempDb.(some flavor))
    1. Fortunately, when production code is going to be talking to SQL Server, we can use a LocalDb database for testing -- it has pretty-much all the functionality of SQL Server, certainly enough for application code (you'll be missing full-text search, for example, but the engine is basically the same).
  2. Create the database structures required (via migrations or scripts)
  3. Run the tests on the parts of the system to be tested
  4. Dispose of the temporary database when done, leaving no artifacts and no cruft for another test or test fixture.


[TestFixture]
public class TestSomeLegacySystem
{
  [Test]
  public void TestSomeLegacyMethod()
  {
    // Arrange
    using (var db = new TempDbLocalDb())
    {
      using (var conn = db.CreateConnection())
      {
        // run in database schema script(s)
        // insert any necessary data for the test
      }
      // Act
      // write actual test action here
      // Assert
      using (var conn = db.CreateConnection())
      {
        // perform assertions on the database with the new connection
      }
    } // our TempDb is destroyed here -- nothing to clean up!
  }
}

This is very raw, ADO-style code. Of course, it's not too difficult to extrapolate to using an EF context, since TempDb exposes a connection string and a CreateConnection() method: you could pass a new connection from CreateConnection() to your context's constructor, which would call into the DbContext constructor that takes a DbConnection -- setting the second (boolean) parameter based on whether or not you'd like to dispose of the connection yourself.
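For illustration, a minimal sketch of that EF route (hedged: MyLegacyContext is a hypothetical context of your own, and this assumes EF6's DbContext(DbConnection, bool) constructor):

using System.Data.Common;
using System.Data.Entity;

public class MyLegacyContext : DbContext
{
  // contextOwnsConnection: true means the context disposes of the connection for us
  public MyLegacyContext(DbConnection connection)
    : base(connection, contextOwnsConnection: true)
  {
  }
}

// inside a test body:
using (var db = new TempDbLocalDb())
using (var ctx = new MyLegacyContext(db.CreateConnection()))
{
  // arrange data via ctx, act on the system under test, assert via ctx
}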

I did this often enough that it became boring and my laziness kicked in. Ok, so that happened at about the third test...

And so TestFixtureWithTempDb was born: this forms a base class for a test fixture requiring a temporary database of the provided type. It has a protected Configure() method which must be used to instruct the base class how to create the database (with an initial migrator to get the database up to speed), as well as providing hints. For example, by default, a new TempDb is spun up for every test; but if you're willing to take care of cleaning out crufty data after each test (perhaps with a [TearDown]-decorated method), then you can share the database between tests in the fixture for a little performance boost. The boost is more noticeable when you also have EF talking to the temporary database, as EF caches model information per database on first access -- so even though you have the same structures in each iteration, a fresh database per test means EF goes through the same mapping steps for every test, adding a few seconds (typically about 2-5, I find) per test.

Indeed, if you have an EF context, you perhaps want to step one up the chain to EntityPersistenceTestFixtureBase, which you inherit with a generic type that is your DbContext. Your implementation must have a constructor which takes just a DbConnection for this base class to function. If you've created your context from EDMX, you'll have to create another partial class with the same name and the expected constructor signature, passing off to an intermediary static method which transforms a DbConnection into an EntityConnection; otherwise, just add a constructor.
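A hedged sketch of that EDMX partial-class trick (LegacyEntities stands in for your EDMX-generated context; the "res://*/" metadata resource path may need adjusting for your model):

using System.Data.Common;
using System.Data.Entity.Core.EntityClient;
using System.Data.Entity.Core.Metadata.Edm;

public partial class LegacyEntities
{
  public LegacyEntities(DbConnection connection)
    : base(CreateEntityConnection(connection), true)
  {
  }

  private static EntityConnection CreateEntityConnection(DbConnection connection)
  {
    // wrap the raw connection with the EDMX metadata embedded in this assembly
    var workspace = new MetadataWorkspace(
      new[] { "res://*/" },
      new[] { typeof(LegacyEntities).Assembly });
    return new EntityConnection(workspace, connection);
  }
}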

And this is where my laziness kicks in again: the test fixture for EntityPersistenceTestFixtureBase provides a reasonable example of usage. Note the call to Configure() in the constructor -- you could also make this call in a [OneTimeSetUp] method. If you forget it, PeanutButter will bleat at you -- but helpfully, instructing you to Configure() the test fixture before running tests (:

Some interesting points:
  1. Configure's first parameter is a boolean: when true, the configuration will create ASP.NET tables in the temporary database for you (since it's highly unlikely you'll have them in your own migrations). This is useful only if you're intending to test, for example, an MVC controller which will change behaviour based on the default ASP.NET authentication mechanisms. Mostly, you'll want this to be false.
  2. The second parameter is a factory function: it takes in a connection string and should emit something which implements IDBMigrationsRunner -- an instance of a class with a MigrateToLatest() method which performs whatever is necessary to build the required database structures. You could wrap a FluentMigrator instance, or you can use the handy DbSchemaImporter, given a dump of your database as scripts (without the USE statement!) to run in your existing schema (see the sketch after this list). When doing the latter, I simply import said script into a regular old .net resource -- when doing so, you'll get a property on that resource which is a string: the script to run (:
  3. You can configure a method to run before providing a new EF context -- when configured, this method will be given the EF context which is first created in a test so that it can, for example, clear out old data. Obviously, this only makes sense if you're going full-EF.
  4. If you have a hefty legacy database, expect some minor issues that you'll have to work through. I've found, for instance, procedures which compiled in SSMS but not when running the script for said procedure, because it was missing a semi-colon. Don't despair: the effort will be worth it. You can also try scripting out only the bare minimum of the target database that is required for your tests.
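As promised above, a rough sketch of a fixture wired up this way (heavily hedged: I'm assuming TestFixtureWithTempDb is generic on the TempDb type and that DbSchemaImporter's constructor takes the connection string and the schema script -- check the PeanutButter source for exact signatures; TestLegacyPersistence and TestResources.SchemaDump are stand-ins):

[TestFixture]
public class TestLegacyPersistence : TestFixtureWithTempDb<TempDbLocalDb>
{
  public TestLegacyPersistence()
  {
    Configure(
      false, // no ASP.NET tables required for these tests
      connectionString =>
        new DbSchemaImporter(
          connectionString,
          TestResources.SchemaDump)); // schema dump imported as a .net resource
  }

  [Test]
  public void SomeLegacyProc_GivenKnownData_ShouldReturnExpectedResult()
  {
    // arrange: insert known data; act: exercise the system under test;
    // assert: check results -- the temporary database is handled by the base class
  }
}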

Enough blathering!

Ok, this has been a post well worth a TL;DR. It's the kind of thing which would probably work better as a 15-minute presentation, but I suppose some blog post is better than no post (:

Questions? Comments? They're all welcome. If there's something you'd like me to go more in-depth with, shout out -- I can always re-visit this topic (:

Friday 26 August 2016

The story of M&m

(or, as I like to think of it, "how I wasted over an hour trying to figure out what was breaking the code I was learning, only to discover a convoluted chain of fail which ends (imo) at some code doing something other than what its name suggests, because someone had an opinionated idea that it should")


Brace yourself. The rant is strong with this post. There may be a useful nugget in there, but I can't guarantee anything.

One of the great cornerstones of our culture at Chillisoft is a commitment to learning. Learning new technologies, new techniques, new life skills. Part of that commitment is our "Deliberate Practice" time, set aside on Fridays from 11h00 to COB, in which we put aside production work and focus on learning new things.

The topics vary -- and sometimes the aim is just to experiment with technologies and figure out what may be useful. The current topic is that of TypeScript and Angular 2 and the current focus point with those two topics is for everyone to persevere in producing their own "TODO" app with TypeScript and Angular 2, hopefully from scratch, but using any available online resources.

One of those resources is the Tour of Heroes tutorial on angular.io. That tutorial goes through a lot of the basics of Angular, but suggests starting with the 5 Minute Quickstart tutorial, which uses SystemJS for building. Personally, I have a few gripes about the output from that method, mostly that you're expected to reference scripts within node_modules (so ship your entire node_modules folder with your app? Hand-pick the bits you need? Post-process with another tool?), but also that I didn't see a way to process other resources, like templates -- which I most definitely want to keep out of the code, for the day when I'm fortunate enough to once again have an html-capable designer on my team.

I could well be missing something fundamental with SystemJS -- feel free to drop comments if you can set me straight!

Having been recently exposed to Webpack and both wanting to learn more about that technology whilst avoiding the things I didn't like about what I'd seen of SystemJS, I was excited to see that the kind Angular tutorial fairies had provided a Webpack variant of the 5 Minute Quickstart tutorial.

Naturally, I dove right in.

I'd like to stop here for a moment and talk about tutorials.

So often, I've come across tutorials and other learning material which follows this kind of instruction pattern:
  1. Type something into an IDE or terminal. No explanation, just type it!
  2. Copy this file somewhere
  3. Run this command
  4. Copy these seven other files somewhere else
  5. Run this other command
  6. Congratulations! You are a 1337 developer!
Sorry to burst your bubble, but you're not. You can follow instructions. Bully for you. This is why I stopped at the end of chapter 3 of my MCSD materials, well over a decade ago: I got to a point where I'd followed all of the instructions and had been declared a proficient C++ developer by the materials -- but I had to disagree. I couldn't even repeat what I had done, let alone solve a brand new problem.

Without understanding the simple principles underneath, a lot of this seems like magick, and we end up copying revered configuration files from project to project, oblivious to the cruft they bring with them.

When you're going through a tutorial (like the ones linked above) and you're instructed to copy some files into your source folder, the very least you can do is type them out yourself. At least then you're forced to properly read them and start asking questions. Better yet, you might start wondering where these files come from and how you're going to create them next time.
In all of these tutorials, I see no mention of the following commands -- but I see their artifacts as "copy-and-paste-into-your-project-folder" instructions:
  • npm init
  • tsc --init
  • typings init
  • karma init
(When tooling doesn't have an init function, tutorials should point to documentation references like the Webpack configuration documentation. And perhaps walk you through some of the basics.)

The result is that juniors learning these techs have no idea how their fundamentals work -- and are unable to stand on their own two feet. I see a similar philosophy coming out of colleges, where graduates can program in C# but have no idea how a program is started by the OS, what JIT is, how to use a REPL -- simple things that really should be the bedrock of understanding.

As an analogy: I'm not saying that a professional driver should understand every part of the combustion engine -- but they should at least know about the roles played by the pistons, clutch, gears, fuel and braking systems.

But I digress -- back to the story of m&M... or was that M&m...

So, between the Tour of Heroes and the 5 Minute Quickstart (and the Webpack variant), I finally get something up and building. I'm feeling all proud of myself until I try my first two-way binding, using the Angular2 FormsModule. I read that I can use a template string or a templateUrl pointing to a file of my choosing -- and I consider the latter the better idea because of separation of concerns (and the whole designer theme above). So I point my templateUrl at a file in the same folder as the component, fire up the webpack dev server, bust out a browser and get an error like so:

Can't bind to 'ngmodel' since it isn't a known property of 'input'

Note that my template has the following code only:


<main>
  <h1>TODO</h1>
  <input type="text" [(ngModel)]="newTodoText" (keyup.enter)="addTodo()" />
</main>

Nothing particularly exciting, elegant or informative in there, and it takes me a while to realise that I've specified ngModel but the error is complaining about ngmodel.

I suspect that Angular is being case-sensitive about these "attributes" and that my camelCasing is being clubbed by something. The Angular devs have (apparently) stirred the pot a little with this decision, but I'm not too burned up about it -- it's a template, to be pre-processed: it doesn't have to be conformant html5.

Now, since I'm loading a template by url, I'm using the Webpack html loader -- that's what the Angular tutorial suggests, and it makes sense to me. I get the brainwave to use a string instead of the templateUrl -- just stuff the above code into a template attribute and try again... Success! So my suspicions are confirmed:
  1. Angular does care about case
  2. Somehow my ngModel is getting through to the code as ngmodel when the template is loaded from a file and that's causing the issue.
Ok, try another loader: raw. Success! It works! So my second suspicion is confirmed: it's the html-loader.  But this is supposed to be the "correct" way to load html -- the internet says so, and the internet is never wrong!

I suspect that html-loader is lower-casing attributes for some arbitrary reason -- but no documentation at https://www.npmjs.com/package/html-loader gives me any clues -- except that I see that html-loader is minifying by default, using https://www.npmjs.com/package/html-minifier. Ok, so perhaps minification could be considered to be part of loading, especially for packing. Fair enough.

I discover that html-minifier has lots of options with one of them being caseSensitive. Better than that, this option is disabled by default.

And this is where I start losing the plot.

Remember, folks, this package is a minifier. It has one job: make what comes in smaller and push it back out again. Unless the developers are privy to some fantastic new knowledge about bytes, I'm pretty sure that lower-casing text does not save space. Technically, proper html should have lower-cased attributes -- but that's really an opinion which belongs in another plugin (perhaps a tidy plugin?), not in something whose job is minifying. Just my opinion, mind you.

Just as confusing is that html-loader is using this extra plugin, so the only point of configuration for the minifier is html-loader itself, which can be configured with a query-string-like syntax (blech -- someone else has to grok all the options I put in there? horrid!) or via a property on the exported configuration from your webpack.config.js. Except that that mechanism only works for a select few configuration options. And html-loader doesn't have comprehensive documentation, so I guess I'll have to clone their code and go trawling through it -- a practice which is becoming annoyingly common for me.

(Side-note: I know that I'm not the best documenter either -- PeanutButter could do with online documentation -- I tend to answer questions rather, and that's a bad habit )':  The only defense I have is that at least I don't mislead anyone with partial documentation. You knew you would have to check out the code the moment you stepped through the door.)

Anyway, after trudging through code a bit and finding this github issue of interest, I eventually figure out that the magick to fix my problems can appear in my webpack.config.js, like so:


module.exports = {
  // ...
  module: {
    loaders: [
      // ...
      {
        test: /\.html$/,
        loader: 'html?caseSensitive'
      },
      // ...
    ]
  }
};

That little ?caseSensitive and the resultant m/M debacle took me an hour to figure out >_<

And at the end of the day, I'm still of the opinion that having a minifier change the case of code it outputs is a silly idea. Haters gonna hate, but I really don't see the point. At the very least, it produces "WAT?!" moments like the one above. Worse than that, it promotes the idea of writing sloppy code and getting a tool to fix it (for real html attributes, which should jolly-well be lower-case when the developer writes them!)

Tuesday 12 July 2016

Learn another language

A recent "Ask Slashdot" post posed the question about how often other developers switch and / or choose languages to develop in. One answer particularly caught my eye: the author proposed that there was a limit to the usefulness of learning a new language, particularly because a developer would probably stop using some or other language at some point and forget the language.

I have a different take:

Learning a new language allows you to see new ways of doing things (assuming you learn some of the ins and outs). Every language was designed because someone couldn't do what they wanted in an existing language or the world it "lives in" (the VM for interpreted languages, the base library / stdlib for compiled-to-bytecode ones). Ok, so every serious language -- leaving out the ones which are obviously just there because some people have a sense of humor (eg Shakespeare) and some people have a twisted sense of humor (Perl... ehm, I mean, whitespace!, yes, whitespace! And brainfuck! (: ).

The point is this: every time I've learned a new language, it's shaped how I write the languages I already know. F# changed how I write C#. C# changed how I write Javascript, and so on and so forth, turtles all the way down. In addition, it seems that the more languages I'm exposed to, the more abstractly I can think about concepts when coding -- much as I understand that learning more spoken languages trains the brain to think in a language-agnostic manner.

Sometimes learning a new language makes me appreciate the features I already know in other languages (I'm looking at you, R), sometimes learning a new language teaches you new respect for formatting and "just getting along" (Python). Sometimes a new language teaches you about constraints (Lua, particularly embedded), sometimes about being mad careful about memory (C/C++), sometimes it's about what can be cobbled together with existing pieces (ie, quick wins) -- bash / zsh / sh / batch  / PowerShell.

Sometimes learning a new language gives you insight into a technology stack which seemed mystically complex before learning it (PHP, classic ASP with VBScript/JScript; the web was all dark magick before I worked in it and classic ASP and PHP were where I cut my teeth -- not to mention, of course, Javascript for the front-end -- but the other two made me learn about HTTP and how the whole web stack works at a lower level than a lot of people can be bothered to understand these days).

Sometimes a language teaches you to think a level higher than the language itself -- Javascript, SQL, PHP -- anything with an eval() or equivalent and / or where you can inspect / reflect at runtime (Javascript, anything .NET, even, to a degree, COM-based programming). These teach you to think "meta" about what you're doing, instead of just cranking out another loop; also, understanding the ins and outs of Javascript lets you subvert Typescript when it gets in the way.

Sometimes it's about understanding the lowest-level stuff (asm, IL), sometimes about dealing in sets instead of procedures (SQL, R). Sometimes it's a reminder that more words may initially seem more readable, but just become a PITA (VB6 / VB.net -- and they are well different) or that even if a language seems like it's "for dummies", you can still accomplish the same thing as more "respected" languages (VB6 / VB.net again).

Sometimes learning a language makes you learn how to listen very carefully to its proponents, eg Tcl: "everything is a list" and "everything is a command" -- when that truly sinks in, there's a white-hot moment of clarity; it takes a day or two, but when it properly sinks in...boom. After that lesson, I give a little more thought to seemingly esoteric observations about a language.

Sometimes a language teaches you about the UNIX philosophy -- doing one thing and doing it well, then chaining together with other utilities which also do one thing well (F#, shell / batch languages) -- which shaped how I wrote "more capable" languages, producing smaller, simpler units instead of monolithic ones.

Sometimes a "language" is just descriptive (document languages: HTML, XML, JSON, YAML, CSS) but sometimes they can be bent (XSLT) or bent via pre-processing (SASS/LESS). Sometimes a language teaches you to embrace the warts for all the cool stuff (Javascript) and sometimes a language tells you about how it was designed when you have seen some of the others that inspired it (Ruby) -- which is interesting in and of itself.

Sometimes a language seems more long-winded than it actually is and pleasantly surprises you (Java, particularly v7 on Android had some nifty tricks up its sleeve). Sometimes hacking something in a language gives you enough chills to not want to go back there (for me: Erlang, though I'm open to trying again) -- and even that is a positive learning experience.

Sometimes learning a language is just a lesson in history, of how things were once done and how, despite the forces against them, people can leave a mark on technology that is visible from space. I'm looking at you, Grace Hopper, and your creation: COBOL. There's a language structured around making your code look like something you'd find in a ring binder somewhere. And much of the world's financial institutions ran on that for, well, decades.

There's not one of them I'd wish to unlearn to get the time back. Indeed, I'm trying to get together the self-motivation to learn either Rust or Go, perhaps Haskell or Scala. Perhaps even Perl 6 (Larry promises that it's all better now). No particular reason other than perhaps it teaches me how to use my current tools better. Or perhaps I find another language which suits particular problems very well.

And yes, apart from whitespace and brainfuck, which I've only read about (and, to be brutally honest, Erlang, which I hacked a little, but really, it doesn't count), I've written code in all of the above. Some languages have been home for a long time -- literally tens of thousands of lines. Some I've dabbled in and moved on from when I found something more suited to my immediate needs.

By "dabble", I mean only a few hundred lines or perhaps enough to understand how to port away from it (Perl, which never really managed to gain my approval, even though it's powerful stuff, but Perl seems to be a training ground for how to write line noise). But all have taught me something; all have been part of shaping my code as it dribbles out in the present, even if that lesson is how not to do it.

If you're a developer and you only know one language, you're doing yourself (and your team-mates and employer) a great disservice. Go out there and learn one of the wonderful, free languages that are available. If you don't know where to start, then pick the first one from this list that you don't already know:
  • Python
  • F#
  • Javascript (yes, there are people who don't know Javascript! Don't judge!)
Or any other language significantly different from the one you program in most -- find something that has a reasonably-sized community, many libraries (so you can actually get something done) and free tooling. There are heaps! Happy hunting (:

Thursday 17 March 2016

Update PeanutButter - now with less pain!


 (and possibly more fibre)

Necessity -- er, laziness -- is the mother of all invention. It's the reason PeanutButter (https://github.com/fluffynuts/PeanutButter) even exists and is on Nuget. But as PeanutButter has expanded in its modular fashion, one thing has bugged me: updating.

When I change one Nuget library, I update all Nuget packages to avoid any confusion about which version of what plays nicely with which -- to the point where I've even made each PeanutButter package which depends on another PeanutButter package depend on exactly the same version. I wanted an easy way to update all PeanutButter packages, since I release quite often -- indeed, pretty-much whenever I add any functionality.

An approach to this problem might be running update-package from the package manager console. Whilst I'm a fan of keeping libraries up to date, sometimes you can get unexpected consequences from this action such as breaking a site depending on an older version of ASP.NET MVC. I have no control over those other packages -- but I do control PeanutButter, so what I need is something more like update-peanutbutter.

And now I (and anyone else using PeanutButter) have it.

Thanks to an excellent tutorial by Phil Haack on his blog You've Been Haacked, I've added the command via a module loaded from the PeanutButter.Utils Nuget package init script. PeanutButter.Utils is one of the "lowest-down" packages, so chances are very good that if you're using any of the others, you're using PeanutButter.Utils. The change is available from version 1.2.15 and the easiest way to take advantage of it would be to update one of your projects to use the latest PeanutButter.Utils and then use update-peanutbutter from the package manager console to update all the other projects in your solution.
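In other words, something like this in the package manager console (MyProject.Tests stands in for whichever project you pick):

update-package PeanutButter.Utils -ProjectName MyProject.Tests
update-peanutbutter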

Happy hacking (:

Monday 29 February 2016

PeanutButter 1.2.0 Release


Hot on the heels of the PeanutButter.Utils.Windsor package release, I've updated PeanutButter for all projects and packages to target .NET framework version 4.5.2. If this doesn't work for you (and it really should), then don't upgrade past 1.1.44.

Why did I do it? .NET 4.5.1 reached EOL on 12 January 2016. Nothing should really have changed for any PB consumers -- indeed, I've also cleaned up the nuget package structure to reflect that all packages target .NET 4.5.x and made the change uniform across the board, thanks to a Visual Studio extension which made it super-easy: Target Framework Migrator Extension (https://visualstudiogallery.msdn.microsoft.com/47bded90-80d8-42af-bc35-4736fdd8cd13)

Anyway, just thought that the few people who read about this stuff should get a heads-up (:
 

New! PeanutButter.Utils.Windsor

 

If you've used dependency injection in .NET at any point in your life, you've probably heard of, or used, Castle.Windsor (http://www.castleproject.org/) in one or more projects. Personally, I've used Autofac, Caliburn Micro's SimpleContainer and the WindsorContainer. They're all quite good -- I guess I've just fallen back on Windsor, most especially in web projects, because it's powerful and quite easy to use.

However, there are some common functions I have to perform with the container, namely:
  • Registration of Controller-based classes for dependency injection on my MVC controllers
  • Registration of one-to-one service-to-implementations which my code provides (which, let's face it, is probably around 95% of the registration code that we use)
As with other PeanutButter modules, PeanutButter.Utils.Windsor is born out of a desire to stop writing the same (boring) code over and over and spend more time writing new (hopefully interesting) code. As such, please welcome the following extension methods available in PeanutButter.Utils.Windsor, for an IWindsorContainer:

  • RegisterAllControllersFrom(params Assembly[] assemblies)
    • Searches through the provided assemblies for controllers, matching them by having the System.Web.Mvc.Controller class in their ancestry. Matching is done by class, namespace and assembly name only, so the package doesn't require that you depend on System.Web.Mvc to use it (meaning you can still take advantage of other functionality without pulling in the entire web stack).
  • RegisterAllOneToOneResolutionsAsTransientFrom(params Assembly[] assemblies)
    • Searches through provided assemblies for all interfaces which are implemented by one non-abstract class and registers the interface as a service resolvable to one transient instance of the non-abstract class. I find that this is what I want about 95% of the time as I'm using DI more for a testing mechanism (and to make dependency chains not my problem at run-time) than anything else.
      This method will also ignore any previously-registered services, so the idea is to run it after any more specific registrations you may have such as services you may want registered as Singleton or PerWebRequest (eg EF database contexts)
Install with nuget, in the console:
install-package PeanutButter.Utils.Windsor
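
As a rough sketch of how these might hang together (hedged: HomeController and the IMyDbContext registration are stand-ins from your own solution):

// using Castle.Windsor; using Castle.MicroKernel.Registration;
var container = new WindsorContainer();
// more specific registrations go first, since the one-to-one scan
// ignores anything already registered:
container.Register(
  Component.For<IMyDbContext>()     // hypothetical service...
           .ImplementedBy<MyDbContext>()
           .LifestylePerWebRequest()); // ...registered per web request
container.RegisterAllControllersFrom(typeof(HomeController).Assembly);
container.RegisterAllOneToOneResolutionsAsTransientFrom(typeof(HomeController).Assembly);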

Thoughts? Problems? http://github.com/fluffynuts/PeanutButter is where you can raise issues or make pull requests.


Monday 8 February 2016

PeanutButter is updated to NUnit 3

NUnit (https://github.com/nunit/nunit) has been updated to version 3 for a little while now, so I thought it best to move PeanutButter forward as the test utilities which use NUnit for assertions and such have required the user to explicitly install NUnit 2.6.4.

In particular, this will affect any new installations or updates to projects using the following:
  • PeanutButter.TestUtils.Generic
    • Which includes PropertyAssert
  • PeanutButter.TestUtils.Entity
    • Which includes the helpers around testing database persistence of your entity models
What this means is:

  • If you're not depending on any of these PeanutButter modules, of course nothing changes for you. Though I'd still recommend moving forward to NUnit 3 at some point when you have some time. There are some changes and some things to deal with, so don't rush it.
  • If you have a project which is currently stable and not in active development, don't bother updating anything unless you really want to stay up-to-date.
  • If you're going to install-package PeanutButter.<something>, you may find that you're automatically updated to NUnit 3. Mostly, it's not a problem and the differences are easy to work around. If you get a PeanutButter package with version 1.1.x instead of 1.0.x, you're going with NUnit 3. If you don't want to go this route, install the last 1.0.x version: 1.0.155. You may have to specify an NUnit version to install, ie:
    install-package NUnit -ProjectName <your project> -Version 2.6.4
    I used to make my packages not depend on specific versions of anything, but I've recently started adding minimum versioning to packages to try to alleviate some of this headache.

