## Wednesday, 10 December 2014

### Let reason prevail

Hark! 'Tis the call of a higher-order language. It beckons with sweet words of reduced time solving ancient, low-level problems! The siren speaks true!

Seriously though...

Higher-order programming languages afford the competent programmer a way to solve more high-level problems in the same timeframe than she could with a lower-order language. Simply put, higher-order languages (like C++, C#, Python, Ruby, PHP, etc.) afford the programmer the ability to skip over the low-level "move this byte to this address" kind of programming so that she can solve more interesting and/or more lucrative problems.

There is a cost though: iron. These higher-order languages require smarter compilers and, typically, more RAM and CPU. It's a cost we're happy to pay though -- iron is cheaper than development.

But when someone starts making outlandish claims that a higher-order language is more proficient than a lower-order one (https://isocpp.org/blog/2014/12/myths-1 -- see the string concatenation example in the first section), the reasonable programmer doesn't just gulp that down, even if it does come from the father of C++. Actually, especially if it does come from the father of a higher-order language, since he would have reason to pad out his results (or pretend to be lazy) to make his progeny all the more appealing.

Aside: before we go any further, a disclaimer: I like C++. I like C. It's OK to like both and recognise their strengths. What's NOT OK (imo) is to spread misinformation to highlight the language of your preference. Have honest reasons for your preference -- by all means! -- and be objective about comparisons. And the discourse continues!

So let me make a really bold statement:

Any proficient code in a higher-order language can only hope to be (at best) as proficient as proficient code in a language of lower-order for solving the same problem.

Why?

Let's take the string vs char* example from above:

std::string has to be implemented (at some point) around a buffer of memory. I don't care if it's char*, wchar_t* or whatever. It's a buffer which, at some point, was obtained via malloc() (even if you want to say it was obtained by new char[], that still boils down to essentially a malloc, so let's stop arguing semantics). The C++ compiler affords us the ability to overload operators, such that we can do:
string1 + string2
and get another string. Under the hood, this is allocating a third string object and the associated memory and doing some memory copying. One way might be to malloc() strlen(string1) + strlen(string2) + 1 chars (the extra one for the null terminator), then strcpy() and strcat() in the parts. There are quite a few ways this could be done, but this is one.
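That malloc/strcpy/strcat approach can be sketched directly (a minimal illustration of the idea, not the actual implementation behind any particular library; error handling kept to the bare minimum):

```cpp
#include <cstdlib>
#include <cstring>

// Concatenate two C strings into a freshly malloc'd buffer.
// The caller owns the result and must free() it.
char* concat(const char* a, const char* b)
{
    char* out = static_cast<char*>(std::malloc(std::strlen(a) + std::strlen(b) + 1));
    if (!out) return nullptr;
    std::strcpy(out, a); // copy the first part, including its null terminator
    std::strcat(out, b); // append the second part over that terminator
    return out;
}
```

This is essentially the kind of path a std::string implementation has to take under the hood -- minus all the bookkeeping it layers on top.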

Now the problem lies here: in an opaque, higher-order implementation of string concatenation (for example), the best outcome we could hope for is the one which is fastest in C, i.e. the one we would discover by trying all paths, including strcat, memcpy, strcpy, etc. So let's assume that the best path was chosen for std::string's + operator overload. That still makes it only as fast as the best implementation in C. Take a step back and realise that the + operator may also do clever things like allocate more memory than required to save on a realloc() later, as well as rudimentary bounds-checking or what-have-you, and it's easy to see why the string variant is 10-20x slower (https://github.com/fluffynuts/cpp_vs_c.git). Oddly enough, the disparity was greater in a win32 version I did earlier today, where the sprintf() version took only 48ms for 32768 iterations vs over 600ms for the string version. The code linked here reports the following results on my Linux machine:

C++ function: 32768 runs took 5827 ms
C function: 32768 runs took 706 ms
C function (sprintf): 32768 runs took 4826 ms

Which still shows that the C++ version is nearly 10x slower. Some of the difference I've experienced between platforms may be due to Microsoft optimisations, as well as the stdlib being (in my experience) much slower on Windows than on Linux (not counting boost, which I haven't used, but which benchmarks well). The C++ version may optimise for frequent use better than the quoted C version -- but the C programmer is free to update her code accordingly, as required. Again, remember, we're talking about proficient code solutions, so when you change the parameters of the argument, the code is free to change too.
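If you want to try this sort of measurement yourself, a tiny harness along these lines will do (a hypothetical sketch, not the code from the linked repository; the per-iteration workloads here are my own stand-ins):

```cpp
#include <chrono>
#include <cstdio>
#include <string>

// Run f(i) for 'runs' iterations and report elapsed wall-clock milliseconds.
template <typename F>
long long time_ms(F f, int runs)
{
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i) f(i);
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}

// Assumed workload: build "item-<n>" via std::string concatenation...
long long bench_cpp(int runs)
{
    return time_ms([](int i) {
        std::string s = std::string("item-") + std::to_string(i);
        (void)s;
    }, runs);
}

// ...vs snprintf into a fixed C buffer.
long long bench_c(int runs)
{
    return time_ms([](int i) {
        char buf[32];
        std::snprintf(buf, sizeof buf, "item-%d", i);
    }, runs);
}
```

Remember to compare optimised builds on the same machine; as noted above, absolute numbers vary wildly between platforms and compilers.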

This shouldn't be surprising. The C++ code has to deal with the generic case (and provides a lot of extra functionality which is probably worth the cost), where the C version only has to solve the problem at hand. But let me re-iterate:

Any proficient code in a higher-order language can only hope to be (at best) as proficient as proficient code in a language of lower-order for solving the same problem.

Note the use of the word "proficient". If I write shitty C code, I bet you can write good C++ that is faster. Same goes for any other pairing: if I write shitty low-order code, I'm sure you can write good high-order code which out-performs my shitty code.

The request I have to the programming community is this:

Please be honest and stop trying to win a "my language is better than yours" argument with pure lies and FUD. When speaking of well-written, proficient code, Ruby/Python/.NET IL/PHP/C++/whatever is NOT faster than C or assembly. The lower you go, the more specific your instructions can be and the more efficient the overall run can be.

We all accept the costs of higher-order languages because of what they offer us:
• Quicker to code solutions to the problems which are interesting and/or lucrative, therefore cheaper (overall) than the iron required to run the output
• Safe memory handling
• Rich libraries
• Abstraction from the iron
• Easier-to-read (and therefore maintain) code
• And a host of other benefits

I'm open to challenges on this though. Provide a problem which is relatively small to solve in a lower-order language which you think is faster (not smaller or more elegant) in a higher-order language, and I'll see what I can do to prove my point. Remember: a small problem, like something mathematical or string manipulation.

## Thursday, 30 October 2014

### Javascript and promises

I wrote a little tutorial a while back on using promises within Javascript, as this seems to be a point of confusion for some, especially those new to promises. The tutorial is not meant to be ultimately comprehensive, but hopefully it should at least be useful -- and it made me polish up a promises library of my own for learning and display purposes. You can get all the code here: https://github.com/fluffynuts/js-promises-tutorial and indeed that repository is what I'm using for the iframe source below, using RawGit. Mainly because it relies on a bunch of Javascript and JSON files to work, but also because I'm way too lazy to maintain multiple versions of this, and this tutorial may be revisited for a little spit 'n polish on occasion. (EDIT: this entry was updated to include example usage of the ES6 native Promise prototype.)

## Tuesday, 23 September 2014

### Shout-out to the kind folks at JetBrains

The other day I received a reminder mail that my R# license is due to expire in a few weeks. Now, I bought that when I was doing some work for a personal client and I really needed both VB.NET and C# functionality from my R#. So unfortunately, despite the kind offer from my company to use one of the roaming licenses they have for C#, that wasn't going to cut the mustard. I bit the bullet and got my own R# and reaped the productivity benefits, even though I'm quite sure I use only around 40% of the functionality offered by R#.
Anyways, I responded to the mail with a query regarding PeanutButter and the JetBrains R# OpenSource licensing program and... have been awarded an OpenSource (ie, no-charge) license for this year for R#, Full Edition. Thanks JetBrains! If you use C# and/or VB.NET (and, now, I hear C++ is in EAP) for development, in Visual Studio, Resharper can probably boost your productivity with navigation, code cleanup and refactoring functionality. Head on over to http://www.jetbrains.com/resharper/ to check it out if you don't use it already. The pricing may seem like quite a bit (and I'll freely admit that it's not cheap), but you can make that back quite quickly with the time it saves you, especially if you program TDD-style and take pride in your code, refactoring it to make it expressive and therefore more easily maintained.

## Wednesday, 17 September 2014

### Browsers for everyone!

The browser wars are over!

Or are they? Will they ever be? I certainly hope not. With Opera deciding to bow out of the render wars and move to a WebKit-derived (now Blink) backend instead of Presto, my heart sank a little. Presto had, in my opinion, been responsible for all of the other browsers dragging themselves, kicking and screaming, to support the latest CSS standards. It was just plain embarrassing to be smashed on the ACID test by one of the "little guys". Microsoft, Mozilla and Google had to step up, had to (at least try to) keep pace.

Competition is good, most especially in the software realm, most especially for the most important player in the software realm. No, not corporations selling you their software and shiny hardware products. Not institutions who run the world of finance and could cause the collapse of life as we know it with a few bad lines of code. The most important person in the software realm is... you. The user. And the most powerful tool at your disposal is choice. As users are free to choose, so they are free to ditch software which is left in the dust by other software. It's this competition which improves software all-round, ultimately for the benefit of the user (and hopefully, for the companies involved, adds something to their bottom line. Somehow).

So enter another contender: Maxthon MxNitro. Maxthon has been in the game for a while but, like Opera before them (and still today), they occupy a small segment of the browser market. You need something really significant to get the average person to bother to not use the system default browser -- hence the anti-competitive lawsuits against Microsoft (though, in their defense, wouldn't it be rather shameful to sell you an operating system without a browser? About as shameful as giving you an OS without an office suite or baked-in programming tools, but I digress...)

I've used Maxthon before and it was more of an exercise in curiosity than anything else. Finding out about MxNitro, I had to see what they were going on about with the speed they claim the browser has. There are a bunch of numbers floating around; information like "30% faster than Chrome 37" and such. Interesting claims, and no explanations for how, except some hand-waving about how hard they worked to lighten it up and optimise it.

My verdict, if you care:

Startup is fast. I mean, not just fast, but zippy-gawrsh-how-the-heck-did-they-do-that fast. A first run of Chrome on my machine (i7, 8GB RAM, no SSDs, so I'm a bit penalised there) takes around 8-10 seconds. Closing (checking that Chrome isn't running) and re-opening brings that down to around 2. MxNitro is available for use the moment the Start screen has faded from view, as if it were waiting there the whole time. I had to check several times for some sneaky background process or service -- none. The install was the same: I double-clicked the installer and had a working browser up in under a second. I don't even know how they managed to unzip the required files to the install folder (in your roaming profile) that quickly. I'm baffled -- and impressed. I want to know more...

Memory usage, on the other hand: terrible, just like Chrome. There are reasons I don't use Chrome as my daily driver, but the largest is simply memory usage. Chrome can happily consume a few gigs of memory where the same tabs in Firefox will be well under a gig. Even 2 or 3 tabs in Chrome start getting up to the gig mark. Opening the exact same 5 tabs in Firefox and Chrome got me a memory usage of 431MB in Firefox and just shy of a gig in Chrome (disclaimer: to get Chrome's usage, you have to add up the memory of all its processes, so I just added roughly; I'm not trying to exaggerate with the "just shy of a gig" comment, but it was in the high 900s). MxNitro also uses a multi-process model. Roughly adding up the memory used for the same tabs came in at around 700MB, which really is where Chrome would probably be if I stripped out all addons and the dev tools (another disclaimer: I have around 10 addons in both Firefox and Chrome). So, just when I was thinking that this might be a good candidate for my aging Core2 Duo laptop (which only has 2GB of RAM, not upgradeable )': ), I have to think twice. I did give it a spin there -- 700MB for gmail, facebook and twitter open. Hm.

It has a very minimal look and feel in the standard retina-blasting white of many apps these days. It's also rougher than a badger's arse:

• No way to search a page for text
• No dev tools at all -- indeed, I was surprised to find a "view source" option
• Crashed when I tried to scroll with my touchpad. Every time
• No spell-checker -- blogging (like this article) just doesn't feel as safe. I'm going to have to save this draft and reload in another browser to check it
• Indeed, no preferences whatsoever
• Weird window control-box (minimise, maximise, close) which only becomes available when you hover over it, though for no really good reason as the space isn't put to better use. It's just a little bit of "wat?" to overcome
• No Flash

The bare-bones minimal interface will probably work well for some people -- in fact, for a lot of people. There are a lot of people out there on the web who don't know about, don't need to know about, or simply don't care about things like dev tools or extensions. They don't know about ad blockers (though they really should be interested) or sync or many of the shiny features of Firefox and Chrome (and even IE, now that it's been dragged to version 11). The problem is that the people for whom this interface will work are

• The most likely to give up on a browser which doesn't have basic features like in-page search
• The most likely to give up on a browser which crashes
• Quite likely to need Flash and unable to understand why their Facebook videos aren't playing

Not that I could be any happier that there's no bundled Flash (indeed, I welcome the long-overdue demise of Flash) -- just that it's going to hamper the people most likely to use this.

So will I use it? Probably not much, if at all. It's a bit of a novelty for me at the moment -- "oooo, look how fast it is!". But then again, it starts and acts about as fast as FooBrowser, a minimal, keyboard-driven browser I wrote with PyQt ages ago. My browser suffered from the same Flash issue, but at least find-in-page worked. I think.

Nope, I'll be sticking with Firefox (Aurora channel). Chrome has better dev tools (so I'll fire it up for dev, especially remote-debugging Cordova apps), but is too much of a memory hog. IE has just lost my trust. But bigger than that, Firefox has the addons I want (some of which I can get elsewhere, sure): an ad-blocker, Stylish and a Youtube downloader. The last one is another big reason why I can't use Chrome as my daily browser, in addition to the silly full-screen view which means that tabs are obscured when I have WinAmp open -- I still don't get why the Chrome team is SO opposed to allowing a small titlebar on Windows; you can work around it on Linux because of the WM, but on Windows it's better to resize the window to fill the screen than to use maximise. Daft. And I'm far from alone in this quest. But a Youtube downloader -- that I can't do without. Not because I'm some naughty pirate or video hoarder, but simply because I want to watch unhindered in at least 720p and Youtube

• insists on picking "the best" resolution for me every time, which I have to toggle off
• insists on turning captions on with every video -- yet another thing to toggle off
• has become an ad spawn-point. I understand the site is financed by advertising. I'm ok with an ad here and there. But every 5 minutes in a video? Bugger off
• still often insists on playing through Flash, which runs terribly and sometimes takes down my browser
So I download, watch, delete. But the path of digression is here again....

The verdict?

Maxthon MxNitro is interesting because of the speed, but nowhere near ready for public consumption yet. With a bit of work, it could become a good replacement browser for mom-and-pop types with older hardware, providing they have enough RAM. The really good part is that hopefully the gauntlet has been thrown down and other browser houses will follow suit. I especially hope that Mozilla does -- Firefox wins the RAM wars but trails a little in the speed department.

## Wednesday, 2 July 2014

### A little PeanutButter for your MVC

ASP.NET MVC is one of the best things that I could possibly think of to have happened to the web from a Windows-centric point of view. At last, there's a way to sweep that abomination that is WebForms under the proverbial carpet.

Like anything, though, it does have its caveats. You have useful constructs like Script and Style Bundles -- but no easy way to test them from a CI environment. Also, script inclusion becomes a bit more manual than it needs to be when viewed through the lens of AMD loaders like require.js. But you do get the advantage of bundling, in that a single request can satisfy multiple code/style requirements. (Let me be clear here: AMD loaders are good. I like require.js. But it does take a little more effort to set up and get working correctly, and you don't (without even more configuration) get the hit-reduction that bundles provide. Both methods have their advantages. Select your tools for your tasks as they fit best for you, on the day.)

PeanutButter.MVC was built out of a need to make those processes slightly more testable and elegant.

First of all, there are two facade classes:
They wrap ScriptBundle and StyleBundle accordingly and implement interfaces of the expected names (IScriptBundle and IStyleBundle). You would use them like you'd use ScriptBundle and StyleBundle instances from the MVC framework. However, since they implement an interface, you can also create substitutes for them so that you can test-constrain your bundle registration process. This is important because I found that it was not uncommon to add a new javascript or css file to the solution, build-and-run, and be surprised that my changes weren't in play -- until I realised that I hadn't bundled them.

For example, I have the following method on my BundleConfig:

public static void RegisterBundles(BundleCollection bundles,
    Func<string, IScriptBundle> withScriptBundleCreator = null,
    Func<string, IStyleBundle> withStyleBundleCreator = null)
{
    withScriptBundleCreator = withScriptBundleCreator ?? (bundleName => new ScriptBundleFacade(bundleName));
    withStyleBundleCreator = withStyleBundleCreator ?? (bundleName => new StyleBundleFacade(bundleName));
    // ... actual bundle registrations follow here ...
}


We can see that the function would ordinarily be invoked without lambda factories to produce Script- and StyleBundleFacades, so it produces its own, very straight-forward ones. However, the tests that constrain this method can inject lambda factories so that the bundling methods can be tested to ensure that they include the required bundles from the relevant sources. Indeed, the tests are quite straight-forward:
[TestFixture]
public class TestBundleConfig
{
    private Func<string, IScriptBundle> CreateSubstituteScriptBundleCreator(List<IScriptBundle> withTrackingList)
    {
        return (name) =>
        {
            var scriptBundle = Substitute.For<IScriptBundle>();
            scriptBundle.Name.Returns(name);
            var includedPaths = new List<string>();
            var includedDirs = new List<IncludeDirectory>();
            scriptBundle.IncludedPaths.ReturnsForAnyArgs(args => includedPaths.ToArray());
            scriptBundle.IncludedDirectories.ReturnsForAnyArgs(args => includedDirs.ToArray());
            scriptBundle.Include(Arg.Any<string>()).ReturnsForAnyArgs(args =>
            {
                return new Bundle("~/");
            });
            scriptBundle.IncludeDirectory(Arg.Any<string>(), Arg.Any<string>())
                .ReturnsForAnyArgs(args =>
                {
                    includedDirs.Add(new IncludeDirectory(args[0] as string, args[1] as string));
                    return new Bundle("~/");
                });
            scriptBundle.IncludeDirectory(Arg.Any<string>(), Arg.Any<string>(), Arg.Any<bool>())
                .ReturnsForAnyArgs(args =>
                {
                    includedDirs.Add(new IncludeDirectory(args[0] as string, args[1] as string, (bool)args[2]));
                    return new Bundle("~/");
                });
            // track the substitute so the test can inspect all created bundles
            withTrackingList.Add(scriptBundle);
            return scriptBundle;
        };
    }

    private Func<string, IStyleBundle> CreateSubstituteStyleBundleCreator(List<IStyleBundle> withTrackingList)
    {
        return (name) =>
        {
            var styleBundle = Substitute.For<IStyleBundle>();
            styleBundle.Name.Returns(name);
            var includedPaths = new List<string>();
            styleBundle.IncludedPaths.ReturnsForAnyArgs(args => includedPaths.ToArray());
            styleBundle.Include(Arg.Any<string>()).ReturnsForAnyArgs(args =>
            {
                return new Bundle("~/");
            });
            withTrackingList.Add(styleBundle);
            return styleBundle;
        };
    }

    [Test]
    public void RegisterBundles_ShouldRegisterExpectedScriptAndStyleBundles()
    {
        //---------------Set up test pack-------------------
        var collection = new BundleCollection();
        var scriptBundles = new List<IScriptBundle>();
        var styleBundles = new List<IStyleBundle>();
        //---------------Assert Precondition----------------

        //---------------Execute Test ----------------------
        BundleConfig.RegisterBundles(collection,
            CreateSubstituteScriptBundleCreator(scriptBundles),
            CreateSubstituteStyleBundleCreator(styleBundles));

        //---------------Test Result -----------------------
        Assert.AreNotEqual(0, scriptBundles.Count);
        Assert.AreNotEqual(0, styleBundles.Count);

        Assert.IsTrue(scriptBundles.Any(sb => sb.Name == "~/bundles/js/shared" &&
            sb.IncludedDirectories.Any(d => d.Path == "~/Scripts/js/shared" &&
                d.SearchPattern == "*.js" &&
                d.SearchSubdirectories == true)));
    }
}


All good and well. We can ensure that our MVC application is creating all of the required bundles. It would also be super-neat if we could streamline the inclusion process. Of course, we can.

PeanutButter.MVC also includes a utility called AutoInclude. If we decide to set up our bundles under /bundles/js/{controller} (for scripts for any action on the controller) and /bundles/js/{action}, then a lot of inclusion work can be done for us in our base _Layout view with a single line (assuming you've included the relevant @using clause at the top):

@AutoInclude.AutoIncludeScriptsFor(ViewContext)


AutoInclude uses the convention of scripts sitting under folders with names corresponding to the controller, with the casing of the script folders lowered to be more consistent with how script folders are named. This one line, in conjunction with judicious bundling (and testing of that bundling!), has allowed all views to just "magically" get their relevant scripts. In my project, I can create script bundles which include similarly-named folders and not have to worry about how my views get relevant logic scripts from there on out.

So, for example, I might perform registrations like the following (where scriptBundleCreator is a passed-in Func<string, IScriptBundle>):

bundles.Add(scriptBundleCreator("~/bundles/js/policy")
    .IncludeDirectory("~/Scripts/js/policy", "*.js", false)
    .IncludeDirectory("~/Scripts/js/policy/accept", "*.js", false)
    .IncludeDirectory("~/Scripts/js/policy/edit", "*.js", false));


Now, from the Policy controller, I have two actions, Accept and Edit. Both have their relevant views, of course, and the AutoInclude is done automatically for them by virtue of the fact that they use the default _Layout.cshtml. Under my Scripts folder in my project, I have a file structure layout like:

policy
policy/common.js
policy/accept
policy/accept/accept.js
policy/accept/proposalEmailer.js
policy/edit
policy/edit/clientDetailsDisplayUpdater.js
policy/edit/edit.js
policy/livePolicyUpdater.js


And the result is that the Policy/Accept view gets common.js, livePolicyUpdater.js, accept.js and proposalEmailer.js. The Policy/Edit view gets common.js, livePolicyUpdater.js, clientDetailsDisplayUpdater.js and edit.js.

So now I'm free to create small, easily-testable javascript files (which I'll test with Jasmine and whatever works best for my purposes (eg karma or the Resharper unit test runner -- which works, mostly, with Jasmine, but has a few rough edges)). And when I want them in a page, I just drop them in the appropriate folder to get them on the next compile/debug run. And because of bundling, the end-user doesn't have to get many little hits for javascript files, instead, just getting two per view.

Apart from the testability of it and the simplicity of adding another piece of javascript functionality to the site, there's a huge bonus in grokkability. Let's face it: one of the reasons why tests are good on your code is for when a new developer comes onto the project (or some unlucky person is tasked with maintaining some code they had nothing to do with). Tests provide feedback for when something breaks, but also provide a communication mechanism for the new developer to figure out how discrete parts of the overall machine work. To the same end, understandable symbol and file naming and unsurprising project layout can really help a new developer (or you, when you have to get back onto the project for maintenance or extension a couple of months down the line...)

Anyway, so there it is: PeanutButter.MVC. Free, small, doesn't depend on much, and hopefully useful. I'm certainly reaching for it the next time I'm in MVC land.

## Tuesday, 1 July 2014

### INI files are dead... Long live INI files!

There was a time when INI files ruled the world of configuration. Since then, we've been told on numerous occasions by many people that we should rather be using XML. Or a SQLite database. Or something else, perhaps.

Now, don't get me wrong -- SQLite has its merits and XML is great if you want to store hierarchical data or if you need to configure your .NET application (which happens to already speak the lingo). But the reality is that INI serves quite well for a number of uses -- indeed, it can also be used to store hierarchical data, as you'd see if you checked out the innards of a .reg file. In particular, INI files are dead-easy to parse, both by machine and man -- and the latter is an advantage if you have nothing to hide and no need for quick read/write (where you might, for example, use SQLite). It's also a simple file-store so platform and library requirements are minimal. It's probably the easiest way to store structured configuration data and I still use it for projects unless I absolutely have to use something else.
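As an aside, to show just how little machinery "dead-easy to parse" implies, here is a minimal parser sketch -- in C++ rather than the C# of PeanutButter.INI, and ignoring niceties like comment persistence, escaping and case-insensitive lookups:

```cpp
#include <map>
#include <sstream>
#include <string>

// Sections map to key/value maps; an empty section name holds any
// keys that appear before the first [section] header.
using IniData = std::map<std::string, std::map<std::string, std::string>>;

static std::string trim(const std::string& s)
{
    auto b = s.find_first_not_of(" \t\r");
    auto e = s.find_last_not_of(" \t\r");
    return b == std::string::npos ? "" : s.substr(b, e - b + 1);
}

IniData parseIni(const std::string& text)
{
    IniData data;
    std::string section, line;
    std::istringstream in(text);
    while (std::getline(in, line))
    {
        line = trim(line);
        // skip blanks and comments
        if (line.empty() || line[0] == ';' || line[0] == '#') continue;
        if (line.front() == '[' && line.back() == ']')
        {
            section = trim(line.substr(1, line.size() - 2));
        }
        else
        {
            auto eq = line.find('=');
            // a key with no '=' is stored with an empty value
            std::string key = trim(line.substr(0, eq));
            std::string val = (eq == std::string::npos) ? "" : trim(line.substr(eq + 1));
            data[section][key] = val;
        }
    }
    return data;
}
```

A few dozen lines, and man-readability comes for free -- which is rather the point.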

A relatively small, simple part of the PeanutButter suite is the INI reader/writer/storage class PeanutButter.INI.INIFile. Usage is quite simple:

var ini = new INIFile("C:\\path\\to\\your\\iniFile.ini");
var someConfiguredValue = ini["colors"]["FavouriteColor"];
ini["Geometry"]["Left"] = "123";
ini.Persist();


In the snippet above, we instantiate an INIFile class with a path to a file to use as the default persistence store. This file doesn't have to exist right now (and if it doesn't, it will be created with the Persist() call).

INIFile presents the data present in the source as a Dictionary<string, Dictionary<string, string>>, with indexing on the INIFile instance itself, making the syntax quite easy to use. Sections are created as and when you need them. Section and key names (such as "Geometry" and "Left" above) are case-insensitive to make access easier (and more compliant with the behaviour of the older win32 calls for INI handling).

The parser tolerates empty lines and comments as well as empty keys (which are returned as an empty string).

Of course, you don't have to have a backing store to start with (or at all), and you can always override the output path with a parameter to Persist(). In addition, you can re-use the same INIFile, loading in a file from another path with the Load() method or loading with a pure string with the Parse() method.

Once again, the class has been developed on an as-required basis. It does much of what I want it to do (though I'd like it to persist comments on re-writing; that may come later). I hope that it can be of use to someone else too. I've lost count of how many times I've implemented an INI reader/writer. Hopefully, this is one of the last...

## Monday, 30 June 2014

PeanutButter was born out of a desire to share code between projects: in particular, to take advantage of utilities and fixes introduced during the development process on a range of concurrent projects, in the realm of code which was specific to none.

Quite basically, I had some code which was already figured out and worked fairly well for what it was intended to do and I was just plain too lazy to maintain multiple versions of that code. I thought it would be great if I could use, say, the Nuget packaging system to spread the most up-to-date versions of code between projects, if only packaging for Nuget wasn't such a pain. The CLI tools work, but aren't easy to use. The official GUI for Nuget packaging looks like a revenge unleashed on the world by an angry development manager trying to prove he can code. No offense.

Ok. Offense. The Nuget GUI tools are horrid and the CLI packaging mechanism is a PITA. Thank goodness for the Nuget Package Template extension for Visual Studio. With a simple post-build event from each relevant package, I can push all 11 (currently) PeanutButter.* Nuget package updates after having run all my tests and switched to Release mode. Win!

First, taking a step back. If you've been using a decent ORM like Entity, NHibernate, Habanero or even just Linq-to-SQL, you don't have a need for PB's DatabaseHelpers. You can save yourself the effort of reading the rest of this article. One of the primary reasons for using an ORM (imo) is to abstract the SQL away from the application. ORMs which do Linq well are especially good at this -- and that's one reason why I've been a fan of Entity (warts and all) for quite some time. None of them are perfect but they all can take a lot of the pain of dealing with direct database calls away. In particular, an ORM allows you to:
• Switch database backends (eg MSSQL/Firebird/MySQL/PostgreSQL and others). Of course backend support depends on the ORM, but most give you some kind of choice here.
• Not have to type out correct SQL statements in your code. You may be wondering why this is a problem, unless, of course, you've had the experience where SQL in your code worked once and stopped mysteriously. After back-tracking VCS commits, you find that someone accidentally changed a string and there was nothing to pick it up. Or perhaps there was a test and the test was just as wrong as the code it was constraining.
• Get the compiler to check that you've gotten the names of your database entities correct -- if you've misspelled an entity in your code in one place, chances are your code doesn't compile. Which is good -- the earlier up the code/compile/run/debug/package/deploy/test/etc chain you fail, the less it costs everyone.
• Not have to worry about SQL injection attacks.
If you're using a super-light ORM like Dapper or just simply converting a giant project with heaps of direct ADO access or perhaps providing support for an app using an unsupported backend (like Access), you may have to get down to the bare SQL. But it would be great if you didn't have to actually take the risk of writing some SQL. Better still if the tool producing the SQL to run on your backend can be tweaked to target a different backend as required.

So here's a possible approach:

1. Ensure that all of your database entities are defined as string constants in one accessible source file so that you aren't constantly fighting with each other on how to spell "color". Or "colour". However you all choose to spell it. And you aren't left to discover that "received" is spelled "recieved" sometimes in your SQL at go-live time.
2. Get a tool like PeanutButter to do the boring work for you. Reliably and in a manner you can test, with fluent builder syntax.
Sound like a plan? Great (:

First, I like to stick to using a DataConstants file to hold constants like field names, default values, etc. The other advantage here is that you can reference the same DataConstants in your FluentMigrator migrations (you are using FluentMigrator, aren't you?!). For example:

namespace MyProject.Database
{
    public static class DataConstants
    {
        public static class Tables
        {
            public static class Employee
            {
                // use this wherever you would reference the name of your table
                public const string NAME = "Employee";

                public static class Columns
                {
                    public const string EMPLOYEEID = "EmployeeID";
                    public const string FIRSTNAME = "FirstName";
                    public const string SURNAME = "Surname";
                    public const string DATEOFBIRTH = "DateOfBirth";
                }
            }
        }
    }
}
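As mentioned above, the same constants can be referenced from your FluentMigrator migrations, so the schema and your SQL can never drift apart on spelling. A sketch of what that might look like (the column types, lengths and migration version number here are my own assumptions, not prescribed by anything):

```csharp
using FluentMigrator;
using MyProject.Database;

[Migration(20141210)]
public class CreateEmployeeTable : Migration
{
    public override void Up()
    {
        // same DataConstants as the SQL builders use
        Create.Table(DataConstants.Tables.Employee.NAME)
            .WithColumn(DataConstants.Tables.Employee.Columns.EMPLOYEEID)
                .AsInt32().PrimaryKey().Identity()
            .WithColumn(DataConstants.Tables.Employee.Columns.FIRSTNAME)
                .AsString(128).NotNullable()
            .WithColumn(DataConstants.Tables.Employee.Columns.SURNAME)
                .AsString(128).NotNullable()
            .WithColumn(DataConstants.Tables.Employee.Columns.DATEOFBIRTH)
                .AsDateTime().Nullable();
    }

    public override void Down()
    {
        Delete.Table(DataConstants.Tables.Employee.NAME);
    }
}
```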

Next, let's say we wanted to get a list of all Employees whose first names are "Bob". We could do:

var sql = SelectStatementBuilder.Create()
    .WithTable(DataConstants.Tables.Employee.NAME)
    .WithField(DataConstants.Tables.Employee.Columns.EMPLOYEEID)
    .WithField(DataConstants.Tables.Employee.Columns.FIRSTNAME)
    .WithField(DataConstants.Tables.Employee.Columns.SURNAME)
    .WithField(DataConstants.Tables.Employee.Columns.DATEOFBIRTH)
    .WithCondition(DataConstants.Tables.Employee.Columns.FIRSTNAME,
                   Condition.EqualityOperators.Equals,
                   "Bob")
    .Build();


OK, so this looks a little long-winded, but with a using alias trick like so:

using _Employee = MyProject.Database.DataConstants.Tables.Employee;
using _Columns = MyProject.Database.DataConstants.Tables.Employee.Columns;

// some time later, we can do:
var sql = SelectStatementBuilder.Create()
    .WithTable(_Employee.NAME)
    .WithField(_Columns.EMPLOYEEID)
    .WithField(_Columns.FIRSTNAME)
    .WithField(_Columns.SURNAME)
    .WithField(_Columns.DATEOFBIRTH)
    .WithCondition(_Columns.FIRSTNAME, Condition.EqualityOperators.Equals, "Bob")
    .Build();


Now that's fairly readable, always produces the same valid SQL, breaks compilation if you mistype something, and lets you use Intellisense to help figure out the required column names. This is a fairly simple example; just a taste. PeanutButter.DatabaseHelpers includes:
• Statement builders for:
• Select
• Update
• Insert
• Delete
• Data copy (insert into <X> select from <Y>)
• The ability to do Left and Inner joins
• Order clauses
• Where clauses
• Automatic quoting of strings, datetimes and decimals to values which will play nicely with your database (no more SQL injection issues, no more issues with localised decimals containing commas and breaking your SQL)
• Interfaces you can use for injection and testing, for example with NSubstitute (ISelectStatementBuilder, IInsertStatementBuilder, IUpdateStatementBuilder, IDataCopyStatementBuilder, IDeleteStatementBuilder)
• Another helper package PeanutButter.DatabaseHelpers.Testability which sets up NSubstitute mock objects for you so you can easily test without having to stub out all of the myriad builder returns on statement builders
• Executor builders:
• ScalarExecutorBuilder for insert/update/delete statements
• With interfaces for easy testing
• Support for some computed fields (Min, Max, Coalesce, Count)
• Constrained by tests, developed TDD.
• Syntax support for Access, SQLite, MSSQL and Firebird. Additional syntax support can be added upon request, given enough time (:
All in a convenient Nuget package you can use from your .NET application (install-package peanutbutter.databasehelpers), irrespective of target language. PeanutButter.DatabaseHelpers was developed and is maintained in VB.NET simply because of its origins, but that really doesn't matter to you as the consumer (:
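Since the builders are exposed through interfaces, a consumer class can take (say) an ISelectStatementBuilder as a dependency and be tested without touching a database. A rough sketch with NSubstitute -- note that EmployeeQueries is a hypothetical consumer of my own invention, I'm assuming the fluent methods and a Build() terminator on the interface, and the Testability package mentioned above exists precisely so you don't have to wire up the fluent returns by hand:

```csharp
using NSubstitute;
using PeanutButter.DatabaseHelpers;

// Hypothetical consumer taking the builder through its interface
public class EmployeeQueries
{
    private readonly ISelectStatementBuilder _builder;

    public EmployeeQueries(ISelectStatementBuilder builder)
    {
        _builder = builder;
    }

    public string SqlForEmployeeNames()
    {
        return _builder
            .WithTable("Employee")
            .WithField("FirstName")
            .WithField("Surname")
            .Build();
    }
}

// In a test, each fluent method must return the substitute itself
// (this is the wiring the Testability package automates):
//   var builder = Substitute.For<ISelectStatementBuilder>();
//   builder.WithTable(Arg.Any<string>()).Returns(builder);
//   builder.WithField(Arg.Any<string>()).Returns(builder);
//   var sut = new EmployeeQueries(builder);
//   sut.SqlForEmployeeNames();
//   builder.Received(1).WithTable("Employee");
```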

Sure, there are features which are missing. This library has very much been built on the premise of extending it as requirements arise. I hope it can be helpful for someone else though. It has been used in at least 5 decent-sized projects.

I welcome feedback and will implement reasonable requests as and when I have time. Like the rest of PeanutButter, DatabaseHelpers is licensed BSD and you can get the source from GitHub.

## Introducing... Peanut Butter!

I've been writing code for about 15 years now. It's not really that long, considering the amount of time that friends of mine have been writing code, but I guess it's not an insignificant portion of my life.

Anyone who writes code eventually figures out that there are some common problems that we solve time and again; and the time we spend solving them is time we could have spent solving more interesting problems.

Such common problems include (but are certainly not restricted to):
• INI file parsing and writing
• Random value generation (especially for testing)
• Win32 polling services
• Dealing with temporary files which need to be cleaned up as soon as we're done with them
• Writing SQL statements (yes, sometimes we don't have ORM frameworks to help us (eg when we have to deal with Access) and sometimes we use super-light frameworks like Dapper which need us to do the SQL grunt work ourselves)
• Systray icons
There are many more, of course.

There are a few ways we could deal with this scenario. We could re-write, from scratch, algorithms to provide solutions to these common problems. We could copy-and-paste known working code into our solutions every time. We could try to find someone else who has provided a library to help (and we certainly should, before embarking on rolling our own, if we value our time).

PeanutButter (https://github.com/fluffynuts/PeanutButter) is a suite of small libraries aimed at tackling such common tasks. I built it out of a desire to re-use bits of code that I'd spent reasonable amounts of time on where those bits of code could perhaps save me time later. PeanutButter modules are available from Nuget and enable me (and anyone else who so desires) to spend more time on interesting tasks and less time on the more common ones.

I hope to spend time introducing the different modules of PeanutButter in more depth. I really hope that this code and blog can perhaps save someone a little time and effort. Even if I'm the only person to use PeanutButter, I'm OK with that -- it serves me well. But if it can make dev a little friendlier for someone else, all the better.

PeanutButter is employed in a few commercial products. Licensing is BSD, so you're free to use it, fork it, break it and keep all the pieces. Feedback is welcome and contributions will be considered.

Off the bat, PeanutButter offers (in small, fairly independent Nuget modules):
• PeanutButter.DatabaseHelpers
• provides Insert, Update, Select, Delete and data copy (insert into X select from Y) SQL statement builders for Microsoft-style databases (MSSQL, Access), SQLite and Firebird (other dialects can be added on request/requirement)
• DataReader and Scalar executor builders
• Connection string builder (Access only, but I plan to expand this)
• PeanutButter.INI
• Provides reading/parsing and writing of INI files in a very small library with a very simple interface
• PeanutButter.MVC
• provides shims and facades to make testing MVC projects easier, especially when you'd like to constrain script and style bundle logic in your tests
• PeanutButter.RandomGenerators
• produces random values for all basic types (string, long, bool, DateTime), enums and selecting a random item from a collection
• includes GenericBuilder, a builder base class which can build random versions of any POCO class. GenericBuilder is extensible, but doesn't do collections since I haven't determined what the optimal course of action is there. Still, it's proven to be very useful and has freed literally hours and hours of dev time that I (and others) would have spent on generating random input for tests requiring complex objects
• PeanutButter.ServiceShell
• provides a simple shell for Win32 polling services. You just have to set up the name, description and interval (default is 10 seconds) and override RunMain. Your Program.cs has one line to run in it and suddenly you have a service which:
• Polls on a regular interval, guaranteed not to overlap (runs which exceed the poll interval just result in the next run being executed immediately)
• can install, uninstall, start and stop itself from the command line
• There's an example project, EmailSpooler, which demonstrates usage. EmailSpooler is a win32 service which polls an email registration table in your database for unsent emails and, using a configured SMTP server, attempts to send those mails, with handling for failed sends and backoff time. Whilst being an effective example for how to use the Service Shell, it was also an application actively used in production... until the client and I parted ways. It's generic enough for anyone to use though, and it's free. Help yourself.
• PeanutButter.TestUtils
• provides a PropertyAssert class to easily test equality of properties on disparate object types, using property names and types. It's great for reducing the amount of time spent writing tests to ensure that DTO property copies take place as expected.
• PeanutButter.TinyEventAggregator
• provides a "prism-like" event aggregator with very little overhead. In addition, it can log subscriptions and publications to the Debug output so you can figure out why your program is misbehaving (especially when you forget to unsubscribe...) and provides convenience methods like SubscribeOnce (to just be notified on the next publication) and, of course, the ability to subscribe for the next [N] events. You can interrogate subscriptions at run time and even pause and resume eventing which is quite useful from within winphone apps, when your app is backgrounded and resumed.
• PeanutButter.TrayIcon
• provides a simple way to create and use systray icons, using register methods to create menu items with callbacks
• provides methods to launch and respond to clicks on bubble notifications
• provides an animator class to animate your tray icon, given a set of icon frames
• PeanutButter.Utils
• provides AutoDeleter, a class implementing IDisposable which will clean up the files you give it when it's disposed (ie, when leaving the using scope)
• provides AutoDisposer, which works like AutoDeleter, on disposing multiple registered IDisposable objects. Great for reducing your using nests
• AutoLocker which locks a Semaphore or Mutex (though you should avoid win32 mutexes as they are, imo, broken) and releases when disposed so you can use a using block to ensure that your locks are acquired and released even if an exception is thrown
• PeanutButter.WindowsServiceManagement
• provides mechanisms to interrogate, start, stop, pause, resume and just plain "find out more about" windows services.
• PeanutButter.XmlUtils
• for the moment, just provides a simple Text() extension method to XElements to get all of their text nodes as one large string.
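As a taste of the Utils module, here's roughly how AutoDeleter reads in practice (a sketch only -- the Add method name is assumed from the description above):

```csharp
using System.IO;
using PeanutButter.Utils;

public static class TempFileDemo
{
    public static void DoWorkWithScratchFile()
    {
        using (var deleter = new AutoDeleter())
        {
            var tempFile = Path.GetTempFileName();
            deleter.Add(tempFile);  // register for cleanup
            File.WriteAllText(tempFile, "scratch data");
            // ... work with the file ...
        }   // tempFile is deleted here, even if an exception was thrown
    }
}
```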
PeanutButter is developed with TDD principles. There are currently only 343 tests, but that number climbs as features are added, of course. Some things (like Windows service manipulation) are difficult to test, so they're not as well (if at all) covered. Other things are developed "properly": test-first. If nothing else, there's a collection of code which can wag a finger at me when I break something, before I upload it and break something of yours (:

I will be elaborating on each one a little more, with some code samples, in the near future. For the moment, it's enough that I've announced them. Oh yeah, and version 1.0.50 is available through Nuget and on GitHub.
## Introduction

Who am I? What am I? Why should you care? Who actually does care?