Friday, 30 October 2015

Razor views and publishing woes



If you've done any ASP.NET MVC work, especially in VS2015, you may have encountered this:

  • You’re devving away on a controller and return View(withSomeNiftyModel).
  • You alt-enter or alt-. to get VS to create the view.
  • You make some amazing HTML view. It has tags and content and everything!
  • You test the view with an F5 or Ctrl-F5 and it’s doing what you want and looking how you want (though your designer, who may or may not be named Jess, probably has sterner words about how it looks).
  • Some time later, you build a deployment package and upload it to the client.
  • If the internet gnomes are with you, soon you will deploy at the client. If not, you’ll wait… and then deploy at the client.
  • And then you get the dreaded “view not found error page”:

Well, you don’t have to continue with this sad existence. You could add this snippet to your web project’s .csproj file:

  <Target Name="EnsureContentOnViews" BeforeTargets="BeforeBuild">
    <ItemGroup>
      <Filtered Include="@(None)" Condition="'%(Extension)' == '.cshtml'" />
    </ItemGroup>
    <Error Condition="'@(Filtered)'!=''" Code="CSHTML" File="$(MSBuildProjectDirectory)\%(Filtered.Identity)" Text="View is not set to [BuildAction:Content]" />
  </Target>

(Place it after the ItemGroup which contains your views, if you’re looking for a spot to put it)

And then, voila! Your project doesn’t build unless all views are set to build action content:

Kudos to the solution here: http://stackoverflow.com/questions/27954267/make-sure-all-cshtml-files-are-set-to-be-content-for-build-action (though I’ve upgraded from a warning to an error, because, well, it should be).


Thursday, 29 October 2015

Please welcome PeanutButter 1.0.118!

(if you don't know what PeanutButter is -- apart from a yummy spread and a rather silly name for an open-source project -- have a gander here: http://davydm.blogspot.co.za/2014/06/introducing.html)

What's new:
  • Updates to DatabaseHelpers:
    • Fluent syntax for conditions (.And() and .Or() extension methods) -- see the sketch after this list
    • EqualityOperators changes
      • Like_ is deprecated (but still functions). It is replaced by Contains, with the same functionality
      • Like is new: does not insert any wildcards -- that's up to you! (also, finally, has a reasonable name, now that I know how to use keywords as identifiers in VB.NET. Yes, the DBHelpers are written in VB.NET. Get over it.)
      • StartsWith: does what it says on the tin
      • EndsWith: also does what it says on the tin
    • Added ConcatenatedField for when you want some composite result from a few fields
    • Updated CalculatedField: All *Fields now implement IField, so all of the existing functionality which SelectFields could give you can also be found for the others (conditions, etc)
  • Enumerable convenience functions
    • And() -- like Union() but produces an Array
      • eg new[] { 1, 2 }.And(3) is equivalent to new[] { 1,2,3}
    • ButNot() -- like Except(), but produces an Array
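      • eg new[] { 1, 2, 3 }.ButNot(2) should give you new[] { 1, 3 }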
  • RandomGenerators:
    • RandomValueGen.GetRandomCollection
      • Takes a factory function and optionally min/max number of items
      • eg to get a random-sized IEnumerable<int> of random ints, do:
        RandomValueGen.GetRandomCollection(() => RandomValueGen.GetRandomInt())
    • RandomValueGen.GetRandomSelectionFrom
      • picks some random items from another collection
  • ExpressionHelper: 
    • GetMemberPathFor:
      use to get full property paths from lambdas, eg
      ExpressionHelper.GetMemberPathFor<Foo>(fooItem => fooItem.SomeProperty.SomeValue)
      gets you
      "SomeProperty.SomeValue"
      Useful for testing or somewhere where you need to get to properties dynamically
    • GetPropertyTypeFor:
      Similar usage to GetMemberPathFor, but returns the property type
  • Of course, there are some bugfixes
  • And probably other stuff I forgot... It's been a while since I tooted on the PeanutButter horn.
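
As a taste of that fluent condition syntax, something along these lines should work (a rough sketch from memory -- treat the exact class names and overloads here as assumptions and check the PeanutButter source for the real API):

var condition = new Condition("Name", Condition.EqualityOperators.StartsWith, "Pea")
    .And(new Condition("Status", Condition.EqualityOperators.Equals, "Active"))
    .Or(new Condition("IsLegacy", Condition.EqualityOperators.Equals, "1"));
// hand the composed condition to a select statement builder as usual
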
Like other PeanutButter bits and pieces, these have been born out of a desire for convenience, cleaner code and/or getting the compiler to check stuff before hitting the runtime. I hope they benefit others -- I know that I use a little PB every day (:

As always, usage, source availability and distribution remain totally free.

Monday, 10 August 2015

Introducing... Splunk4Net!


One of the cool parts of working for Chillisoft is that we have time set aside by the company for learning and experimenting. We call those times "Intentional Learning" and "Deliberate Practice". Think "Google 20% time", but with a little more focus.

The last round has been focused on tinkering on projects of interest to people working at the company. One of my colleagues is working on using Unity to build a virtual world interface to a part of an existing product. Some others have worked on a Neo4J .NET connector. There are some cool things which emerge from these processes!

I had recently dealt with Splunk, a hosted logging service which not only provides a rich query language over the collected data, but also provides graphing mechanisms for making your data come alive visually. They have many different pricing structures, starting at a free version which is probably enough to cover the needs of quite a few projects and teams, right up to enterprise editions which cope with high volumes of data and provide rich security features.

A lot of the .NET world has already dealt with the great log4net library: an open-source logging framework which supports writing to a plethora of targets, such as log files (with rotation, even!), the Windows Event Log and many, many more.

So log4net is cool, Splunk is cool -- how about making it easy to log to Splunk using known log4net skills?

Well, now you can, with Splunk4Net, which is available via nuget. Thanks to the community-oriented thinking at Chillisoft, this library is not only free as in beer, but you have full access to the source -- you can build it, you can change it, you can redistribute it.
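If you already know log4net, there's nothing new to learn at the call site: your code logs as it always has, and splunk4net slots in via configuration (presumably alongside your other appenders -- the README has the specifics). A minimal sketch of the consuming side, using only stock log4net API:

using log4net;
using log4net.Config;

public static class Program
{
    // a bog-standard log4net logger -- nothing Splunk-specific in the calling code
    private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

    public static void Main()
    {
        // pick up appender configuration (including the Splunk appender) from app.config
        XmlConfigurator.Configure();
        Log.Info("Hello, Splunk!");
    }
}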

I hope this provides some value to people looking to log to an external service, especially to be able to harvest valuable information from that logging. You could use this to keep on top of error conditions in your client code or websites. You could use this to gather metrics about application usage to guide further development. You could just use this as a diagnostic tool to figure out why client apps are falling over. How you use it is up to you -- feel free (:

Friday, 10 July 2015

Using an aging Android phone

I can't help it: I love Android. It has quirks. It has rough edges. It has a million versions and many of them are dead-end forks by vendors who just don't care. It also has Linux at its heart and, with a little nudging, it can act like it too...

I use a Samsung Galaxy S3 (i9300). This is quite blatantly "old tech" -- the S3 is over three years old and Samsung, like most vendors, has long since spurned it when it comes to the concept of OS updates. For a long time, CyanogenMod was the answer, until the fateful day that the i9300 maintainer decided he had a life and wanted to spend more time doing that.

Phooey. But, srsly, good for him and his family.

Still, that leaves the i9300 user in a pickle. CyanogenMod has quite a strict rule about what devices are supported: basically, there has to be a sanctioned maintainer for that device who is willing to take responsibility for it. With no maintainer, the i9300, one of the most popular handsets ever sold, started fading into obsolescence.

There were (still are) some rather hearty movements to revive CM on the i9300. I have followed at least two of them for a while, updating from them. Alas, they all appear to fizzle out over time.

One supporting ROM which doesn't seem to be willing to let go of this dog just yet is Resurrection Remix. Yay for me. Unfortunately, RR is a conglomerate built from a number of sources including CM. I say "unfortunately", because this means that when something breaks, trying to get someone to care about it enough to fix it is not a trivial exercise. Don't get me wrong: I appreciate all the work the devs involved do (for free), so I can avoid paying for another device (and adding my iota to the e-waste that abounds -- whilst my phone lives, it shall work for me!).

But there are still some issues (including, but perhaps not limited to):
  1. The flashlight function works whilst the screen is on, but switches off when the screen is off ): This is common on most ROMs which support the i9300 and appears to come down to a philosophical choice on the part of CyanogenMod to flip the bird at i9300 users.
  2. The vibration intensity by default is so low that it's pointless. Again, I'm quite sure someone, somewhere is giggling about how absurd this is for i9300 users and muttering something along the lines of "just upgrade to a new device already". Fuck 'em.
As far as (1) goes, this is just accepted as a broken feature of the phone. I've seen quite a few people just throw their arms in the air and say "so what do you expect me to do about it?". For most people, it's probably insignificant -- but I use my phone every night as a flashlight and having to keep the screen on during use is, well, sub-optimal.
As for (2), many ROMs provide a mechanism to set this. RR has one -- but it has often crashed the Settings UI when I try to use it (depending on the build). So I got tired of that. I want updates, but I also want my phone's vibration to shake the very foundations of the universe when someone whatsapps me an nsfw image.

So here's my little useless collection of shell scripts to fix this: https://github.com/fluffynuts/i9300_hax.git

So far, in there, you can find:
  1. toggle_flash.sh, which toggles the flash as a flashlight.
  2. v11.sh, which "sets vibration to 11", a la Spinal Tap. This sets your i9300 vibration to max (100%).
Both of these require root, as they mess with stuff under /sys. Both are easier to run via $cripter. To use them, clone the repo somewhere and copy the scripts into the folder you've configured $cripter to use as a scripts home.

I bind toggle_flash.sh to a double-tap on my home screen via Nova Launcher (but I'm sure you can do this with other launchers or in other ways) and I set v11.sh to run at boot via Llama.

I hope this can help someone else with an i9300, especially if they are holding back on upgrades for the same reasons I am:
  1. Damn the man: I refuse to pay just to get a newer, shinier toy.
  2. The world has enough e-waste as it is. I'd like to be able to say to my boy someday that I did what I could to slow down the destructive nature of man, no matter how inconsequential my actions were.
Any other useful hax that I contrive for this phone will end up there. Licensing is BSD ("if it breaks, you can keep both pieces"). You're welcome to use, distribute and mock. I don't care if you don't like it. I do care (a lot) if you do (:

Thursday, 9 July 2015

Callbacks suck

Problem: Javascript essentially runs on one thread, and some things take time, blocking other requests.
Solution: make everything asynchronous and provide a callback mechanism.
Problem: I now have plenty of asynchronous calls which come back to a callback which does another async call and so on and so forth. Since I didn't really think this through to start with (!!), I now have something like:

[screenshot: deeply-nested callbacks -- the classic "pyramid of doom"]

This code is an unmaintainable mess and something has to be done about it.


There are a few ways to slay this beast:

#1 Refactor
You could refactor your code to pull out methods and reference those. Chances are you end up with something a little spaghetti-like: now you have all these neat little functions but actually determining the call order is an exercise in using your IDE's "go to definition" function, or at the very least, VIM's * and # commands. Not ideal, at least not from where I stand. Refactoring is a good process but I don't think it's the right tool to fix this.

#2 Node async module
You could alter your programming pattern and make use of the (rather cool) async node module. Off the top, it looks like it has some interesting solutions to some of the most common logic patterns, but I'm still not 100% sold. There's nothing intrinsically wrong with the module -- it just doesn't seem like the best fit (to me).

#3 Promises (to the rescue, again)
Promises have been around for a while and they're not going away. Indeed, ES6 brings promises right into the core language. If you aren't using them, it's at least time to start looking at them. And if you do, and you happen to stumble across one of the grandfathers of the promise library offering, Q, then you will be pleased to find that Q has support baked in to provide a promise-based wrapper around the typical NodeJS async callback: Q.nfbind, or, as I prefer, the alias: Q.denodeify.

You can just do something like:



var promiseBasedFunction = Q.denodeify(someAsyncFunction);
promiseBasedFunction('arg1.level0', 'arg2.level0').then(function(result) {
  // first call succeeded; carry on
  return promiseBasedFunction('arg1.level1', 'arg2.level1');
}).then(function(result) {
  // second call succeeded; forge on forward
  return promiseBasedFunction('arg1.level2', 'arg2.level2');
}).catch(function(error) {
  // the first failure anywhere above lands here
  console.log('FAIL: ' + error);
}).done(function() {
  // runs once the chain has settled (errors having been handled above)
  console.log('Exiting now...');
});

Some wins you get out of this:
  1. No deep nesting. There is a nice follow-on of functionality, something you can read top-down and which someone else could probably get into a little quicker. Code readability consistently ranks highly when programmers are asked to list aspects of quality code. Remember that the code you expertly write today will be visited by a complete stranger some time in the future. That stranger may even be you.
  2. No need to handle errors at every asynchronous turn. The first one that fails ends up in the catch handler and you can deal with it there.
  3. A defined end-point that is easy to deal with (the done handler).
But wait, there's more!
The async node module has some interesting techniques for dealing with simultaneous async calls (think: I need to get data from three web services and provide a consolidated answer -- I'd prefer to make those calls in parallel and consolidate once) -- but so does Q, with Q.all(). Q offers a lot, and one of the main benefits it offers is helping to wean you off of the callback tit. As I've shown in my promises tutorial, different promise libraries play together fairly well, so even if you decide to switch to another (or the ES6 built-in Promise), you have the freedom to do so.

Not that I suspect Q is going anywhere any time soon.


Footnote: this journey started when I had the brainfart that it would be great if there were a node module that provided functionality to wrap existing async calls in the promise lingo. I was about to embark on writing one when a friend suggested that I check out some of the functionality in Q that I wasn't already using. That's a reminder to check if a problem has already been solved before running off to write code -- though the process (like the process of writing my own promises implementation) would have probably been enlightening. It's also probably a reminder that it can be quite beneficial to skim over the full documentation for a library to discover useful gems.

Wednesday, 3 June 2015

A little PeanutButter housekeeping

PeanutButter, that collection of libraries that does a whole bunch of stuff, has always been extended with the philosophy that it's better to have a bunch of small libraries, each with only its own dependencies, than a monolithic installation of a bunch of crap you'd never want.

Well, with the exception of PeanutButter.TestUtils.Generic, which is where interesting test helper code went to lurk and generally be useful. This was where the first incarnation of TempDb was born. It was either Sqlite or SqlCe, I don't remember, but whichever it was, the other was soon to follow, in the same package. So if you wanted a temporary SqlCe database, you ended up getting all of the Sqlite dependencies smooshed into your project. Boo!

I recognised this as sub-optimal, but left it just so, partly because I'm a lazy bugger and partly because I was afraid of fallout for anyone who might upgrade packages and find stuff no longer working because it had been moved out into another package.

Not too long ago, I got the itch to add LocalDb support and, of course, it ended up in the same assembly. I was getting less stoked, but remained at the same level of lazy.

Finally, tonight, I got my ass in gear and split these out. So there are 4 new packages:
  • PeanutButter.TempDb
  • PeanutButter.TempDb.Sqlite
  • PeanutButter.TempDb.SqlCe
  • PeanutButter.TempDb.LocalDb
The first contains the generic TempDb<> logic and the remainder are concrete implementations. PeanutButter.TestUtils.Generic no longer provides these classes (and no longer spams your projects with the unused dependencies). Apologies if this causes some "WAT?!" for anyone, but the split was inevitable. As a bonus side-effect, PeanutButter.TestUtils.Generic now depends on NUnit, so installing the former will bring in the latter -- which is actually less WAT because of funky things like PropertyAssert.

Anyway, as always, they're totally free. So if they break, you can keep all the pieces.

Thursday, 7 May 2015

PeanutButter TempDb gets LocalDb support

PeanutButter.TestUtils.Generic contains a lot of stuff, including TempDb<>, a generic abstract class which can be used to create temporary database instances and two derivatives, TempDbSqlite and TempDbSqlCe which, as the names might suggest, provide temporary databases of the Sqlite and SqlCe flavours.

Tonight, I added TempDbLocalDb, which uses SQL Server's LocalDB mode (the default mechanism for new MVC web applications). Whilst this does seem to take a little longer to spin up, it means that your tests can now have full SQL Server support with a temporary database which is automatically cleaned up as your tests end.

TempDb<> implements IDisposable, so the natural flow would be something like:

using (var db = new TempDbLocalDb())
{
    var connection = db.CreateConnection();
    connection.Open(); // if you need it open first
    // do stuff with the connection
    // for example, create an Entity context using the DbConnection
}
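
If you want to point Entity Framework at the temporary database, something like this should do it (a sketch assuming EF6; MyDbContext here is a hypothetical context which passes EF6's (DbConnection, bool contextOwnsConnection) constructor through):

using (var db = new TempDbLocalDb())
using (var connection = db.CreateConnection())
// false: the TempDb owns the connection's lifetime, not the context
using (var ctx = new MyDbContext(connection, contextOwnsConnection: false))
{
    // exercise your entities / migrations against a real SQL Server engine,
    // eg ctx.Things.Add(...); ctx.SaveChanges();
}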
 
Thanks to Andrew Russell for the idea and pointing me at some resources (http://blog.developers.ba/localdb-for-database-integration-testing-in-asp-net-5-project-and-xunit-net/) which got me going.

You can install via nuget with:

install-package peanutbutter.testutils.generic

Hope this is of use to someone. Whilst it's true that providing a repository layer, for example, and mocking out data get/set operations is a lot faster (and probably better design), there are times when you just have to test against a database engine (for example, when testing that your database migrations work and that your entity models align with your migrated database structures... you are testing that, aren't you?)

Update (2015/06/03):  the TempDb packages have (finally) been split out and swept up so you don't have to install all of the providers you're not using. Check out the post here: http://davydm.blogspot.com/2015/06/a-little-peanutbutter-housekeeping.html

Monday, 4 May 2015

Kata - Part 2: In which speed becomes the primary focus

Pursuant to part 1, this is the story of how the speed-run kind of got out of hand.

So I was chasing this time: 12:44. It was faster than I'd ever been, and... there was that damn smiley-face mocking me.

I sat down and thought about it for a while. The first step of optimising for speed is simply this:

Do less.

So I did -- using the tooling at my fingertips, learning more about what I could get it to do, trusting the R# completion a bit more (I used to trust it only when I couldn't be bothered to type something out in full, or when I wanted the automatic library inclusions) and figuring out how to write my code with the fewest keystrokes -- with the proviso that the code still had to be something I liked. That means no single-character variable names and no stupid test names. My tests must still describe what they do -- they still have to follow the template

{MethodName}_Given{Input}_Should{ReturnSomething or HaveSomeEffect}

My implementation should still be pleasurable to read. I'm an advocate of the idea that good code doesn't need comments because it already reads like a story, conveying the author's intents to the reader whilst simultaneously allowing the compiler to convey the author's intents to the host machine.

I also still had to follow TDD: write a test, test must fail for the correct reasons, write simplest implementation, test should pass, optionally refactor if I can think of a better way to write it. It's important to note the "optionally" part of this last step: when optimising for speed, I don't want to compromise code quality, so I try to write the "best" code first. I do think that I can write prettier code (and I have, for the same kata), but I have allowed simpler code where that would shave seconds off of the final time.

Of course, the next place you get to after "do less" is quite simply "get faster". Whilst engaging in that cycle though, being aware of ways you could "do less" whilst still maintaining the parameters of the assignment becomes vital: sometimes getting faster at a particular unit of work uncovers how you could do less there (or elsewhere). It's quite a lot like optimising code for execution times.

My speed-run implementations centred on an extension-method approach, which I think lends a level of readability to the final result. Extension methods are, IMO, one of the best features of .NET and I use them in production to make code read better -- code which reads better is easier to maintain and has less ramp-up time for the new developer who invariably is saddled with the troubles of coders past.
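
To give a flavour of that style (this is an illustrative reconstruction, not my actual kata code), an early String-Kata step might read:

using System;
using System.Linq;

public static class StringCalculatorExtensions
{
    // no variables, no state, no loops: split on the supported delimiters,
    // parse and sum whatever survives
    public static int SumOfParts(this string input)
    {
        return string.IsNullOrEmpty(input)
            ? 0
            : input.Split(new[] { ',', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                   .Select(int.Parse)
                   .Sum();
    }
}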

I slowly whittled away at my 13:48:
10:42...
09:16...
08:48...
and, this morning:
07:35.

Now bear in mind that Katarai does give me a little time boost here: the system bootstraps a solution with a shell test fixture and a shell implementation which has the Add method throwing NotImplementedException. So if I were to do this bare-bones, I'd expect probably another 30 seconds for creation of those files. In other words, I'd like to recognise that this isn't the time required to write every character of code in the outcome. On the other hand, I also make mistakes in this run, so I could theoretically go faster -- though I'm not sure if there's any point in trying to do so (and I also said that when I did 10:42, so... yeah...).

You can view the kata here: StringKata 2015-05-04, completed in 07:35 if you'd like.

I had to tweak my whole way of doing things to get down to this time, including
  • moving my taskbar to the right so that Katarai wouldn't overlay the area I was working on
  • learning to trust in R#'s code completion more
  • using R#'s CamelCase feature
  • learning the requirements for the kata so intrinsically that, in this particular run, I actually "forgot" where I was in the whole process, but somehow continued to write the next correct test. Weird.
The question, of course, is what the value is of completing a kata in this kind of time. I'd argue that a lot of the value of the kata is lost once you've got to the point of knowing the steps and your implementation iterations so well. I'm not thinking about a new way to complete the problem. I'm not critically analysing my thought processes around the problem or TDD. I'm blindly banging it out... Write test!... Run all!... Write code!... Run all!... Write test!... Run all!... Write code...! Lather! Rinse! Repeat!

I had actually argued that I thought I wasn't gaining anything once I got past around 12 minutes, until a conversation with Chris Ainslie this morning. He raised the same question: "Once you're going that fast, is there anything to gain from the kata?". In talking it out though, I can attest that there is something to gain in a pure speed-run, much like there might be something to gain through placing another arbitrary constraint on the problem, a practice which is sometimes used to stretch the kata or make the process a little more mindful. Constraints that I have tried for a kata include:

  • No mouse (this will get you familiar with your keyboard shortcuts, for sure)
  • Implementation has no variables or state or loops. LINQ is allowed. Logic is allowed, through the ternary operator only. All member methods start with the return keyword (I find the result from this constraint set particularly pleasing and it's the basis for my speed-run result, though being pure on this does take a little longer).
  • Implementation makes use of no LINQ or inbuilt string functions (yes, write your own Split! Just like grandpa used to! *ahem* I mean, just like I used to, back in the days of C)
By making the constraint the rather open-ended "shortest time possible", I have gleaned the following:
  • Learned more about my tooling (I've mentioned R# and code completion, for example)
  • Practised typing for speed -- and practised reducing typos when typing at speed!
  • Learned to evaluate a prior test as a candidate shell for a new test, to be modified and used in less time than writing the test from scratch
  • Enforced a strict adherence to readable code, even when time is tight
  • Enforced a strict adherence to TDD, even when time is tight
  • You can always get a little better; always go a little faster. Translated to production code, this makes me more critical of the code that I am writing -- could I have done it more succinctly (but still make it read like a story)? Have I done the smallest thing possible or am I gold-plating? Am I implementing features which aren't (at least not yet) actually required? This last one trips us all up: we all too easily get into a mindset of trying to produce perfect code instead of great code which can be extended on requirement.
At the very least, this video should suppress the argument that I have heard that doing a String Kata in 20 minutes is a ridiculous requirement. If I can do it in 07:35, so can you -- indeed, I bet this is still far from the floor of times that are possible for this kata.

Kata - Part 1: The gauntlet is thrown.

Kata are, according to Wikipedia, fount of all knowledge (and often of factually correct knowledge), "the detailed choreographed patterns of movements practised either solo or in pairs".

In software circles, there's a similar concept, embodied by the practice of test-driven code exercises, often with the premise of a rather synthetic problem which the user (or pair) is left to solve in the strictest sense of TDD that she/he/they can muster. The point is to provide a problem which isn't necessarily new or tricky, leaving the body of the exercise to the players to practise their TDD cadence (a concrete first cycle follows the list):
  • Red: write one test and one test only, which tests a very specific requirement of the specification. Running all tests should give one "Red" result (one test fails, on purpose, for the intended reason).
  • Green: implement the bare minimum code to get the last test to pass. Running all tests should give all "Green" results (all tests pass).
  • Refactor: step in which the practitioner evaluates the code which was implemented to make the test pass and optionally renames, refactors or tidies up. If the code is deemed to be in a good state, the process is repeated.
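
To make that concrete: for the String Kata, the very first Red might look like this (illustrative NUnit code; StringCalculator here stands in for the shell implementation whose Add throws NotImplementedException, so the test fails for the intended reason):

using NUnit.Framework;

[TestFixture]
public class TestStringCalculator
{
    [Test]
    public void Add_GivenEmptyString_ShouldReturnZero()
    {
        // Arrange
        var sut = new StringCalculator();
        // Act
        var result = sut.Add("");
        // Assert
        Assert.AreEqual(0, result);
    }
}
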
Performing code katas provides the practitioner the opportunity to practice the strictest sense of TDD as well as examine programming, testing and style habits. The main idea is that the perfection which is sought during the kata will seep into production code. Some supplementary goals are to strengthen the practitioner's familiarity with her tools, hone her skills and encourage thought of alternative paths to the solution which still adhere to the TDD cadence. In addition, a kata can be used as a "jump-start" for the code "juices" as it were: an exercise where the output will be discarded (and hence isn't completely material), but where the practitioner's thought patterns can be aligned with the environment of programming towards a goal in preparation for production work.

At least, I've found it can be a good way to get the coding started.

I work at Chillisoft where one of the products in development at the moment is a kata assistant: Katarai. (Full disclosure: much of the gruesome innards of Katarai are the result of a fevered few tinkering sessions with a colleague, Mark Whitfield, so I kind of have a dog in this race, though I'm not part of the active development team for it).

Katarai aims to address a few issues with the code kata concept:
  • Seasoned developers often can't wrap their heads around the concept of writing code you know you're about to throw away, or the concept of practicing TDD cadence simply for the practice. Katarai provides feedback of your progress through a kata as well as providing (in the future), feedback about your improvements / changes over time (and potentially a competitive flavour in that you could compare times or other metrics you derive from katas with those of your friends).
  • Juniors may find it difficult to get into katas -- the code requirements can be a little esoteric and the TDD cadence hasn't taken hold yet.
  • Code katas aren't necessarily fun by nature -- Katarai tries to change this perception a little.
  • Katarai also times your attempts -- which leads us to where we are with this article. More on that later. 
(and probably others).

I gladly postulate that code katas are beneficial for programmers of any level. You could use them to familiarise yourself with a new language, you can use them just as part of the rhythm of your coding day, you can use them to keep your keyboard and tooling skills tip-top... Basically, you can use them for the same reasons martial artists do:

"The basic goal of kata is to preserve and transmit proven techniques and to practice self-defence. By practicing in a repetitive manner the learner develops the ability to execute those techniques and movements in a natural, reflex-like manner."

Replace "self-defence" with "the skill you aim to master" and you can see how the concept applies to code (:

Recently, there was a challenge at work to complete the String Kata (yes, I really did just push you off to Google there -- there are too many great links to follow!) in under 17 minutes. The challenge isn't all that foreign to us: a few months back, a requirement was put forth that all developers should be able to complete the String Kata in 20 minutes or less to demonstrate proficiency in their programming environment. The original "20 minutes" requirement could be met in any language and environment of choice. The 17-minute challenge was done as part of internal testing for Katarai, so had to be in C#, using Visual Studio, though add-ons and extensions which aid productivity were also allowed.

I had already been testing out Katarai (that whole "dog in the race" thing), so I ran a quickie and submitted my time for the competition: 13:48. I thought that was a reasonable time; it was certainly well under the required 17 minutes and one of the fastest times I had recorded. I was quite pleased with myself.

It wasn't long until Peter Wiles replied to the mail thread to the effect that he had completed in 12:44.

The time was a challenge. The little smiley-face at the end of the email was just plain infuriating.

Read more in part 2

Tuesday, 14 April 2015

RequireJS + AngularJS + A different spin

I keep on meaning to post this tutorial here -- it's been on github for a while and I somehow didn't manage to add it here...
Anyways, this was part of a discovery mission for using RequireJS and Angular to build a simple app. Angular because I like it as an MVC framework for Javascript, and RequireJS because, really, who wants to maintain script-tag spam in your SPA? Not me -- not after I had a page with 20 inclusions and realised it was indicative of something horribly wrong.
The method I chose also makes this code easier (for me) to test: Angular has some great mocking and testing functionality, but in the face of Karma+Jasmine, it just seems like far too much effort to figure out another mocking framework -- not to mention that I find the concept of having to $apply to get Angular to run its magic a little counter-intuitive and a little out-of-place in a test (imo).
So I flipped this a little on its head: the controller module in this project doesn't actually register a controller -- it returns the configuration which can be used to register a controller -- and that is consumed by app.js so that routing can work with the configured controller. This frees me up to test the controller in a simpler way: test that the dependency array has the correct dependencies, in the correct order (typically, just '$scope'), and then test that the scope passed in is modified in the expected way (properties and methods added, and the methods work as expected). I must point out the obvious here: I'm not inclined to use Angular factories and services -- but who needs them when you have RequireJS and go the prototypal route?
But enough blah-blah. If nothing else, this tutorial can demonstrate one way to do it. I've also made JQuery private, so that the only library in the global scope is Angular -- mainly so that angular-route would load up correctly. There's probably a better way to do this too (I've read about AngularAMD and some others; I just haven't play-tested them), but I was satisfied enough with the final result -- and it might prove at least a little interesting for some (:

Tuesday, 31 March 2015

Update to promises tutorial

For anyone who is interested, my promises tutorial has been updated to include info and examples about the native ES6 Promise prototype. You can get some goodness here: http://davydm.blogspot.com/2014/10/javascript-and-promises.html

Wednesday, 21 January 2015

EntityFramework: getting to the bottom of a murky error

In doing some work using EF which talks to existing databases on MSSQL in production and SQLCE for testing (check out https://github.com/fluffynuts/OrmNomNom.git for an example of using SQLCE and a transient database to do Entity testing -- there's also a bit in there about using NHibernate too), I used the Entity POCO generation tools to quickly spew out the classes that should work when talking to the existing database. I also wrote some FluentMigrator migrations so I could get a temporary database up to speed and set off to use that method to test some database interaction, when:
Boom! InvalidOperationException ("Sequence contains no elements").
Now the befuddling aspect of this exception is where it was thrown. I basically have some code like:
using (var ctx = new DragonContext(db.CreateConnection()))
{
    var unit = new SupplierPriceList() 
    { 
        SupplierPriceListID = "test", 
        SupplierID = "foo", 
        MaterialID = "foo", 
        CompanyCode = "foo", 
        Date = DateTime.Now, 
        SupplierPrice = 1.23
    };
    ctx.SupplierPriceLists.Add(unit);
    ctx.SaveChanges();
}
And the exception is thrown at the point of the call to Add(), which doesn't seem to make a lot of sense: the temp database being used has no rows in any tables, so of course SupplierPriceLists is empty, and one might expect that to be the source of the exception, considering the text -- but it really shouldn't matter. Tables are empty all the time and we can add rows to them. Still, the error befuddles...
Looking at the stack trace, I see the last two frames are:
   at System.Linq.Enumerable.Single[TSource](IEnumerable`1 source, Func`2 predicate)
   at System.Data.Entity.Utilities.DbProviderManifestExtensions.GetStoreTypeFromName(DbProviderManifest providerManifest, String name)
Ok, so the last frame explains the raw reason for the exception: something is doing a Single() call on an empty collection. But what? The frame one up looks interesting: GetStoreTypeFromName seems to suggest that Entity is doing a lookup to find out how it should store the data for a property. So, on a hunch, I commented out all properties except the implied key field (SupplierPriceListID) -- and the Add() call worked. Hm. Perhaps SQLCE doesn't like the double? fields? Nope -- uncomment them and things still work.
Then the culprit leaps out. The generator tool has annotated a DateTime? field with:
[Column(TypeName = "date")]
Removing the annotation causes the error to subside -- and the SaveChanges() call on the context passes, on both MSSQL and SQLCE.
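
So, concretely, the fix boiled down to changing the generated property (this is the Date field from the model above) from

// as generated -- the store-type annotation upsets the SqlCe provider:
[Column(TypeName = "date")]
public DateTime? Date { get; set; }

to plain

public DateTime? Date { get; set; }
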
I thought I'd just make a record of this here as I found nothing particularly useful through much googling and eventually found and resolved the problem based on a hunch. Perhaps this post can save someone else a little mission. Of course, the DateTime? value is truncated to its date part for the Date field, but hey, that's what the client originally intended...

What's new in PeanutButter?

You can check this post out here: https://github.com/fluffynuts/blog/...