Sunday, 15 April 2018

# What's in PeanutButter.Utils, part 2

## Metadata extensions

I just wanted to chip away at my promise to explain more of the bits in PB, so I thought I'd pick a little one (though I've found it to be quite useful): metadata extensions.

At some point, I wanted to be able to attach some arbitrary information to an object which I didn't want to extend or wrap and which some code, far down the line, would want to read. If C# were Javascript, I would have just tacked on a property:
```js
someObject.__whatDidTheCowSay = "moo";
```

But C# is _not_ Javascript. I could have maintained some global `IDictionary` somewhere, but, even though I wanted it to support a feature in [NExpect](https://github.com/fluffynuts/NExpect), where the code wouldn't have a running lifetime of any significance, it still felt like a bad idea to keep hard references to things within NExpect. The code associating the metadata has no idea of when that metadata won't be necessary any more -- and neither does the consumer.

Then I came across [`ConditionalWeakTable`](https://msdn.microsoft.com/en-us/library/dd287757(v=vs.110).aspx) which looked very interesting: it's a way of storing data where the keys are weak references to the original objects, meaning that if the original objects are ready to GC, they can be collected and the weak reference just dies. In other words, I found a way to store arbitrary data referencing some parent object and the arbitrary data would only be held in memory until the end of the lifetime of the original object.
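To make that concrete, here's a minimal sketch (not PB's actual implementation -- the names here are mine) of how a `ConditionalWeakTable` can back a per-object metadata store:

```csharp
using System.Collections.Generic;
using System.Runtime.CompilerServices;

// Illustrative only: shows the ConditionalWeakTable idea, not PB's real code.
public static class WeakMetadataSketch
{
    // Keys are held weakly: once the target object is collected,
    // its metadata dictionary becomes eligible for collection too.
    private static readonly ConditionalWeakTable<object, Dictionary<string, object>>
        Table = new ConditionalWeakTable<object, Dictionary<string, object>>();

    public static void Set(object target, string key, object value)
    {
        Table.GetOrCreateValue(target)[key] = value;
    }

    public static bool TryGet(object target, string key, out object value)
    {
        value = null;
        return Table.TryGetValue(target, out var data)
            && data.TryGetValue(key, out value);
    }
}
```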

That's exactly what I needed.

So was born the [`MetadataExtensions`](https://github.com/fluffynuts/PeanutButter/blob/master/source/Utils/PeanutButter.Utils/MetadataExtensions.cs) class, which provides the following extension methods on all objects:

- `SetMetadata<T>(string key, object value)`
- `GetMetadata<T>(string key)`
- `HasMetadata<T>(string key)`

which we can use as follows:

```csharp
public void MethodWantingToStoreMetadata(
  ISomeType objectWeWantToStoreStateAgainst)
{
  objectWeWantToStoreStateAgainst
    .SetMetadata("__whatDidTheCowSay", "moo");
}

// erstwhile, elsewhere:

public void DoSomethingInterestingIfNecessary(
  ISomeType objectWhichMightHaveMetadata)
{
  if (objectWhichMightHaveMetadata
        .HasMetadata<string>("__whatDidTheCowSay"))
  {
    var theCowSaid = objectWhichMightHaveMetadata
                       .GetMetadata<string>("__whatDidTheCowSay");
    if (theCowSaid == "moo")
    {
      Console.WriteLine("The cow is insightful.");
    } 
    else if (theCowSaid == "woof")
    {
      Console.WriteLine("That ain't no cow, son.");
    }
  }
}
```

And, of course, as soon as the associated object can be collected by the garbage collector (remembering that the reference to this object, maintained within PB, is _weak_), that object is collected and the associated metadata (if not referenced elsewhere, of course) is also freed up. This mechanism has facilitated some interesting behavior in `NExpect`, and I hope that it can be helpful to others too.

# Markdown all things

Whilst blogger.com provides a fairly good blogging platform for regular writing, I've found that it's rather painful for technical blogging. In particular, code blocks are a mission. In the past, I've wrapped code in `<pre><code> ... </code></pre>` and let [highlight.js](https://github.com/isagalaev/highlight.js/) do all the heavy lifting of making that actually look readable. HighlightJs has been fantastic at that, but it still hasn't been as smooth a process as I would have liked _overall_. I still tended to write the non-code parts in the WYSIWYG html editor, and had to switch to the source view to work on code parts.

When I blog, I literally want to get out the information as quickly as possible, in a readable format. I'm not here to fight with styling.

So I was quite happy to stumble across [showdown](https://github.com/showdownjs/showdown). A little Javascript in my template and suddenly I could write in possibly the simplest format ever: markdown. I had quick and easy access to simple styling elements (lists, headings, etc) as well as code blocks. All good, but not automagick out of the box. 

I thought to myself, _"I'm sure I can't be the only person who wants this"_, and _"It would be nice if that auto-bootstrapping of markdown+code could be done anywhere, not just from within my blogger template"_.

So, as is so common within the open-source world, I stand upon the very tall, very broad shoulders of [highlight.js](https://github.com/isagalaev/highlight.js/) and [showdown](https://github.com/showdownjs/showdown) to present [auto-markdown](https://github.com/fluffynuts/auto-markdown): a script you can include on any page to convert any element with the `markdown` class to be rendered as markdown. It can even be configured (script versions and code theme) via some global variables, so you don't have to fiddle with the code if you don't want to.

I trialed it with my last post and it's how I'm writing now -- I just add a shell `pre` tag with the `markdown` class and get on with the writing, without any more fighting with the html editor. As a bonus: even if the script fails for some reason (such as if the user has Javascript disabled or GitHub doesn't supply my script in time), the blog is still in a readable format: markdown.

If you're interested, follow the instructions in the [README.md](https://github.com/fluffynuts/auto-markdown). Feel free to open issues if you encounter any -- for instance, I encountered some stickiness with generics in code blocks. Also, bear in mind that markdown requires html-escaping for chevrons (e.g. when embedding xml).

Feel free to share it as much as you like. If you don't feel comfortable referencing my code directly, fork my repo and keep your own copy (:

Now, if only blogger's html editor had a vi mode...

Thursday, 12 April 2018

# What's in `PeanutButter.Utils`, exactly?

`PeanutButter.Utils` is a package which pretty-much evolved as I had common problems that I was solving day-to-day. People joining a team that I was working on would be exposed to bits of it and, like a virus, those bits would propagate across other code-bases. Some people asked for documentation, which I answered with a middle-ground of `xmldoc`, which most agreed was good enough. People around me got to know of the more useful bits in `PeanutButter.Utils` or would ask me questions like "Does PeanutButter.Utils have something which can do [X]?". I kind of took the ubiquity amongst my team-mates for granted.

Fast-forward a little bit, and I've moved on to another company, where people don't know anything about the time-savers in `PeanutButter.Utils` -- and it occurs to me that that statement probably applies to pretty much everyone else -- so I thought it might be worthwhile to have some kind of primer on what you can expect to find in there. An introduction, if you will. I think there's enough to break the content down into sections, so we can start with:

## Disposables
One of the patterns I like most in the .net world is that of `IDisposable`. It's a neat way to ensure that something happens at the end of a block of code irrespective of what goes on _inside_ that code. The code could throw or return early -- it doesn't matter: whatever happens in the `Dispose` method of the `IDisposable` declared at the top of a `using` block will be run. Usually, we use this for clearing up managed resources (e.g. on database connections), but it struck me that there were some other convenient places to use it. Most generically, if you wanted to run something simple at the start of a block of code and run something else at the end (think of toggling something on for the duration of a block of code), you could use the convenient `AutoResetter` class:
```csharp
using (new AutoResetter(
    () => ToggleFeatureOn(), 
    () => ToggleFeatureOff()))
{
    // code inside here has the feature toggled on
}
// code over here doesn't -- and the feature is 
//    toggled back off again even if the code 
//    above throws an exception.
```
It's very simple -- but it means that you can get the functionality of an `IDisposable` by writing two little lambda methods.
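If you're curious, the whole pattern boils down to something like this little sketch (illustrative only -- not the actual PB source):

```csharp
using System;

// A disposable that runs one action up-front and another on disposal.
public class LambdaDisposable : IDisposable
{
    private readonly Action _onDispose;

    public LambdaDisposable(Action onStart, Action onDispose)
    {
        _onDispose = onDispose;
        onStart();
    }

    public void Dispose()
    {
        _onDispose();
    }
}
```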

You can also have a variant where the result from the first lambda is fed into the second:
```csharp
using (new AutoResetter(
    () => GetCounterAndResetToZero(),
    originalCount => ResetCounterTo(originalCount)))
{
    // counter is zero here
}
// counter is reset to original value here
```

Cool.

Other common problems that can be solved with `IDisposable` are:
### Ensuring mutexes / semaphores are reset, even if an exception is encountered
For this, we can use `AutoLocker`:
```csharp
using (new AutoLocker(someMutex))
{
}
using (new AutoLocker(someSemaphore))
{
}
using (new AutoLocker(someSemaphoreLite))
{
}
```

### Temporary files in tests
```csharp
using (var tempFile = new AutoTempFile())
{
   File.WriteAllBytes(
       tempFile.Path,
       Encoding.UTF8.GetBytes("moo, said the cow")
   );
   // we can run testing code against the file here
}
// file is gone here, like magick!
```
This uses the `Path.GetTempFileName()` system call by default -- so you don't have to care about where the file actually exists. Of course, there are constructor overloads to:
- create the file populated with data (string or bytes)
- create the file in a different location (not the system temp data location)
- create the file with a specific name

`AutoTempFile` also exposes the file's contents via properties (see the short example after this list):
- `StringData` for string contents
- `BinaryData` for the raw `byte[]` contents
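
For example, a test can lean on the default constructor and read the contents back via `StringData` -- something like:

```csharp
using (var tempFile = new AutoTempFile())
{
    File.WriteAllText(tempFile.Path, "moo, said the cow");
    // read the contents back via the convenience property
    Console.WriteLine(tempFile.StringData);   // "moo, said the cow"
}
// the temp file has been deleted by the time we get here
```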

There is also an `AutoTempFolder` if you want a scratch area to work in for a period of time. When it is disposed, it and all of its contents are deleted.
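Something like this, assuming (as with `AutoTempFile` above) that the folder's location is exposed via a `Path` property -- check the xmldoc for the exact member names:

```csharp
using (var folder = new AutoTempFolder())
{
    var scratchFile = Path.Combine(folder.Path, "output.txt");
    File.WriteAllText(scratchFile, "scratch work goes here");
    // do work which reads and writes inside folder.Path
}
// the folder and everything inside it is deleted here
```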

Similarly, `AutoDeleter` is an `IDisposable` which can keep track of multiple files you'd like to delete when it is disposed:
```csharp
using (var deleter = new AutoDeleter())
{
    // some files are created, then we can do:
    deleter.Add("C:\Some\File");
    deleter.Add("C:\Some\Other\File");
}
// and here, those files are deleted. If they can't be 
//    deleted, (eg they are locked by some process),
//    then the error is quietly suppressed. 
//    `AutoDeleter` works for folders too.
```

### Other disposables
As much as I love the `using` pattern, it can lead to some "arrow code", like this venerable ADO.NET code:
```csharp
using (var conn = CreateDbConnection())
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "select * from users";
        using (var reader = cmd.ExecuteReader())
        {
            // read from the reader here
        }
    }
}
```
Sure, many people don't use ADO.NET "raw" like this any more -- it's just an easy example which comes to mind. I've seen far worse "nest-denting" of `using` blocks too.

This can be flattened out a bit with `AutoDisposer`:
```csharp
using (var disposer = new AutoDisposer())
{
    var conn = disposer.Add(CreateDbConnection());
    conn.Open();
    var cmd = disposer.Add(conn.CreateCommand());
    cmd.CommandText = "select * from users";
    var reader = disposer.Add(cmd.ExecuteReader());
    // read from the db here
}
// reader is disposed
// cmd is disposed
// conn is disposed
```
`AutoDisposer` disposes of items in reverse-order in case of any disposing dependencies.
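The idea is small enough to sketch -- something along these lines (illustrative, not the actual PB source):

```csharp
using System;
using System.Collections.Generic;

// Collects disposables as they're created and disposes them in reverse order.
public class SimpleDisposer : IDisposable
{
    private readonly List<IDisposable> _items = new List<IDisposable>();

    public T Add<T>(T item) where T : IDisposable
    {
        _items.Add(item);
        return item;   // hand the item back so it can be assigned inline
    }

    public void Dispose()
    {
        // last-added (most dependent) items are disposed first
        for (var i = _items.Count - 1; i >= 0; i--)
        {
            _items[i].Dispose();
        }
        _items.Clear();
    }
}
```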


So that's part 1 of "What's in `PeanutButter.Utils`?". There are other interesting bits, like:
- extension methods to
    - make some operations more convenient
        - do conversions
        - work with `Stream` objects
        - work with `DateTime` values
    - facilitate more functional code (e.g. `.ForEach` for collections)
    - use async lambdas in your LINQ (via `SelectAsync` and `WhereAsync`)
    - test and manipulate strings
- the `DeepEqualityTester`, which is at the heart of `NExpect`'s `.Deep` and `.Intersection` equality testing
- `MemberExpression` helpers
- Reflection tidbits
- reading and writing arbitrary metadata for any object you encounter (think like adding property data to any object)
- Some pythonic methods (`Range` and an (IMO) more useful `Zip` than the one bundled in LINQ)
- dictionaries
    - `DictionaryWrappingObject` lets you treat any object like you would in Javascript, with text property indexes
    - `DefaultDictionary` returns default values for unknown keys
    - `MergeDictionary` allows layering multiple dictionaries into one "view"
    - `CaseWarpingDictionary` provides a decorator dictionary for when the dictionary you have does indexing with inconvenient case rules

I hope to tackle these in individual posts (:

Saturday, 7 April 2018

# Toggle All Things!

(Note: this post was started quite a long time ago and has lived in draft mode whilst I intended to provide a concrete code example. I'm not sure when, if ever, that will come, but I think that the lessons learned are still valuable to share)

It's not an uncommon workflow: the client / company wants some features in a certain sprint and the features have to be signed off by a QA / testing team. So if we take the example of a two-week sprint, what often happens is that the testers sit idle (or are tasked onto a different project) for the first week while development happens at a furious pace. Then the testers are handed the week's work to test and we enter a development freeze, where developers are unable to work ahead for fear of interrupting testing and must be on standby for any bugs picked up by QA.

Hopefully, if you're testing your own work proficiently, QA picks up nothing, or perhaps some small stuff like a misaligned label or similar. However, for that time period, you're stagnant -- which causes you to work harder when the first week of a sprint comes around again. It's not a healthy place to be in, if for no other reason than that it provides ammunition for the "production line" mindset and for guilting you into working longer hours during the "productive" part of the sprint. And if you were on top of things, the business is also losing during the development freeze because you aren't free to forge ahead with new stories.

No-one is winning.

One possible strategy is to adopt a "feature branch per developer" strategy with a regular "merge day" where some poor soul attempts to merge the disparate work of the last two weeks. You can minimise the pain with more frequent merges during the active development phase, but whilst you're in development freeze, the branches are going to drift and merge hell becomes inevitable.

One of the most useful strategies we've used to date is that of feature toggles, which, simply put, are mechanisms for turning parts of your software on or off. So here are some thoughts on that particular topic, played out over the development cycle of a SPA web application.

1. A simplistic approach: compile-time toggles (i.e. releases with feature-sets)

One method is good old #if / #ifdef and compile-time constants. Bake those features in and release a version with a specified feature matrix. This works well for software where the required featureset is unlikely to change, such as for Gentoo packages, where your USE flags determine the features available to you at run-time until such time as you decide that you need more or fewer features. As the owner and maintainer of said Gentoo system, you decide what you want from your software and you compile it that way. When your requirements change, you update your flags and re-compile.
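In C# terms, this is plain conditional compilation -- a tiny illustrative example (the `FEATURE_X` symbol is made up and would be defined via the build configuration):

```csharp
using System;

public static class Program
{
    public static void Main()
    {
#if FEATURE_X   // defined (or not) via <DefineConstants> in the project / build
        Console.WriteLine("Feature X is baked into this build");
#else
        Console.WriteLine("Feature X was compiled out");
#endif
    }
}
```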

This works well when the user / tester is the developer. But not as well when the user/tester is someone else -- because then they have to contact the developer and request a rebuild and redeployment every time a feature needs to be turned on or off.

It's not an invalid strategy -- it's just not very agile. Additionally, you can't easily automate testing (unit / integration testing) against different feature combinations, because doing so requires changing compile parameters.

2. Better: static values in the code

Another version of this is compiling with static / constant values and branching in code. This allows testing against the different features (especially if you're using static values instead of constants). This alleviates the automated testing conundrum but still leaves the human testing department out in the cold. They will still contact you for a new build containing the required feature matrix.

Rebuilding and redeploying to satisfy a required feature matrix becomes impractical the more features you enable and the more agile you'd like to be. Suppose you think feature [X] is ready, but 1/2 an hour before sign-off, testing finds a defect. Now you have to rush to provide a deployment package without [X]. In addition, testing may not yet have had the time to prove that there are no unintended side-effects from not including feature [X].

3. Even better: run-time toggles determined by app / web configuration

By now you've realised that the toggles should be configurable and have pushed them into your web config (for example). Great! Toggles can be changed fairly quickly, without requiring a re-deployment. The problem is that you still need someone fairly technical (and who has access to the staging machines) to update the enabled feature-set.

My experience has been that if you can make the toggling of features relatively quick and painless, you can offload the decision on which features are in an accepted release to someone in testing or management. You don't want to waste your time and resources on fighting for feature inclusion / exclusion -- you want to focus on making new things, solving new problems! So whilst feature toggles in your web config are better than compile-time, you could push this just a little further for a lot of win.

4. Best: run-time toggles determined per user interaction

In the case of a web system, this would mean "per web request". Imagine if the testers had the ability to test just the stuff they were confident in and, if they had capacity, could toggle on a beta feature and test that too. Imagine if, when a flaw is found in the implementation or design of a feature, it didn't have to hold up the entire release -- just toggle it off, continue testing other features and sign off what is available.

Our implementation was system-wide, re-evaluated per web request, but you could even push this further with per-user configuration included in request headers. Practically, this may be more effort than it's worth.

## Features of good feature toggles

- It should be easy to toggle features and difficult to end up with a set of conflicting behaviors from the application.
- It should be easy to determine the feature-set state of the application and difficult to be confused about which feature interactions caused an observed defect. I'm suggesting that if you have old feature [X] and new feature [Y], which replaces [X], there shouldn't be two toggles ("enable [X]" and "enable [Y]") -- whilst this technically solves the problem, you're expecting testers to understand the repercussions of enabling both. Instead, you should strive towards well-defined feature toggles which explain what they do and provide only two well-known states (the "off" and "on" states) which cannot be confused by interaction with other parts of the system.
- It should be as frictionless as possible to develop against (adding features and consuming them) to encourage using the framework, so that the agility of feature toggles is experienced as often as possible. More toggles are better than fewer toggles, especially for non-inter-dependent features.
- Toggles should be easy to remove as well: once QA is satisfied with a feature, the toggle should be removed along with any branches of logic which disabled the feature.
This can all be achieved, and it's really not that tricky. Here are the steps we went through:

A. First pass: development freeze and the rush around sign-off time sucks. We need to be able to add new features which can be easily disabled!

So, first off, we define a class / interface which has boolean toggles on it. And we get the web app to persist this into our storage (sql, document db, whatever) on startup, adding toggles which are missing. We save three versions of this configuration, for three proposed feature-sets -- "development", "staging", "production" -- and we let annotations on the configuration class determine default toggle states for features.

For example, we create a feature toggle with the feature defaulted on for "development" and off for "staging" and "production". In this way, developers on the team default to seeing the current work (and interacting with it) and the feature isn't inadvertently introduced to QA or the production servers. Of course, the moment QA is ready, they can manually enable the feature. Production only gets the feature enabled when a deployment to production is explicitly made with a script to enable the feature.
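
To give a flavour of what such a configuration class might look like (the attribute and property names below are purely illustrative -- they aren't from any particular library):

```csharp
using System;

// Marks the default state of a toggle for a named feature-set.
[AttributeUsage(AttributeTargets.Property, AllowMultiple = true)]
public class DefaultToggleAttribute : Attribute
{
    public string FeatureSet { get; }
    public bool Enabled { get; }

    public DefaultToggleAttribute(string featureSet, bool enabled)
    {
        FeatureSet = featureSet;
        Enabled = enabled;
    }
}

public class FeatureToggles
{
    // on for developers by default, off everywhere else until QA flips it
    [DefaultToggle("development", true)]
    [DefaultToggle("staging", false)]
    [DefaultToggle("production", false)]
    public bool EnableNewCheckoutFlow { get; set; }
}
```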

We allow selection of a feature set ("development", "staging", "production") in the web config, because this is unlikely to change on a deployment target and so probably won't need human interaction (let deployment scripts take care of it), and we surface these toggles to the user via a simple UI. The UI itself is toggled with a feature toggle, so the production site doesn't have it at all -- there's no way for a user to enable a feature they shouldn't or disable a feature which should be there.

This process also makes adding / removing feature toggles trivial for other developers on the system.

Consuming toggles, server-side (back-end developer):
We could decide which implementation of a feature to inject by manipulating the IoC container -- inject implementation "A" of interface "I" or, if the toggle is enabled, inject implementation "B". This may work with differing degrees of efficiency according to your IoC container, and you can end up with two implementations which are incredibly similar, but which differ only in one minor aspect.
Alternatively, components of the system could request the feature toggle configuration via IoC injection and branch subtly as needed. We make sure that anything which needs feature toggles has a per-web-request lifetime (as does the feature toggle injectable itself) so that the web app doesn't have to be restarted to bring a toggle into effect.

Consuming toggles, client-side (front-end developer):
The current feature toggles are exposed as a calculated stylesheet where disabled features literally have their display set to "none !important". So when the page loads, incomplete features are not available for interaction. When a feature is toggled, this stylesheet can be reloaded to give instant UI feedback without even a page reload.

Toggling toggles (user):
Initially, toggling a feature simply toggles the boolean in storage and re-requests the featureset stylesheet, immediately updating the UI. Calls to the api get a new instance of the feature toggles entity and can change their behavior accordingly.

Typically, a "big hammer" approach works well: a controller action returns null or an empty collection when the toggle is off or calls into defined logic to produce a calculated result when the feature is enabled. Sometimes, a finer chisel is called into play -- some component further down the logic chain subtly changes behavior based on the toggle.

B. New features may replace older ones -- we need a toggle to turn one bit on and simultaneously turn another off

We already have this capability at the server -- we can branch code according to an injected feature toggles matrix configuration object -- but at the client, we are just hiding UI elements which expose disabled features.
Thus we added in more client-side logic: we expose a Javascript blob in a generated js file with the feature toggle matrix and emit an event when this is re-downloaded.
To ease developer consumption, a framework function is provided to tuck away the complexity of listening for the correct event.
Also, whilst we've already had calculated stylesheets to toggle incomplete features off, we can now add in calculated stylesheets to toggle fallback features (i.e. legacy features) back on again.

## Winning at features

The net result of employing this tactic has meant that we've managed to pull out far in front of the testing team which used to drive deadlines. Simply put, we're able to work on sprint [N+1] whilst testing is signing off sprint [N], such that we've essentially ended up a sprint ahead and had the breathing-room to address technical debt -- a win for everyone!

Feature toggles provided:
- A way to alleviate stress for testers and developers: if a feature isn't deemed finished by sign-off time, it's simply toggled out for deployment.
- A way to alleviate stress for developers: whilst features for this sprint are under human testing, developers don't have to sit idle -- they can move forward with the next set of requirements.
- A way to alleviate stress for project managers: it's far easier to say "we managed to get 4/5 of the required features deployed" than "we couldn't deploy because one feature couldn't be signed off".
Interactive, per-request feature toggles meant that:
- Testers could test at their own pace, enabling features on their environment when they were ready and disabling them if they thought that those features were interacting negatively with others or if they considered them incomplete at sign-off time for the sprint.
- Testers could preview a feature during development to give early feedback to the developers if they had capacity.
- "What-if" conversations could be had: "what if feature [X] was enabled for the users without [Y], which appears to be a little dodgy right now?"
- There was no rush at deployment time for getting "just the right build" to deploy: simply deploy the latest and toggle off the features which aren't considered fully-baked.



Friday, 23 February 2018

# NExpect hits 100 releases!

It's probably not such a big deal, but it feels like it to me: NExpect (GitHub, Nuget) has hit its 100th release! Whilst the reason for the 100th release was trivial (adding support to assert the existence of keys by any case in a case-insensitive dictionary), it feels like some kind of milestone.

Of course, it wouldn't be here without the fantastic contributions from Cobus Smit as well as the input from everyone who uses it (even the developers on my team who had it foisted upon them!).

NExpect isn't done with its evolution though. I have a Trello board that I probably should convert to GitHub issues. And I welcome bug reports and requests which align with the ethos of the project:

- `Expect(NExpect).To.Be.Readable();`
    - Because code is for co-workers, not compilers. And your tests are part of your documentation.
- `Expect(NExpect).To.Be.Expressive();`
    - Because the intent of a test should be easy to understand. The reader can delve into the details when she cares to.
- `Expect(NExpect).To.Be.Extensible();`
    - Because I can't predict every use-case. I believe that your assertions framework should enable expressive, readable tests through extension.
Personally, I'm finding NExpect to be a pleasure to use. But perhaps I'm a little biased (:

Monday, 6 November 2017

# What's new in PeanutButter

I realise that it's been a while (again) since I've posted an update about new things in PeanutButter (GitHub, Nuget). I've been slack!

Here are the more interesting changes:
  1. PeanutButter.Utils now has a netstandard2.0 target in the package. So you can use those tasty, tasty utils from netcore2.0
  2. This facilitated adding a netstandard2.0 target for PeanutButter.RandomGenerators -- so now you can use the GenericBuilder and RandomValueGen, even on complex types. Yay! Spend more time writing interesting code and less time thinking of a name for your Person object (:
  3. Many fixes to DeepClone(), the extension method for objects which gives you back a deep-cloned version of the original -- including support for collections and cloning the object's actual type (not the type inferred by the generic usage), so a downcast object is properly cloned now. A quick usage sketch follows this list.
  4. DeepEqualityTester can be used to test shapes of objects, without caring about actual values -- simply set OnlyCompareShape to true. Combined with setting FailOnMissingProperties to false, a deep equality test between an instance of your model and an instance of an api model can let you know if your models have at least the required fields to satisfy the target api.
  5. Random object generation always could use NSubstitute to generate instances of interfaces, but now also supports using PeanutButter.DuckTyping's Create.InstanceOf<T>() to create your random instances. Both strategies are determined at runtime from available assemblies -- there are no hard dependencies on NSubstitute or PeanutButter.DuckTyping.
  6. More async IEnumerable extensions (which allow you to await on a Linq query with async lambdas -- but bear in mind that these won't be translated to, for example, Linq-to-Sql code to run at the server-side):
    1. SelectAsync
    2. ToArrayAsync
    3. WhereAsync
    4. AggregateAsync
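
As a quick illustration of point 3's `DeepClone()` (the `Person` type here is just for the example, and I'm assuming the usual `PeanutButter.Utils` namespace import):

```csharp
using System;
using System.Collections.Generic;
using PeanutButter.Utils;

public class Person
{
    public string Name { get; set; }
    public List<string> Pets { get; set; }
}

public static class DeepCloneDemo
{
    public static void Main()
    {
        var original = new Person
        {
            Name = "Bessie",
            Pets = new List<string> { "cow" }
        };

        var clone = original.DeepClone();
        clone.Pets.Add("dog");

        // collections are cloned too, not shared with the original
        Console.WriteLine(original.Pets.Count);   // 1
        Console.WriteLine(clone.Pets.Count);      // 2
    }
}
```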
There was also some cleanup (I defaulted R# to show warnings, not just errors -- and squashed around 2000 of them) and I moved all projects over to the newer PackageReference mechanism for csproj files.

So, yeah, stuff happened (:

Sunday, 8 October 2017

# Everything sucks. And that's OK.

(Edit: whilst the physical writing of this article was prompted by frustrating interactions with another person, the primary goal is not, in any way, an attack on that person. On the contrary, I was just reminded that I would have learned so much more, so much quicker if I had adopted this mindset a decade-and-a-half ago -- and I guess that my frustration is largely my own fault: I thought that the people around me with higher qualifications and longer time in the industry had already accepted what appears to be quite obvious to me now. I'm also well aware that my communication skills, especially when delivering unfavorable opinions, could do with much improvement. It's a journey.)

There is no perfect code, no perfect language, no perfect framework or methodology. Everything is, in some way, flawed.

This realization can be liberating -- when you accept that everything is flawed in some way, it puts aside petty arguments about the best language, editor, IDE, framework, database -- whatever some zealot is peddling as being the ultimate solution to your problems. It also illuminates the need for all of the different options out there -- new programming languages are born sometimes on a lark, but often because the creator wanted to express themselves and their logical intent better than they could with the languages that they knew. Or perhaps they wanted to remove some of the complexity associated with common tasks (such as memory allocation and freeing, or asynchronous logic). Whatever the reason, there is (usually) a valid one.

The same can be said for frameworks, databases, libraries -- you name it. Yes, even the precious pearls that you've hand-crafted into existence.

In fact, especially those.

We can be blind to the imperfections in our own creations. Sometimes it's just ego in the way. Sometimes it's just a blind spot. Sometimes it's because we tie our own self-value to the things we create.

"You are not your job, you're not how much money you have in the bank. You are not the car you drive. You're not the contents of your wallet. You are not your fucking khakis."

For the few that didn't recognize the above, it was uttered by Tyler Durden from the cult classic Fight Club. There are many highlights one could pick from that movie, but this one has been on my mind recently. It refers to how people define themselves by their possessions, finding their identity in material items which are, ultimately, transient.

Much like how we're transient, ultimately unimportant in the grand scale of space and time that surrounds us. Like our possessions, we're just star stuff, on loan from the universe whilst we experience a tiny fraction of it.

I'd like to add another item to the list above:


You are not the code you have written.



This may be difficult to digest. It may stick in your throat, but some freedom comes in accepting this.

As a developer, you have to be continually learning, continually improving, just to remain marginally relevant in the vast expanse of languages, technologies, frameworks, companies and problems in the virtual sea in which our tiny domains float. If you're not learning, you're left behind the curve. Stagnation is a slow death which can be escaped by shaking up your entire world (eg forcing learning by hopping to a new company) or embraced by simply accepting it as your fate as you move into management, doomed to observe others actually living out their passions as they create new things and you report on it. From creator to observer. You may be able to balance this for a while, you may even have the satisfaction of moving the pieces on the board, but you've given up something of yourself, some part of your creator spirit. You can still learn here -- but you can also survive just fine as you are.

Or at least that's how it looks from the outside. And that's pretty-much how I've heard it described from the inside. I wouldn't know, personally. I'm too afraid to try. I like making things.

This journey of constant learning and improvement probably applies to other professions, especially those requiring some level of creativity and craftsmanship from the practitioner. You either evolve to continue creating or you fade away into obscurity.

And if you are passionate about what you're doing, if you are continually learning, continually hungry to be better at what you do, continually looking for ways to evolve your thought processes and code, then inevitably, you have to look back on your creations of the past and feel...

Displeased.

Often I can add other emotions to the mix: embarrassed, appalled, even loathing. But at the very least, looking back on something you've created, you should be able to see how much better you'd be able to do it now. This doesn't mean that your past creations have no value -- especially if they are actually useful and in use. It just means that a natural part of continual evolution is the realization that everything you've ever done, everything you ever will do, given enough distance of time, upon reflection, sucks.

It starts when you recognize that code you wrote a decade ago sucks. It grows as you realize that code you wrote 5 years, even 2 years ago sucks. It crescendos as you realize that the code you wrote 6 months ago sucks -- indeed, even the code you wrote a fortnight ago could be done better with what you've learned in the last sprint. 

It's not a bad thing. The realization allows you to divorce your self-worth from your creations. If anything, you could glean some of your identity from the improvements you've been able to make throughout your career. Because, if you realize that your past creations, in some way or another, all suck, if you realize that this truth will come to pass for your current and future creations, then you have to also come to the conclusion that recognizing deficiencies in your past accomplishments highlights your own personal evolution.

If you look back over your past code and you don't feel some kind of disappointment, if you can't point out the flaws in your prior works, then I'd have to conclude that you're either stagnating or you're deluding yourself -- perhaps out of pride, perhaps because you've attached your self-worth to the creations you've made. Neither is a good place to be -- you're either becoming irrelevant or you're unteachable and you will become irrelevant.

If you can come to accept this as truth, it also means that you can accept criticism with valid reasoning as an opportunity to learn instead of an attack on your character. 

I watched a video where the speaker posits that code has two audiences: the compiler and your co-workers. The compiler doesn't care about style or readability. The compiler simply cares about syntactical correctness. I've tended to take this a little further with the mantra:


Code is for co-workers, not compilers.



I can't claim to be the original author -- but I also can't remember where I read it. In this case, it does boil down to a simple truth: if your code is difficult for someone else to extend and maintain, it may not be the sparkling gem you think it is. If a junior tasked with updating the code gets lost, calls on a senior for help and that senior can't grok your code -- it's not great. If that senior points that out, then they are doing their job as the primary audience of that code. This is not a place for conflict -- it is a place for learning. Yes, sometimes code is just complex. Sometimes language or framework features are not obvious to an outside programmer looking in. But it's unusual to find complex code which can't be made understandable. Einstein said:

“If you can't explain it to a six year old, you don't understand it yourself.” 

And I'd say this extends to your code -- if your co-workers can't extend it, learn of the (potentially complex) domain, work with the code you've made, then you're missing a fundamental reason for the code to exist: explaining the domain to others through modelling it and solving problems within it.

Working with people who haven't accepted this is difficult -- you can't point out flaws that need to be fixed or, in extreme cases, even fix them without inciting their wrath. You end up having to guerilla-code to resolve code issues or just bite your tongue as you quietly sweep up after them. Or worse -- working around the deficiencies in their code because they insist you depend on it whilst simultaneously denying you the facility to better it.

People like this get easily offended and may use whatever power is available to them to trip you up -- after all, anything you've said about their code or done to improve it is, from their viewpoint, a personal attack. Expect them to get personal with you too, perhaps even publicly. At some point, you begin to fear that you might have to actually work with them or on something they've worked on, because you just know that the drama will come.

There's not a lot you can do about it and the only solace you can find is that you know that they are fading away into irrelevance -- and hopefully you aren't. Also, at some point, we probably all felt the pang when someone pointed out a flaw in our code. Hopefully, as we get older and wiser, this falls away. Personally, I think that divorcing your self-image from your creations, allowing yourself to be critical of the things you've made -- this is one of the marks of maturity that defines the difference between senior developer and junior developer. Not to say that a junior can't master this already -- more to say that I question the "seniority" of a senior who can't do this. It's just one of the skills you need to progress. Like typing or learning new languages.

All of this isn't to say that you can't take pride in your work or that there's no point trying to do your best. We're on this roundabout trying to learn more, to evolve, to be better. You can only get better from a place where you were worse. You can also feel pride in the good parts of what you've created, as long as that is tempered by a realistic, open view on the not-so-good parts. 

You may even like something you've made, for a while, at least. I'm currently in a bit of a honeymoon phase with NExpect (GitHub, Nuget): it's letting me express myself better in tests, it's providing value to a handful of other people -- but this too shall pass. At some point, I'm going to look at the code and wonder what I was thinking. I'm going to see a more elegant solution, and I'm going to see the inadequacies of the code. Indeed, I've already experienced this in part -- but it's been overshadowed by the positive stuff that I've experienced, so I'm not quite at the loathing state yet.

You are not your fucking code. When its faults become obvious, have the grace to learn instead of being offended.
