Friday, 24 February 2017

Polymer: an approach for maximising testability

Recently I've been working with the Polymer framework at a client. Whilst I'm not by any means ready to evangelise for the framework, I can see some reasons why people might use and adopt it.

This post is not about why you should use it, though. It's really just to share my learnings and process in the hope that something in there might be useful.

Technologies which I've ended up using are:
  • Polymer
  • Typescript
  • Gulp
This is not because any of them are definitely the best. Polymer is what the client is already using. Typescript has some advantages which I'll outline below. I first tried Webpack instead of Gulp for the build, but didn't have much joy getting Vulcanize to work with it: I tried two plugins and neither got me all the way there, so I gave up. I'll try again another time, partly because Webpack would let me use proper Typescript imports instead of the <reference /> method, and partly because of its great dev server and on-the-fly transpilation of only the files that changed during development.
What we have still works, so it's Good Enough™ and we can start solving the actual problem we were trying to solve with code, now that we have a working strategy for build/test/distribute.

1. Polymer

What is it?
Polymer is Yet Another Javascript Framework for the front-end, offering a way to create re-usable components for your website ("webcomponents"). Proponents of Polymer will tell you that it's "the next standard" or that "this is how the web will be in the foreseeable future". You'll hear about how styles can't bleed out of one component into another, or how simple it is to write self-contained components. While there's a lot of merit in those last two, the first part hinges on all browsers supporting the proposed HTML imports feature, which works with varying degrees of success across browsers -- and some vendors are even expressing doubt about the proposal (http://caniuse.com/#search=html%20imports).
Whether your browser supports it or not, you can gloss over that with the webcomponentsjs polyfill, which you can get via your package manager of choice (though typically, that's bower). This polyfill also helps with browsers which don't do shadow DOM, another feature you'll need if you want to Polymerize the world.

Why use it?
Just like with any of the other frameworks (Angular, CanJs, Vue, Knockout, Ember, etc), the point is to make it possible to build great front-ends with cleaner, better code. All of these frameworks have their pros and cons -- I'm not an evangelist for any of them, though I'll slip in a word here that I find Angular 1.x the most convenient, and it's still being updated (1.6.1 at last look) -- but I won't go into all of that now as it's a bit of a diversion.

My 2c:
So far, I don't hate it, but I'm not enthralled either. Polymer tries to treat everything as a DOM element, and the analogy falls a little flat for things that should have been the equivalent of Angular services. Don't get me wrong -- in the Angular world, you should be making directives (or, in 2.x, components) -- it's the Right Thing To Do. But often you have little bits of logic which should be neatly boxed in their own space and made available to consumers through some kind of reference/import (harder to refactor against) or some kind of dependency injection (as you might with Angular 1.x services). Angular 2 has a similar concept, but I still find code requiring relative-path imports, and that, combined with there being no (Java|Type)script equivalent of the power wielded by Visual Studio + Resharper for C#, means that re-organising code is more effort than it really needs to be. However, I need to revisit -- perhaps there is a way to bend Angular 2 to my will. But I digress.

It does seem as if unit testing is a secondary concern in the Polymer world -- and that makes me less excited about it. Officially, you're recommended to use Web Component Tester, but it's based on Selenium, which brings some drawbacks to the party: it's slow to start (making the TDD cycle tedious), it's noisy in the console (making grokking the output a mission), and the only way people have managed to get it working in CI is through trickery that can't be achieved on all platforms (like xvfb, or headless Chromium, which uses a similar trick) -- and I need this to work on the client's TFS build servers. So a reasonable amount of effort went into getting testing working in Chrome (for developer machines, because debugging is easier) and PhantomJS (for the build server, because it's properly headless).

2. Typescript

What is it?
Typescript is a language with a transpiler which produces Javascript as a build artefact; that Javascript is what runs in the browser. Typescript is heralded as the panacea for so many of the problems of the web (and Node) worlds because of its typing system, which allegedly makes your code type-safe and saves you from the perils of a weakly typed (or untyped, depending on your perspective) language.

That argument is complete bull muffins.

Simply because I can do this:

interface SomeFancyClass {
  name: string;
}
// compiles just fine; name is just undefined -- like any other property
//   you might attempt to read.
const notFancyAtAllButPretends = {} as SomeFancyClass;

There are also a few oddnesses along the way, like when you define an interface with one optional property and one non-optional property -- suddenly you have to define all the properties which aren't optional, so the code above may not work (this was in TS 1.5, so I'm assuming it's still the case). And other bits.

But I'm not here to hate on Typescript. Initially, I didn't see the point of it, but the biggest wins you'll get out of Typescript (imo) are:
  1. Being able to tell what a function needs by looking at its declaration, instead of guessing from parameter names (or worse: having to read all of the logic inside that function and every subsequent function the arguments are passed on, or partially passed on, to). There's a contrived sketch of this just after the list.
  2. Helping your dumb editor to be less dumb: intellisense in Javascript is a mess. Some editors / IDE's get it right a lot of the time, some get it right some of the time, some just don't bother at all and some (I'm looking at you, Visual Studio) basically suggest every keyword they've ever seen and then crash. With Typescript, the load of figuring out intellisense is offloaded onto the developer, which seems like a bit of a fail at the outset, but the time you spend defining interfaces will be paid back in dev time later. Promise.
  3. vNext Javascript features with the easiest setup: yes, you can go .jsx and Babel all the way, but I found Babel (initially) to be more effort to get working. I now use it as part of my build chain (more on that later), but Typescript is still at the front, so I can get the goodness above. Features I want include async/await (done properly, not to be confused with the .net abortion), generators, more features on built-in types, classes (which confuse the little ones less than prototypes :/ ) and other stuff. Again, I could get most of this from Babel (probably all, if I get hungry with presets), but Typescript has client acceptance and street cred. So, you know, whatever works.
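As a contrived sketch of point 1 (the names here are made up purely for illustration), compare how much a typed declaration tells you up front:

// a hypothetical search function: the declaration alone says what it needs
interface SearchOptions {
  query: string;          // required
  maxResults?: number;    // optional; callers can leave it out
}

function search(options: SearchOptions): Promise<string[]> {
  // implementation elided; the point is the contract above
  return Promise.resolve([]);
}

// the compiler (and your editor) now know exactly what this call requires
search({ query: "polymer testing" });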

Why use it?
Because of all the great points above. If you transpile to es5, you have the simplest setup to get newer features in your code, but you won't be able to use async/await. If you transpile to es6, you can have all the features, but dumb browsers will stumble. This is where I bring in Babel to do what it does well and transpile down to es5 with all the required shims so that everyone can have Promises and other goodies.
In addition, you will need to end up with es5 to satisfy the Polymer build tool, Vulcanize. Which is why my example repo transpiles eventually down to es5.
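For the record, here's a rough sketch of what that two-stage transpile can look like as a gulp task -- the plugins (gulp-typescript, gulp-babel) are real, but the paths and task name are assumptions, not the client's actual setup:

// gulpfile.js (sketch) -- Typescript -> es6 -> Babel -> es5
var gulp = require("gulp");
var ts = require("gulp-typescript");
var babel = require("gulp-babel");

var tsProject = ts.createProject("tsconfig.json"); // tsconfig targets es6

gulp.task("transpile", function () {
  var tsResult = tsProject.src()   // picks up sources per tsconfig.json
    .pipe(tsProject());            // Typescript -> es6
  return tsResult.js
    .pipe(babel({ presets: ["es2015"] })) // es6 -> es5 for dumb browsers
    .pipe(gulp.dest("dist"));
});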
Basically, Typescript allows you to leverage new language features to write more succinct, easier-to-maintain (imo) code.

My thoughts:
I like Javascript, I really do. I hate stupid browsers, and browsers which are "current-gen" but still don't support simple features like Promises. I also hate stupid IDEs, and I absolutely loathe the time wasted restarting an editor that crashes often (so I don't use VS for js or ts, sorry) or waiting for it to (re-)load all the code in the domain so you can call some of it. Typescript helps me to be more productive, so, after initially being quite open about not seeing the point, play-testing has made me like it.

3. Gulp

What is it?
There are more than enough Javascript build systems, but the ones people tend to make a noise about are Grunt, Gulp and Webpack.

I came into the game late, so Grunt was already being succeeded by Gulp. Gulp is analogous to Make in that you define tasks with their own logic and dependencies. It's very powerful because there are bajillions of modules built for it, and it does practically everything with pipes, so it can be quite elegant. Like Grunt, it doesn't have a focused purpose: it's a task system rather than a build system, but it suits builds very well, with some work.

Webpack is quite a focused build system which also has a development server with the ability to automatically rebuild and reload when changes are made, making the design-dev feedback cycle pleasantly tight.
Unfortunately, I haven't (as yet) managed to get Webpack to play nicely with Vulcanize (the tool used to compress and optimise Polymer components). I read that it's possible, so it's quite likely that my Webpack-fu is simply lacking. At any rate, the client is already using Gulp, so it's accepted there and easier for them to maintain. So Gulp it is.

Why use it?
You need some process to perform the build chain:
  • Transpile (and hopefully lint) Typescript
  • Run tests
  • Pack / optimize for distribution / release
You could use batch files for all I care, but having a tasking system allows you to define small tasks that are part of the whole and then get all of them to run in the correct order. Gulp is well supported and has more plugins than you can shake a stick at. There's a lot of documentation, blog material and StackOverflow questions/answers, so if you need a primer or hit a problem, finding information is easy.
On the flip side, it is just a tasking system, so you're going to write more code than with Webpack -- but you'll also have total control.

My 2c:
I've used Gulp a reasonable amount before. I even have a free, open-source collection of gulp tasks for the common tasks involved in building and testing .net projects (and projects with karma-based Javascript tests). It works well and I'm not afraid of the extra time to set up. If you break your tasks up into individual files and use the require-dir npm module to source them into your master gulpfile.js, you can get a lot of re-usability and easy-to-manage code.
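For example (a minimal sketch -- the ./gulp-tasks folder name is just an assumption), the master gulpfile.js can shrink to almost nothing:

// gulpfile.js (sketch): pull in every task file from ./gulp-tasks
var requireDir = require("require-dir");
requireDir("./gulp-tasks");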

Bootstrapping a Polymer project

Example code for the below can be found at: https://github.com/fluffynuts/polymer-ts-scratch, which is free to use and clone, if you find it useful.

Since testing is of primary importance, I need to know that I have the tooling available to write and run tests, both on my dev machine and on the build server. Tests need to run reasonably quickly so that the TDD cadence isn't tedious. Web Component Tester fails on both counts, so we're looking at using Karma to run the tests; since we're going to need a DOM, the pure Jasmine or Mocha runners on their own won't help.

The testing strategy is to have all Polymer components properly registered and actually create them in the DOM, after which they can have methods and properties stubbed/spied for testing and we can test as we would any Javascript logic.
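In practice, a test can look something like the following Jasmine-flavoured sketch. The element name comes from the example repo, but the method being spied on ("onSearch") is an assumed name, purely for illustration:

describe("sts-entry", () => {
  it("should be upgraded to a real Polymer element", () => {
    const element = document.createElement("sts-entry") as any;
    document.body.appendChild(element);
    // if Polymer registered the element properly, its $ property exists
    expect(element.$).toBeDefined();
    document.body.removeChild(element);
  });

  it("can have its methods stubbed like any other object", () => {
    const element = document.createElement("sts-entry") as any;
    document.body.appendChild(element);
    spyOn(element, "onSearch"); // "onSearch" is assumed, not from the repo
    // ... trigger whatever behaviour should invoke the handler, then assert ...
    document.body.removeChild(element);
  });
});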

Now, running Jasmine tests via Karma isn't all that novel. But the existing team had been unable to run their tests at the build server because they couldn't get Polymer tests to run in PhantomJS. That turned out to be just a timing issue: whilst Chrome has Polymer bootstrapped before the elements are loaded, Phantom appears to do some of its work either out of order or just plain asynchronously (more likely), so a first pass at testing Polymer components in PhantomJS yields negative results: essentially, your Polymer components aren't created -- you just get arbitrary "unknown" elements in the page -- and it becomes obvious when {element}.$ (Polymer's way of getting Polymer stuff attached to your element) is undefined.

However, timing issues with test bootstrapping are something I've seen with Karma before. I remembered futzing about with window.__karma__.start, and the magic comes in from test-setup.js, where I hijack Karma's start function and kick it off manually after webcomponents-lite.js has run all of its logic and emitted the WebComponentsReady event.
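The essence of that hijack is small; this is a minimal sketch (the real test-setup.ts in the example repo is the reference):

// test-setup.ts (sketch): defer karma's start until Polymer is ready
const karma = (window as any).__karma__;
const realStart = karma.start;

// replace start with a no-op so karma doesn't kick off too early
karma.start = () => { /* deferred until WebComponentsReady */ };

// webcomponents-lite.js emits this once the polyfill has done its thing
window.addEventListener("WebComponentsReady", () => {
  realStart.call(karma);
});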

So now the tests are running in PhantomJS. Great.

But I also noticed that Polymer does a lot in the DOM via the element's template. Since the template is rendered to the final result once the element has been registered, testing that the template has been set up correctly can be tricky. You could test behaviour (trigger an {enter} keypress on your search entry and check that your search handler was called) -- and that's not wrong, but it's perhaps a step higher than I want to be initially (though it's a test which should come eventually).
I'd like to break this down into 3 tests:
  1. Does the template have the on-search attribute defined and set to bind to my handler?
  2. Does my Polymer element actually implement a method with that name (ie, does the handler exist)?
  3. Can I trigger the handler from the expected user behaviour in the element (ie, pressing {enter} in the search box)?
I want to do this to make failures more obvious: if we change the template, two tests fail -- the behavioural one and the one testing that our template is correctly defined -- so we know where the problem is. Likewise, if we change the Polymer element code (ie, rename or remove the function), we get two failures which point us to fixing the script code, not the template.
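As a skeleton (again, "onSearch" is an assumed handler name, not one from the repo), the breakdown looks something like this:

describe("sts-entry: search binding", () => {
  it("should declare on-search in the template, bound to the handler", () => {
    // template-level test; see the jsdom approach below
  });

  it("should implement the bound handler", () => {
    // e.g. expect(typeof element.onSearch).toBe("function");
  });

  it("should invoke the handler when {enter} is pressed in the search box", () => {
    // behavioural test, driven through the rendered element
  });
});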

The problem, as stated above, is that you can't easily see the template code once the element is registered: creating an instance of the element wipes out template code in favour of final, rendered code.

I've used jsdom before and it's really cool: an implementation of a browser DOM which can be used from within NodeJS, for when you want to do a lot of the work that a browser does without actually invoking a browser.
Being a Node module, it's all split out neatly into different functional scripts, require()'d in as necessary. This won't play nicely in the browser, but we can use browserify to deal with that (indeed, there's a great discussion about jsdom and browserify here, including a bit of a tongue-in-cheek discussion about not having a DOM in the browser...).

browserify -r jsdom -s jsdom -o jsdom.js
This works fine in Chrome because browserify only bundles up the code -- it doesn't do any ES transforms on it and the jsdom code contains keywords like const.
The first time I ran this in PhantomJS, it barfed and failed miserably -- but that's OK: we already have Babel in the project (see the stuff about Typescript and es5/es6 above), so we can transform to es5, include that instead, and PhantomJS is happy.

The above has been captured in the npm script "jsdom" in the example repo.

So now the strategy for testing templates is:
  1. Use jQuery to get the template as raw text from the Karma server with $.get()
  2. Parse with jsdom
  3. Keep the useful template artefact in a variable that tests can get to. I normally prefer not to store stuff like this over the lifetime of a test suite, but I'd rather not go through (1) and (2) too often. There's a sketch of this just below.
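A sketch of that setup follows; the path and the jsdom call are assumptions for illustration, and the specs in the example repo are the reference:

declare const $: any;      // jQuery, loaded first by karma
declare const jsdom: any;  // the browserified jsdom bundle, available as a global

describe("sts-consumer template", () => {
  let templateDoc: any;

  beforeAll((done) => {
    // 1. fetch the raw element source from the karma server
    $.get("/base/src/elements/sts-consumer/sts-consumer.html", (raw: string) => {
      // 2. parse it with jsdom, without registering or rendering anything
      templateDoc = jsdom.jsdom(raw);
      done();
    });
  });

  it("should define a template for sts-consumer", () => {
    // 3. assert against the parsed source rather than the rendered element
    expect(templateDoc.querySelector("dom-module template")).toBeTruthy();
  });
});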
Karma configuration is required to load the following first:
  • jQuery
  • babel-polyfill

This is because I found that if they weren't loaded first, I'd get errors about something trying to extend an object which wasn't supposed to be extended. We need babel-polyfill to support the es5'd jsdom we made earlier. We also get Karma to serve up the jsdom.js we created above (by serving everything under src/specs/lib), ensuring that it's embedded in the Karma test page so we can use the global jsdom declared in our test-utils/interfaces.ts. To reiterate: load order is important.
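The relevant part of the Karma config ends up looking something like this sketch (the paths are illustrative assumptions, not the exact ones from the repo):

// karma.conf.js (sketch) -- the ordering is the important part
module.exports = function (config) {
  config.set({
    frameworks: ["jasmine"],
    files: [
      "node_modules/jquery/dist/jquery.js",           // jQuery first
      "node_modules/babel-polyfill/dist/polyfill.js", // then the polyfill for the es5'd jsdom
      "src/specs/lib/**/*.js",                        // the browserified, babel'd jsdom.js
      // ... webcomponents-lite.js, the elements and the transpiled specs follow ...
    ]
  });
};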

The highlights from here are:
  • src/specs/test-setup.ts shows how we can hijack the Karma start function and call it when we're good and ready.
  • src/specs/sts-entry.ts shows some rudimentary testing of a Polymer element as loaded into the DOM, as it would be in live code
  • src/specs/sts-consumer.ts shows some rudimentary DOM testing of the template for sts-consumer (which is a contrived example: it simply wraps an sts-entry in a div)
Another goodie I found during this process is gulp-help, which makes your gulpfile even more discoverable for other team members with very little work. Not only can you annotate simple help text for tasks, you can also have the help omit tasks by providing false for the "help" argument. That way, you end up with succinct help for your most interesting (typically top-level) tasks. Win!
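A small sketch of how that looks (the task names and bodies are made up):

// gulpfile.js (sketch) using gulp-help
var gulp = require("gulp-help")(require("gulp"));

// annotated task: shows up when you run `gulp help`
gulp.task("build", "Transpiles, tests and vulcanizes everything", function () {
  // ...
});

// hidden task: passing false as the help text keeps it out of the listing
gulp.task("clean-intermediates", false, function () {
  // ...
});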

Wednesday, 15 February 2017

Gentoo adventures: overlays are great -- but you can make them even better!

AKA "how I was going to code but got side-tracked with interesting Gentoo stuff instead"


One of the features I enjoy about Gentoo is overlays which are functionally equivalent to Ubuntu's PPA repositories: places where you can get packages which aren't officially maintained by the main channel of the distribution.

Of course, just like with PPAs, you need to pay attention to the source from which you're getting this software -- you are about to install software onto your Linux machine, which requires you to run as root and said software could do nefarious stuff during installation -- let alone after installation, when you're running that software.

So, naturally, just like with any software source (in the Windows world, read: ALL software, because basically none of it is vetted by people with your interests in mind), you need to check that (a) you're happy with the advertised functionality of said software and (b) you trust the source enough to believe the software upholds the contract to provide that functionality -- and only that functionality -- and won't trash your system, leak your passwords, kill your hamster, drink all your beer or wall-hack in an fps while trash-talking you, or do anything else stupid like that... But I digress.

Note that in the discussion below, I'll use the term "package" because that's logically what I'm used to; however the more correct term in Gentoo land is "atom" -- and I'll use that sometimes too (: I'm slowly evolving (:

Anyway, warnings about nasty coders aside, there's another concern: package clashing. Let's say, for example, you're looking for a source for your favourite editor (it may even be VSCode, which is a reasonable editor, though if I really had to pick a favourite, it would be (g)vim -- but VSCode is high up on the list). Anyway, let's say you were looking for VSCode on Linux (which is possible) and you happened to find an overlay providing it. That overlay may also provide other packages with the same category/name as packages in the main source set. Or it may conflict with another overlay for a similar reason. So a good idea is to start by masking the entire overlay, adding the following to a file in /etc/portage/package.mask (I have an overlays file in there, and add one line per overlay):

*/*::{overlay name}

where {overlay name} is obviously replaced by the name of the new overlay you added with layman (reminder: you add an overlay with layman -f -a {overlay name})

Next, you unmask the package you actually want from that overlay, in /etc/portage/package.unmask (again, I have an overlays file in there for this purpose):

### {overlay name}
{category}/{atom}

or, a concrete example, using visual studio code:

### jorgicio
app-editors/visual-studio-code

Now you can do an emerge -a {atom} and see if you have other requirements to meet (licenses or keywords, for example: many overlays will require the testing keyword for your architecture, which for me is ~amd64).

The advantage of the above is that you have far less chance of overlays duking it out over who provides the packages you want, as a lot of overlay maintainers maintain many more than just one package in their overlays. You can still search for packages which exist in those overlays with emerge --search {atom} and unmask them as required. Also, don't forget that you can find packages (and their overlays) on the great Zugaina site!

In addition to the above, I discovered today that, in line with Gentoo's philosophy of requiring the user to actively select installations (instead of just foisting them on the user), an overlay added with layman will, by default, not auto-update. I expected that emerge --sync would also sync added overlays -- and by default, it won't. You can manually sync overlays with layman's --sync command (passing an overlay name or the magic ALL string to sync all of your added overlays), but I'd prefer to have this as part of my regular sync, analogous to apt-get update being applicable to all sources.
Indeed, I only figured this out after installing visual-studio-code and wondering why the editor kept reminding me to update when I couldn't see an update in the overlay I was referencing -- it was all because the "index" for that overlay wasn't being synced... So the next issue is auto-updating.

If, like me, you'd like to auto-update, a quick look at emerge's rather extensive man page, under the --sync section, shows the following:

--sync Updates  repositories,  for which auto-sync, sync-type and sync-uri attributes are set in repos.conf

Interesting.

It turns out that repos.conf, in my case at least (and I believe it should be for any modern Gentoo system), is a directory (/etc/portage/repos.conf) with two files -- the all-important gentoo.conf for the main source and layman.conf, which contains entries for each added overlay. And in layman.conf, we see sections like:

[steam-overlay]
priority = 50
location = /var/lib/layman/steam-overlay
layman-type = git
auto-sync = No


The priority is also interesting -- because you can use this to decide, when there is a package conflict between sources providing the same atom, where that atom should come from. But it's not the topic of discussion today. Today, I'm more interested in that last line. Changing:

auto-sync = No

to

auto-sync = Yes

for each overlay yielded the results I wanted: when I emerge --sync, my overlays are updated too and I can get updates from them. Of course, the emerge --sync command takes a little longer to run -- but I don't mind: I can get shiny new stuff! Combined with the postsync.d trick outlined here to update caches and local indexes, you can also have fast emerge searching. FTW.

And now you can too (:

Parting tidbit: if perhaps you wanted to apply the masking/unmasking strategy from above to existing overlays, you can use the following command to list installed atoms from an overlay:

equery has repository {overlay}

then apply the */*::{overlay} mask and unmask the installed packages.
