<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[boneskull dot com]]></title><description><![CDATA[crude, but effective]]></description><link>https://boneskull.com/</link><image><url>https://boneskull.com/favicon.png</url><title>boneskull dot com</title><link>https://boneskull.com/</link></image><generator>Ghost 2.0</generator><lastBuildDate>Mon, 06 Apr 2026 05:57:45 GMT</lastBuildDate><atom:link href="https://boneskull.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Recursive Directory Removal in Node.js]]></title><description><![CDATA[Recursive directory removal has landed in Node.js v12.10.0!  Here's a little story about how we got there.]]></description><link>https://boneskull.com/recursive-directory-removal-in-node-js/</link><guid isPermaLink="false">5d72e247f049103e7f1b00ab</guid><category><![CDATA[node.js]]></category><category><![CDATA[rimraf]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Mon, 09 Sep 2019 17:14:50 GMT</pubDate><media:content url="https://boneskull.com/content/images/2019/09/Sierpinsky_triangle_-evolution-.png" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2019/09/Sierpinsky_triangle_-evolution-.png" alt="Recursive Directory Removal in Node.js"><p>Recursive directory removal <a href="https://github.com/nodejs/node/pull/29168">has landed</a> in Node.js v12.10.0!</p><p>This has been a long-standing feature request.  
New Node.js developers often express incredulity when they discover this particular “battery” isn’t included in Node.js.</p><p>Over the years, userland modules (<a href="https://npm.im/rimraf"><strong>rimraf</strong></a>, <a href="https://npm.im/rmdir"><strong>rmdir</strong></a>, <a href="https://npm.im/del"><strong>del</strong></a>, <a href="https://npm.im/fs-extra"><strong>fs-extra</strong></a>, etc.) have heroically provided what core did not. Thanks to the superbad maintainers-of and contributors-to these packages!</p><p>Here's a little story about how it came to pass—and why something <em>so seemingly simple</em> as <code>rm -rf</code> isn’t necessarily so.</p><h2 id="about-node-js-filesystem-operations">About Node.js’ Filesystem Operations</h2><p>First, I want to explain a bit about how Node.js works under the hood with regard to filesystem operations.</p><p><a href="https://libuv.org"><strong>libuv</strong></a> provides filesystem operations to Node.js.  Node.js’ <code>fs</code> module is just a JavaScript file which provides the <code>fs.*</code> APIs; those APIs call into an internal C++ binding (you could think of this as a “native module”). That binding is <em>glue</em> between <strong>libuv</strong> and the JavaScript engine (<strong>V8</strong>).</p><p>Here’s an example.  At the lowest level, <strong>libuv</strong> provides a C API (<code>uv_fs_rmdir</code>) to make the system call to remove a directory.</p><pre><code class="language-js">const fs = require('fs');

// `rmdir` is just a function which calls into a C++ binding.
// The binding asks libuv to remove the &quot;/tmp/foo&quot; directory.
// Once libuv returns a result, the binding calls `callback`
fs.rmdir('/tmp/foo', function callback(err) {
  if (err) {
    // handle error
  }
});
</code></pre>
<p>Importantly, Node.js makes only a <em>single call</em> to <strong>libuv</strong> in the plain <code>fs.rmdir</code> example above.</p><p>In fact, until recently, Node.js’ <code>fs</code> bindings followed a pattern: single calls into <strong>libuv</strong>. <code>fs.readFile</code>, <code>fs.stat</code>, <code>fs.unlink</code>; these are all just <em>one</em> call.</p><p><em>Oh</em>, that recent change? It was <a href="https://nodejs.org/api/fs.html#fs_fs_mkdir_path_options_callback">recursive <code>fs.mkdir</code></a>.  I’ll explain what makes it different.</p><h2 id="shell-operations-vs-system-operations">Shell Operations vs. System Operations</h2><p>Developers may not think about this much because it’s so well-abstracted by our tools.  Take <code>mkdir</code>, for example:</p><pre><code class="language-bash">$ mkdir ./foo
</code></pre>
<p><code>mkdir</code> is a command-line utility (which flavor, exactly, depends on your operating system).  It’s <em>not</em> a system call.  The above command may only <em>execute</em> a single system call, but the following may execute several:</p><pre><code class="language-bash"># creates dirs foo, then bar, then baz, ignoring dirs that already exist
$ mkdir -p ./foo/bar/baz
</code></pre>
<p>Unless our tools have <em>transactional</em> behavior—they can “commit” or “roll back” operations—it’s possible for this command to <em>partially</em> succeed (though that may not be obvious in this particular case—trust me).</p><p>What happens if <code>mkdir -p</code> fails halfway through?  <em>It depends.</em> You get zero or more new directories.  Yikes!</p><p>If that seems weird, consider that the user may <em>want</em> to keep the directories it <em>did</em> create.  It’s tough to make assumptions about this sort of thing; cleanup is best left to the user, who can deal with the result as they see fit.</p><p>How does this relate to Node.js?  When a developer supplies the <code>recursive: true</code> option to <code>fs.mkdir</code>, Node.js will potentially ask <strong>libuv</strong> to make <em>several</em> system calls—<em>all, some, or none</em> of which may succeed.</p><p>Prior to the addition of recursive <code>fs.mkdir</code>, Node.js had no precedent for this behavior.  Still, its implementation is relatively straightforward; when creating directories, the operations must happen both <em>in order</em> and <em>sequentially</em>—we can’t create <code>foo/bar/baz/</code> before we create <code>foo/bar/</code>!</p><p>It may be surprising, then, that a recursive <code>rmdir</code> implementation is another beast entirely.</p><h2 id="there-was-an-attempt">There Was An Attempt</h2><p>I was likely not the first to attempt to implement a recursive <code>rmdir</code> in Node.js at the C++ level, but I <em>did</em> try, and I’ll explain why it didn’t work.</p><p>The idea was that a C++ implementation could be more performant than a JavaScript implementation—that’s probably true!</p><p>Using <code>mkdir</code> as a template, I began coding.  
My algorithm would perform a depth-first traversal of the directory tree using <strong>libuv</strong>’s <code>uv_fs_readdir</code>; when it found no more directories to descend into, it would call <code>uv_fs_unlink</code> on each file therein. Once the directory was clear of files, it would ascend to the parent, and finally remove the now-empty directory.</p><p>It worked!  I was very proud of myself.  Then I decided to run some benchmarks against <a href="https://npm.im/rimraf"><strong>rimraf</strong></a>. Maybe I shouldn't have!</p><p>I found out that my implementation was faster for a very small <em>N</em>, where <em>N</em> is the number of files and directories to remove.  But <em>N</em> didn’t have to grow very large for userland's <strong>rimraf</strong> to overtake my implementation.</p><p>Why was mine slower?  Besides using an unoptimized algorithm, I used recursive <code>mkdir</code> as a template, and <code>mkdir</code> works <em>in serial</em> (as I mentioned above).  So, my algorithm only removed <em>one file</em> at a time.  <strong>rimraf</strong>, on the other hand, queued up many calls to <code>fs.unlink</code> and <code>fs.rmdir</code>.  Because <strong>libuv</strong> has a thread pool for filesystem operations, it could speedily blast a directory full of files, limited only by its number of threads!</p><blockquote>Note that when we say Node.js is single-threaded, we're talking about the <strong>programming model</strong>.  Under the hood, I/O operations are multithreaded.</blockquote><p>At this point, I realized that if it was going to be “worth it” to implement at the C++ layer—meaning a significant performance advantage which outweighs the maintenance costs of more C++ code—I’d have to rewrite the implementation to manage its <em>own</em> thread pool.  Of course, there’s no great precedent for <em>that</em> in Node.js either.  
It’d be possible, but very tricky, and best left to somebody with a better handle on C++ and multithreaded programming.</p><p>I went back to the <a href="https://github.com/nodejs/tooling">Node.js tooling group</a> and explained the situation.  We decided that the most feasible way forward would be a pure-JavaScript implementation of recursive directory removal.</p><h2 id="let-s-write-it-in-javascript-">Let’s Write It In JavaScript!</h2><p>Well, that was the idea, but we didn’t get very far.  We took a look at the <a href="https://github.com/isaacs/rimraf/blob/master/rimraf.js">source of <strong>rimraf</strong></a>, which is the most popular userland implementation. It’s not as straightforward as you’d expect!  It covers many edge cases and peculiarities (and all of those hacks would need to be present in a Node.js core implementation; it needs to work like a consumer would expect).</p><p>Furthermore, <strong>rimraf</strong> is stable, and these workarounds have proven themselves to be robust over the years that it’s been consumed by the ecosystem.</p><p>I won’t attempt to explain what <strong>rimraf</strong> must do to achieve decent performance in a portable manner—but rest assured it’s sufficiently <em>non-trivial</em>.  <em>So</em> non-trivial, in fact, that it made more sense to just <em>pull <strong>rimraf</strong> into Node.js core</em> instead of trying to code it again from scratch.</p><p>So that’s what we did.</p><h2 id="it-s-just-rimraf">It’s Just rimraf</h2><p><a href="https://github.com/iansu">Ian Sutherland</a> extracted the needed code from <strong>rimraf</strong>.  In particular, <strong>rimraf</strong> supplies a command-line interface, and we didn’t need that.  For simplicity (and to eliminate dependencies) glob support (e.g., <code>foo/**/*.js</code>) was also dropped (though it <a href="https://github.com/nodejs/tooling/issues/38">may still have a future</a>).  
After this, it was a matter of integrating it into a Node.js-style API and adding the needed docs and tests.</p><p>To be clear, recursive directory removal in Node.js does <em>not</em> make rimraf obsolete. It <em>does</em> mean that for many use cases, Node.js’ <code>fs.rmdir</code> can get the job done.  Stick with <strong>rimraf</strong> if you need globs or a portable command-line utility.</p><p>Thanks to <a href="https://github.com/isaacs/">Isaac Schlueter</a> for <strong>rimraf</strong>—and for blessing Node.js’ copy-and-paste efforts.</p><h2 id="in-conclusion">In Conclusion</h2><p>That’s the story of Node.js’ recursive <code>rmdir</code> thus far. Want to help write the rest? Come participate in the <a href="https://github.com/nodejs/tooling">Node.js Tooling Group</a>, where we’re looking to make Node.js <em>the best platform it can be</em> for building CLI apps.</p><blockquote>Acknowledgements to Isaac Schlueter and Ian Sutherland for reviewing this post.</blockquote>]]></content:encoded></item><item><title><![CDATA[Mocha v6 adds Configuration File Support & Drops Node.js v4.x]]></title><description><![CDATA[Mocha v6 will drop Node.js v4 support and add support for configuration files.]]></description><link>https://boneskull.com/mocha-v6/</link><guid isPermaLink="false">5c0ee468f049103e7f1b0076</guid><category><![CDATA[mocha]]></category><category><![CDATA[testing]]></category><category><![CDATA[node.js]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Tue, 11 Dec 2018 16:00:00 GMT</pubDate><content:encoded><![CDATA[<p>The next major release of Mocha, v6.0.0, will be released "in the near future."   
Since it contains significant changes to command-line flags and configuration, we're being cautious and plan on publishing one or more prerelease versions.</p><p>Any prerelease versions will be installable via <code>npm install mocha@next</code>.</p><p>Notably, given Mocha's commitment to only supporting <em>maintained</em> versions of Node.js, <strong>Mocha v6.0.0 will drop support for Node.js v4.x.</strong></p><p>But the big story is configuration file support!  Let's start with a little history.</p><h2 id="mocha-opts">mocha.opts</h2><p>The need for configuration was recognized early in Mocha's history.  </p><p>Before v6, the way you'd "configure" Mocha for command-line use would be to create a <code>mocha.opts</code> file.  This file would contain <em>actual</em> command-line flags which would be essentially grafted on to Node.js' <code>process.argv</code> Array.  These flags would then be combined with any user-supplied command-line arguments, then passed to the secondary <code>_mocha</code> executable.   This is how Mocha is able to provide "direct" support for Node.js flags--by invoking a child <code>node</code> process.</p><p>I'd call this method of "configuring" a command-line app "unusual" at best; no effort was made to intelligently reconcile <code>mocha.opts</code> with user-supplied  arguments.  It was easy to encounter conflicts or other bad weirdness when using <code>mocha.opts</code>.</p><p>Several years ago, we knew that Mocha needed actual honest-to-goodness configuration files.  A lack of maintenance resources meant it just wasn't a high priority.</p><p>Meanwhile, technical debt kept accumulating around Mocha's command-line option parsing.  Coupled with new features in Node.js (e.g., <code><a href="https://nodejs.org/api/process.html#process_process_allowednodeenvironmentflags">process.allowedNodeEnvironmentFlags</a></code>), it became pragmatic to refactor this system.  
</p><h2 id="option-parrrsing">Option Parrrsing</h2><p>With v6, Mocha adopts the powerful <a href="http://yargs.js.org">yargs</a> for argument parsing.  It offers features and control that Mocha had been missing, and ultimately provides not just a better<em> developer</em> experience, but a better experience for the user of Mocha. </p><p>yargs just so happens to support configuration ("RC") files <em>and </em>loading of options via <code>package.json</code> out-of-the-box.  Since we were already shredding the option-parsing code, this refactor became a great opportunity to tackle Mocha's "configuration" problem.</p><blockquote>Even if it <em>was</em> a great opportunity, it was still more work than anticipated!  This article is <em>not </em>a post-mortem, however.  In the end, it was worth the effort, and I think Mocha's users will appreciate it.</blockquote><p>With the help of yargs, Mocha v6 supports configuration via JS, JSON, or YAML RC file, <code>package.json</code> <em>and</em> <code>mocha.opts</code>.  </p><h2 id="introducing-mocharc-whatever">Introducing .mocharc.whatever</h2><p>Instead of (or <em>in addition to</em>, if you please) <code>mocha.opts</code>, Node.js users of Mocha can now create a <code>.mocharc.js</code>, <code>.mocharc.json</code>, <code>.mocharc.yml/yaml</code>, or add a <code>mocha</code> property to a <code>package.json</code>.  This is the same kind of thing that, say, <a href="http://eslint.org">ESLint</a> supports (though not identical).</p><p>Here's an example <code>mocharc.yaml</code> containing Mocha's defaults:</p><pre><code class="language-yaml">diff: true
extension:
  - js
opts: ./test/mocha.opts
package: ./package.json
reporter: spec
slow: 75
timeout: 2000
ui: bdd
</code></pre>
<p>This can be JSON instead:</p><pre><code class="language-json">{
  &quot;diff&quot;: true,
  &quot;extension&quot;: [&quot;js&quot;],
  &quot;opts&quot;: &quot;./test/mocha.opts&quot;,
  &quot;package&quot;: &quot;./package.json&quot;,
  &quot;reporter&quot;: &quot;spec&quot;,
  &quot;slow&quot;: 75,
  &quot;timeout&quot;: 2000,
  &quot;ui&quot;: &quot;bdd&quot;
}
</code></pre>
<p>If you please, that same JSON could be in the <code>mocha</code> property of your <code>package.json</code>.  If you need some special logic, here's the same in JavaScript:</p><pre><code class="language-js">module.exports = {
  diff: true,
  extension: ['js'],
  opts: './test/mocha.opts',
  package: './package.json',
  reporter: 'spec',
  slow: 75,
  timeout: 2000,
  ui: 'bdd'
};

</code></pre>
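<p>And if you’d rather keep it in <code>package.json</code>, the same options nest under a <code>mocha</code> property (the surrounding fields here are illustrative):</p>

```json
{
  "name": "my-project",
  "version": "1.0.0",
  "mocha": {
    "extension": ["js"],
    "reporter": "spec",
    "slow": 75,
    "timeout": 2000,
    "ui": "bdd"
  }
}
```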
<p>Each option name corresponds to a command-line option as listed in <code>mocha --help</code> (which has also been overhauled).</p><h2 id="things-to-know-about-options">Things to Know about Options</h2><ol><li>On the command-line, any boolean flag can be negated by prepending <code>--no-</code> to the flag.  For example, <code>--no-diff</code> will disable diffs.  This is appropriate for <code>mocha.opts</code>.  But in a configuration file, you should use <code>diff: false</code> or its equivalent.  You <em>could</em> use <code>no-diff: true</code>, but that's silly, right?</li><li>Any option of type <code>array</code> (use <code>mocha --help</code> to see these) can be specified multiple times.  These will be concatenated, so a <code>require: esm</code> in your <code>.mocharc.yml</code> and a <code>--require my-thing</code> on the command-line will result in <code>"require": ["my-thing", "esm"]</code>.  Command-line arguments are prepended to the array.</li><li>Mocha loads config using the following priority: command-line args first, RC file second, <code>package.json</code> third, <code>mocha.opts</code> fourth, and then finally Mocha's own defaults.</li><li>To specify a test file or directory in an options file, use the <code>spec</code> option.  If you care about order, use <code>file</code> instead.</li><li>Aliases are allowed in config files; see these in Mocha's "help" output.</li><li>When run via <code>mocha</code>, Node.js and V8 flags are <em>also</em> supported in configuration files and <code>package.json</code> (<code>mocha.opts</code> already did this).</li><li><em>Any</em> V8 flag can be supplied by prepending <code>--v8-</code> to the flag name.  For example, if you wanted <code>--randomize-hashes</code>, that'd be <code>--v8-randomize-hashes</code> on the command line, or <code>v8-randomize-hashes: true</code> in a YAML config.  
Only supported when running Mocha via <code>mocha</code>.</li><li>Mocha only supports the flags your current version of <code>node</code> supports.  Again, only supported when running Mocha via <code>mocha</code>.</li><li>Unknown flags/options are ignored; <code>--butts</code> does nothing</li></ol><h2 id="try-it-out-">Try It Out!</h2><p>Some example config files are available <a href="https://github.com/mochajs/mocha/tree/master/example/config">in the repo</a>, and our <a href="https://mochajs.org">documentation</a> (<a href="https://github.com/mochajs/mocha/issues/3206">unfortunately</a>) already reflects these changes.</p><p>When released--likely this week--I encourage you to check out the first prerelease of Mocha v6, which should be version <code>6.0.0-0</code>, installable via <code>npm i mocha@next</code>.</p>]]></content:encoded></item><item><title><![CDATA[create-yo: Use any Yeoman generator. Don't install stuff.]]></title><description><![CDATA[TL;DR: Try using npm init yo <generator> instead of npm install -g yo generator-<generator>; yo <generator>.]]></description><link>https://boneskull.com/create-yo/</link><guid isPermaLink="false">5bf34dbaf049103e7f1b0064</guid><category><![CDATA[node.js]]></category><category><![CDATA[npm]]></category><category><![CDATA[yeoman]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Tue, 20 Nov 2018 16:00:00 GMT</pubDate><content:encoded><![CDATA[<p>I wanted to introduce a little tool I made called create-yo.<br>Beginning in <a href="https://blog.npmjs.org/post/174001864165/v610-next0">npm v6.1.0</a>, <code>npm init &lt;something&gt;</code> lets you magically invoke a scaffolding tool (think <a href="https://npm.im/create-react-app">create-react-app</a>) using <a href="https://medium.com/@maybekatz/introducing-npx-an-npm-package-runner-55f7d4bd282b">npx</a>.</p><blockquote>If you’re not familiar with <code>npx</code>, it’s a tool installed alongside <code>npm</code> which allows execution of 
command-line Node.js packages <em>without</em> first having to <code>npm install --global &lt;package-name&gt;</code> .</blockquote><p>For example, running <code>npm init react-app</code> will use <code>npx</code> to grab <code>create-react-app</code>, then execute its script specified in its <code>bin</code> property of its <code>package.json</code>.  In other words, <code>npm init react-app</code> is the same as invoking <code>npx create-react-app</code>.</p><p>Since the scaffolding tool <a href="http://yeoman.io">Yeoman</a> has a large ecosystem already, I thought it might be cool to piggyback on the new(-ish) functionality of <code>npm init</code>.</p><h2 id="example">Example</h2><p>To run any Yeoman generator, all you need to do (given <code>npm</code> v6.1.0+ and Node.js v8.0.0+) is execute:</p><pre><code class="language-bash">$ npm init yo foo
</code></pre>
<p>…where <code>foo</code> refers to package <code>generator-foo</code>.  A more concrete example—to run Yeoman’s own <a href="https://npm.im/generator-generator">generator-generator</a>, would be this:</p><pre><code class="language-bash">$ npm init yo generator
</code></pre>
<p>After <code>npx</code> does its thing, you’ll be prompted to complete the wizard.<br>It also supports subgenerators, e.g.:</p><pre><code class="language-bash">$ npm init yo generator:subgenerator
</code></pre>
<p>Invokes the <code>subgenerator</code> subgenerator of the <code>generator</code> generator. Yep.</p><h2 id="for-the-curious">For The Curious</h2><p><code>npx</code> calls <code>create-yo</code>’s executable, which in turn invokes <code>npx</code> (via <a href="https://npm.im/libnpx">libnpx</a>) to run <code>yo</code>’s executable.  It uses <code>npx</code>’s <code>--package</code> option to grab your generator.  It is not fancy.</p><p>I'm happy to make this work with Yarn if possible (and would <a href="https://github.com/boneskull/create-yo">accept a PR</a> to that effect), but I don't use Yarn, so I won't be implementing it myself.</p><h2 id="links">Links</h2><p>Here’s <a href="https://github.com/boneskull/create-yo">create-yo on GitHub</a>, and its package on <a href="https://npmjs.im/create-yo">npmjs.com</a>.</p><h2 id="tl-dr">TL;DR</h2><p>Try using <code>npm init yo &lt;generator&gt;</code> instead of <code>npm install -g yo generator-&lt;generator&gt;; yo &lt;generator&gt;</code>.</p>]]></content:encoded></item><item><title><![CDATA[Upcoming Node.js Features Improve Experience for CLI App Authors]]></title><description><![CDATA[Node.js will ship a few features in the current release line which may be of interest to those writing command-line applications. 
Let's take a closer look.]]></description><link>https://boneskull.com/nodejs-features-for-cli-authors-in-v10/</link><guid isPermaLink="false">5b53a2ddb4b1760603db9f71</guid><category><![CDATA[node.js]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Wed, 29 Aug 2018 20:47:00 GMT</pubDate><media:content url="https://boneskull.com/content/images/2018/09/stone-tools.gif" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2018/09/stone-tools.gif" alt="Upcoming Node.js Features Improve  Experience for CLI App Authors"><p><a href="https://nodejs.org">Node.js</a> will ship a few features in the current release line (v10.x; see <a href="https://github.com/nodejs/release">release schedule</a>) which may be of interest to those writing command-line applications. Let's take a closer look-see.</p><h2 id="fs-readdir-optionally-outputs-file-types"><code>fs.readdir()</code> optionally outputs file types</h2><p><code>fs.readdir()</code> is simple; it outputs a list of filenames.  Of course, those filenames may represent directories, symbolic links, sockets, devices, hot dogs, etc.  </p><p>If you want to know, say, which of those filenames represent <em>directories</em>, you will have to call <code>fs.stat()</code>.  This is ultimately kind of silly, because the underlying method in <a href="https://libuv.org/">libuv</a> actually provides this information; Node.js simply discards it.</p><p>Silliness is forbidden, so <a href="https://github.com/bengl">Bryan English</a> created <a href="https://github.com/nodejs/node/pull/22020">nodejs/node#22020</a> to address this malfeasance.  <code>fs.readdir()</code>, <code>fs.readdirSync()</code>, and <code>fs.promises.readdir()</code> (<a href="https://nodejs.org/api/fs.html#fs_fs_promises_api">experimental</a>) will now accept a new option, <code>withFileTypes</code>.  
Use it like this:</p><pre><code class="language-js">// Reads contents of directory `/some/dir`, providing an `Array` of
// `fs.Dirent` objects (`entries`)
fs.readdir('/some/dir', { withFileTypes: true }, (err, entries) =&gt; {
  if (err) throw err;
  entries.filter((entry) =&gt; entry.isDirectory())
    .forEach((entry) =&gt; {
      console.log(`${entry.name} is a directory`);
    });
});

// or
const entries = await fs.promises.readdir('/some/dir', {
  withFileTypes: true
});

// or
const entries = fs.readdirSync('/some/dir', { withFileTypes: true });
</code></pre>
<p>Above, <code>entries</code> is an array of <code>fs.Dirent</code> objects, which are <em>similar</em> to <code>fs.Stats</code> objects.  These objects contain the same <em>methods</em> as <code>fs.Stats</code> objects, but <em>no other properties</em> except <code>name</code>.  You <em>cannot</em> get information about a file’s size from an <code>fs.Dirent</code> object, for example.</p><p>The <code>withFileTypes</code> feature provides a more performant and convenient API to work with if you need file type information after reading a directory.  Since it’s optional behavior, it introduces no breaking changes.</p><h2 id="fs-mkdir-optionally-creates-directories-recursively"><code>fs.mkdir()</code> optionally creates directories recursively</h2><p>This is <code>mkdir -p</code>.  Over the years of Node.js’ existence, the <a href="https://npm.im/mkdirp"><code>mkdirp</code></a> module and its ilk have courageously provided this functionality from userland.  In fact, <code>mkdirp</code> has become ubiquitous, leading some to wonder why it’s not part of the core API.  This author wondered that as well!</p><p>The short answer is that Node.js has a “small core” philosophy.  Whether you agree with that philosophy or not, we can all agree <code>mkdirp</code> is a wildly popular module which provides a common filesystem operation.  It’s proven its necessity, and that’s why Node.js merged the <a href="https://github.com/bcoe">Benjamin Coe</a>-created PR <a href="https://github.com/nodejs/node/pull/21875">nodejs/node#21875</a>.</p><p><code>fs.mkdir()</code>, <code>fs.mkdirSync()</code> and <code>fs.promises.mkdir()</code> will now support the <code>recursive</code> option.  Use it like this:</p><pre><code class="language-js">// Creates /tmp/a/apple, regardless of whether `/tmp` 
// and /tmp/a exist.
fs.mkdir('/tmp/a/apple', { recursive: true }, (err) =&gt; {
  if (err) throw err;
});

// or
await fs.promises.mkdir('/tmp/a/apple', { recursive: true });

// or
fs.mkdirSync('/tmp/a/apple', { recursive: true });
</code></pre>
<p><strong>Note:</strong> A recursive <code>fs.mkdir()</code> is <em>not</em> an atomic operation.  It’s…unlikely…to fail halfway through, <em>but it could.</em></p><p><strong>Another Note: </strong>There's an <a href="https://github.com/nodejs/node/pull/22302">open PR</a> concerning how to support feature detection here.</p><h2 id="process-allowednodeenvironmentflags-queryable-iterable-flags"><code>process.allowedNodeEnvironmentFlags</code>: Queryable &amp; Iterable Flags</h2><p><code><a href="https://nodejs.org/api/cli.html#cli_node_options_options">NODE_OPTIONS</a></code> is an environment variable supported as of Node.js v8.0.0. If present, it works just like flags passed to the <code>node</code> executable.  The flags allowed in <code>NODE_OPTIONS</code> have a unique property: they do not <em>fundamentally</em> alter the default behavior of the <code>node</code> executable.  What does that mean, exactly?</p><p>Flags which don’t fundamentally alter <code>node</code>’s behavior retain two properties:</p><ul><li>Executing <code>node --some-flag</code> will open a REPL</li><li>Executing  <code>node --some-flag file.js</code> will run <code>file.js</code></li></ul><blockquote>A handful of flags are excluded from <code>NODE_OPTIONS</code>, such as <code>--preserve-symlinks</code>, due to security concerns.</blockquote><p>This means flags like <code>--help</code>, <code>--version</code>, <code>--check</code>, etc., aren’t supported by <code>NODE_OPTIONS</code>.</p><p>Now, if you’re using <code>node</code> with flags in production, you’re <em>probably</em> using flags supported by <code>NODE_OPTIONS</code>.  And you <em>probably </em>have some tests.  Assuming you <em>do</em> have tests, if you’re using a test runner which wraps the <code>node</code> executable (such as <a href="https://mochajs.org">Mocha</a>), you will need to pass <em>those same flags</em> to the test runner’s executable.  For example:</p><pre><code class="language-bash"># production app
$ node --experimental-modules ./app.js
# test runner
$ mocha --experimental-modules &quot;test/**/*.spec.js&quot;
</code></pre>
<p>That’s a nice user experience, and it’s fine and good <em>as long as the test runner supports the flags you want to use</em>.  But it <em>also</em> means  the test runner must add support for any given flag in <code>NODE_OPTIONS</code> and pass it along to Node.js.  This is more manual labor to maintain than you’d expect; many flags are considered “temporary,” and <em>only some</em> flags support swapping any dash (<code>-</code>) for an underscore (<code>_</code>) or vice-versa.</p><p><a href="https://github.com/boneskull">This author</a> created PR <a href="https://github.com/nodejs/node/pull/19335">nodejs/node#19335</a> which adds   <code>process.allowedNodeEnvironmentFlags</code>. Using this, test runners and other CLI apps needing to wrap the <code>node</code> executable won’t need to manually track new flags as they are added to (and removed from) Node.js.</p><p>To detect whether or not a flag as provided in <code>process.argv</code> is a <code>NODE_OPTIONS</code> flag—and thus a useful one, for our purposes—we can do this:</p><pre><code class="language-js">// cli.js
const command = ['cli2']; 
process.argv.slice(2).forEach((arg) =&gt; {
  if (process.allowedNodeEnvironmentFlags.has(arg)) {
    command.unshift(arg);
  } else {
    command.push(arg);
  }
});
command.unshift(process.execPath);
// `command` looks like:
// ['node', '--node-flag', 'cli2', '--cli2-flag']
</code></pre>
<p><code>process.allowedNodeEnvironmentFlags</code> is a <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set">Set</a>-like object.  It can't be mutated—<code>add()</code>, <code>delete()</code> and <code>clear()</code> operations will silently fail—and its <code>has()</code> method will return <code>true</code> for any allowed permutation of a <code>NODE_OPTIONS</code> flag.  For example:</p><pre><code class="language-js">process.allowedNodeEnvironmentFlags.has('--stack-trace-limit') // true

process.allowedNodeEnvironmentFlags.has('--stack_trace_limit') // true

process.allowedNodeEnvironmentFlags.has('--stack_trace-limit') // true
</code></pre>
<p><code>has()</code> will also return <code>true</code> for a disallowed (but convenient) permutation: omitted leading dashes.  This means:</p><pre><code class="language-js">process.allowedNodeEnvironmentFlags.has('--experimental-modules') // true

process.allowedNodeEnvironmentFlags.has('experimental-modules') // true

// careful with your underscores!
process.allowedNodeEnvironmentFlags.has('--experimental_modules') // false
</code></pre>
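<p>As for the aforementioned silent mutation failures, a quick sketch (my own illustration) shows that <code>add()</code> and <code>clear()</code> neither throw nor take effect:</p><pre><code class="language-js">// mutation attempts fail silently on this frozen, Set-like object
process.allowedNodeEnvironmentFlags.add('--definitely-not-a-real-flag');
console.log(process.allowedNodeEnvironmentFlags.has('--definitely-not-a-real-flag')); // false

process.allowedNodeEnvironmentFlags.clear();
console.log(process.allowedNodeEnvironmentFlags.size); // still greater than zero
</code></pre>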
<p>Since it’s a <code>Set</code>-like object, you can iterate over it using <code>forEach</code> and other typical methods:</p><pre><code class="language-js">process.allowedNodeEnvironmentFlags.forEach((flag) =&gt; {
  // --enable-fips
  // --experimental-modules
  // --experimental-repl-await
  // etc.
});
</code></pre>
<p>Only the canonical format (as shown in <code>node --help</code>) of each flag will appear when iterated over; it won’t contain any duplicates.</p><p>This addition enables tooling authors to more easily wrap the <code>node</code> executable, and provides runtime insight into the nature of flags in <code>process.argv</code>.</p><h2 id="horn-tooting">Horn-Tooting</h2><p>If you’re interested in helping make Node.js a better experience for authors of command-line tools, <em>click on these links, dammit</em>:</p><ul><li>Join our <a href="https://github.com/nodejs/tooling">new tooling group</a>!</li><li>Join <a href="https://devtoolscommunity.herokuapp.com/">this Slack</a> for Node.js tooling authors!!!</li><li>Participate in a <a href="https://github.com/nodejs/user-feedback">Node.js user feedback</a> tooling group session!!!!!!!</li></ul>]]></content:encoded></item><item><title><![CDATA[How to use an ESP8266 with Johnny-Five]]></title><description><![CDATA[I wrote this short tutorial for a PDXNode “Nodebots” hackathon.  It should work on Windows, Mac, and Linux boxes. ]]></description><link>https://boneskull.com/how-to-use-an-esp8266-with-johnny-five/</link><guid isPermaLink="false">5b5f54aab4b1760603db9f74</guid><category><![CDATA[node.js]]></category><category><![CDATA[esp8266]]></category><category><![CDATA[arduino]]></category><category><![CDATA[javascript]]></category><category><![CDATA[tutorial]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Mon, 30 Jul 2018 18:31:29 GMT</pubDate><media:content url="https://boneskull.com/content/images/2018/07/nodebot-1000x1000.png" medium="image"/><content:encoded><![CDATA[<blockquote>
<img src="https://boneskull.com/content/images/2018/07/nodebot-1000x1000.png" alt="How to use an ESP8266 with Johnny-Five"><p>I wrote this short tutorial for a <a href="http://pdxnode.org">PDXNode</a> “Nodebots” hackathon.  It <em>should</em> work on Windows, Mac, and Linux boxes.  I'm reposting it here, since it may be useful.</p>
</blockquote>
<h2 id="prerequisitesoftware">Prerequisite Software</h2>
<ul>
<li>Node.js <strong>v8.x</strong></li>
<li>Arduino IDE v1.8.5</li>
<li>CP2104 driver (found linked <a href="https://learn.adafruit.com/adafruit-feather-huzzah-esp8266/using-arduino-ide">here</a>; if your dev board needs a different driver, install that instead)</li>
</ul>
<p>A toolchain to build native modules may be required.  <a href="https://www.npmjs.com/package/windows-build-tools">windows-build-tools</a> may be the quickest way to get set up on Windows.  macOS users will need to install Xcode; Linux users will need to install <code>build-essential</code>, <code>python</code>, and likely some other stuff.</p>
<blockquote>
<p>You're welcome to try a newer (or older) version of Node.js (likewise Arduino IDE), but YMMV.  The firmware can also be flashed via means <em>other</em> than Arduino IDE, if you are so inclined.</p>
</blockquote>
<blockquote>
<p><a href="https://github.com/christianmello">@christianmello</a> writes that macOS users may be able to install the drivers via <a href="http://brew.sh">Homebrew</a>:</p>
<pre><code class="language-bash">$ brew tap homebrew/cask-drivers
$ brew cask install silicon-labs-vcp-driver
</code></pre>
</blockquote>
<h2 id="addesp8266boardsupporttoarduinoide">Add ESP8266 Board Support to Arduino IDE</h2>
<ol>
<li>Launch Arduino IDE</li>
<li>Open <strong>Preferences</strong></li>
<li>Add <code>http://arduino.esp8266.com/stable/package_esp8266com_index.json</code> to the <strong>Additional Boards Manager URLs</strong> input.<br>
<em>If you have something in this field already, you can add multiple URLs by delimiting with commas.</em></li>
<li>Click <strong>OK</strong></li>
<li>Navigate to menu <strong>Tools &gt; Board… &gt; Boards Manager</strong></li>
<li>Find <strong>esp8266 by ESP8266 Community</strong> in the list.  Click <strong>Install</strong>.</li>
<li>Once this is complete, click <strong>OK</strong>.</li>
</ol>
<h2 id="flashdevboard">Flash Dev Board</h2>
<ol>
<li>
<p>Plug in dev board via USB to your computer.</p>
</li>
<li>
<p>Back in Arduino IDE, in menu <strong>Tools &gt; Board</strong>, select “Adafruit Feather Huzzah ESP8266” (or your appropriate dev board) from the list.</p>
</li>
<li>
<p>In menu <strong>Tools &gt; Port</strong>, select the proper port.</p>
<ul>
<li>On Windows, this will be <code>COMx</code>.</li>
<li>On Linux, this will likely look like <code>/dev/ttyUSBx</code>.</li>
<li>On Mac, this will likely look like <code>/dev/tty.xxxxx</code>.</li>
<li>If you only see Bluetooth-related stuff, or otherwise can't find an appropriate port, ensure your driver is working, check your USB cable, check your dev board, etc.</li>
</ul>
</li>
<li>
<p>In menu <strong>File &gt; Examples … Examples for any board &gt; Firmata</strong> choose <strong>StandardFirmataWifi</strong>.</p>
</li>
<li>
<p>Modify this sketch by uncommenting line 85: <code>#define SERIAL_DEBUG</code>.  This will allow you to view debug output in Arduino IDE's Serial Monitor.</p>
</li>
<li>
<p>There will be a tab for a file <code>wifiConfig.h</code>.  Click this tab to open <code>wifiConfig.h</code>.</p>
</li>
<li>
<p>On line 119, enter the name of your WiFi network (in double quotes), e.g., <code>char ssid[] = &quot;foobar&quot;;</code> where <code>foobar</code> is your WiFi network name.</p>
</li>
<li>
<p>If using a secured network (requiring a password), on line 151, enter the WiFi network password in double quotes, e.g. <code>char wpa_passphrase[] = &quot;foobar-password&quot;;</code>.  <em>Otherwise</em>, uncomment line 183 for an unsecured network.</p>
</li>
<li>
<p>Click the <strong>Upload</strong> button (icon is a “right arrow”) in the toolbar.  This should flash your board with the firmware.</p>
</li>
<li>
<p>To confirm things are working, click the <strong>Serial Monitor</strong> button in the toolbar (icon is a magnifying glass).</p>
</li>
<li>
<p>Change the baud rate to 9600.</p>
</li>
<li>
<p>Assert that you see something like:</p>
<pre><code class="language-plain">connected with SON OF ZOLTAR, channel 11
dhcp client start...
ip:10.0.0.49,mask:255.255.255.0,gw:10.0.0.1
IP will be requested from DHCP ...
Attempting to connect to WPA SSID: SON OF ZOLTAR
WiFi setup done
scandone
.SSID: SON OF ZOLTAR
IP Address: 10.0.0.49
signal strength (RSSI): -39 dBm
</code></pre>
<p>…where <code>SON OF ZOLTAR</code> is your WiFi network’s name.</p>
<p><strong>Note and/or copy the IP address.  You'll need it later.  This is important!</strong></p>
</li>
<li>
<p>Close the serial monitor window.</p>
</li>
</ol>
<h2 id="installjohnnyfivenodejsmodules">Install Johnny-Five &amp; Node.js Modules</h2>
<ol>
<li>Create a new directory.</li>
<li>Run <code>npm init -y</code> to generate an empty <code>package.json</code>.</li>
<li>Execute <code>npm install johnny-five etherport-client</code>.</li>
<li>Create <code>blink.js</code>:</li>
</ol>
<pre><code class="language-js">'use strict';

const {
  EtherPortClient
} = require('etherport-client');
const five = require('johnny-five');
const board = new five.Board({
  port: new EtherPortClient({
    host: '10.0.0.49',
    port: 3030
  }),
  repl: false
});

const LED_PIN = 2;

board.on('ready', () =&gt; {
  board.pinMode(LED_PIN, five.Pin.OUTPUT);
  // the Led class was acting hinky, so just using Pin here
  const pin = five.Pin(LED_PIN);
  let value = 0;
  setInterval(() =&gt; {
    if (value) {
      pin.high();
      value = 0;
    } else {
      pin.low();
      value = 1;
    }
  }, 500);
});
</code></pre>
<p>Replace <code>10.0.0.49</code> with the IP address you noted from the serial monitor.</p>
<p>You <em>may</em> need to change <code>LED_PIN</code> to <code>13</code>.  I forget what the number is for the builtin LED on Huzzah (this guide was tested on a <a href="http://wemos.cc">Wemos D1 Mini</a>, which is functionally equivalent in most ways which matter).</p>
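<p>A small tweak (my own suggestion, not part of the original hackathon doc): read the pin number from an environment variable, so you can try <code>2</code> or <code>13</code> without editing the file:</p><pre><code class="language-js">// e.g. run as: LED_PIN=13 node blink.js -- falls back to pin 2 when unset
const LED_PIN = parseInt(process.env.LED_PIN, 10) || 2;
</code></pre>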
<ol start="5">
<li>Save this file, and execute it.  You should see something like this:</li>
</ol>
<pre><code class="language-bash">$ node blink.js
1532643089553 SerialPort Connecting to host:port: 10.0.0.49:3030
1532643089556 Connected Connecting to host:port: 10.0.0.49:3030
</code></pre>
<p>You should also see the onboard LED blink, toggling on and off every half-second.  When satisfied, hit <code>Ctrl-C</code> to quit.</p>
<p>Any corrections or improvements appreciated!</p>
]]></content:encoded></item><item><title><![CDATA[Optimizing Mocha's Builds with Travis CI's Build Stages]]></title><description><![CDATA[Mocha, the JavaScript testing framework, has been a happy user of Travis CI for over six years.  This last week, Mocha’s team modified its build matrix to leverage Build Stages. I'll share what I've learned.
]]></description><link>https://boneskull.com/mocha-and-travis-ci-build-stages/</link><guid isPermaLink="false">5ac516b1b4b1760603db9f51</guid><category><![CDATA[mocha]]></category><category><![CDATA[travis-ci]]></category><category><![CDATA[ci]]></category><category><![CDATA[tutorial]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Mon, 09 Apr 2018 14:55:00 GMT</pubDate><media:content url="https://boneskull.com/content/images/2018/04/travis-mocha-3.png" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2018/04/travis-mocha-3.png" alt="Optimizing Mocha's Builds with Travis CI's Build Stages"><p>A <a href="https://blog.travis-ci.com/2017-05-11-introducing-build-stages">little less than a year ago</a>, <a href="https://travis-ci.com">Travis CI</a> introduced a beta feature, <a href="https://docs.travis-ci.com/user/build-stages/">Build Stages</a>.<br>
Could it be that maintainers in my world (that world being “userland tooling and libraries for Node.js”) don’t even know Build Stages are a Thing?  Despite the overwhelming popularity of Travis CI amongst OSS Node.js projects, I haven’t seen a lot of adoption.</p>
<p><a href="https://mochajs.org">Mocha</a>, the JavaScript testing framework, has been a happy user of Travis CI <em>for over six years</em>.  This last week, Mocha’s team modified its build matrix to leverage Build Stages (thanks <a href="https://twitter.com/Outsideris">@Outsideris</a>!).  I'll share what I've learned.</p>
<p><strong>I have written this article (which gets kind of dry) for users of Travis CI or those otherwise experienced in continuous integration software.</strong></p>
<h2 id="whybuildstages">Why Build Stages?</h2>
<p>If a project’s build consists of running <code>npm test</code> against a handful of Node.js versions, that project <em>probably</em> doesn’t need Build Stages. While Mocha’s build isn’t as complex as some—like those projects which need to compile or deploy—its build is non-trivial.</p>
<p>Foremost and first, Build Stages allow a build to “fail fast.”  Before Build Stages, <em>all</em> jobs in the matrix would run concurrently (with an optional concurrency limit).  A project’s build could not run, for example, an initial <a href="https://en.wikipedia.org/wiki/Smoke_testing_%28software%29">smoke test</a> to determine if it’s practical to run a more expensive test suite.  Build Stages eliminate extra work.</p>
<p>Build Stages can enable better dependency caching.  As mentioned, without Build Stages, all jobs would run concurrently, and each <a href="https://docs.travis-ci.com/user/caching">cache-compatible</a> configuration would “miss” the cache on its first try.  For Node.js projects, this means <em>repeated</em> installation of dependencies via <code>npm install</code>, and <em>ain’t nobody got time for that</em>.  By “warming up” Travis CI’s cache in preparation, we can prepare <code>npm</code> to work quickly in subsequent cache-compatible Build Stages.</p>
<p>By leveraging Build Stages—and <code>npm</code>’s new features—Mocha can run <strong>more tests in less time</strong>.  Join me in the <em>Bonesea Skullenger</em>, and we’ll dive deep.</p>
<p><img src="https://boneskull.com/content/images/2018/04/sea-cucumber.jpg" alt="Optimizing Mocha's Builds with Travis CI's Build Stages"> <small>Swimming elasipod sea cucumber.  Photo from <a href="http://www.photolib.noaa.gov/nurp/index.html">Vailulu'u 2005 Exploration, NOAA-OE</a> / <a href="https://www.noaa.gov">NOAA</a></small></p>
<h2 id="aboutmochasoldbuild">About Mocha’s Old Build</h2>
<p>Before the changes, this is what Mocha’s <code>.travis.yml</code> looked like (with irrelevant sections removed):</p>
<pre><code class="language-yaml">language: node_js

matrix:
  fast_finish: true
  include:
    - node_js: '9'
      env: TARGET=test.node COVERAGE=true
    - node_js: '8'
      env: TARGET=test.node
    - node_js: '6'
      env: TARGET=test.node
    - node_js: '4'
      env: TARGET=test.node
    - node_js: '8'
      env: TARGET=lint
    - node_js: '8'
      env: TARGET=test.browser

before_install: scripts/travis-before-install.sh
before_script: scripts/travis-before-script.sh
script: npm start $TARGET
after_success: npm start coveralls

addons:
  artifacts:
    paths:
      - .karma/
      - ./mocha.js
  sauce_connect: true
  chrome: stable
cache:
  directories:
    - ~/.npm
</code></pre>
<p>Much pain!</p>
<ol>
<li><code>fast_finish</code> does nothing unless you have jobs within an <a href="https://docs.travis-ci.com/user/customizing-the-build#Rows-that-are-Allowed-to-Fail"><code>allowed_failures</code></a> mapping; Mocha does not.</li>
<li>We can optimize caching and installation of dependencies:
<ol>
<li>Each job has its own cache due to the use of environment variables, but (most) every job sharing a Node.js version should share a cache.</li>
<li><code>node_modules</code> isn’t cached in the cases where it should be.</li>
<li><code>npm ci</code> <a href="http://blog.npmjs.org/post/171556855892/introducing-npm-ci-for-faster-more-reliable">is now a thing</a></li>
<li>Mocha’s dev dependencies include a fair number of native modules, which we <em>don’t always need</em></li>
</ol>
</li>
<li>Browser tests create artifacts (essentially bundles created by <a href="https://npm.im/karma-browserify">karma-browserify</a> and the bundled <code>mocha.js</code>—we use these for debugging esoteric failures on <a href="https://saucelabs.com">SauceLabs</a>’ browsers) so the upload paths contain a steaming pile of nothing after most jobs</li>
<li>Likewise, we’re starting <a href="https://docs.travis-ci.com/user/sauce-connect">Sauce Connect</a> and installing headless Chrome for jobs that won’t use it</li>
<li>One (1) job generates any coverage at all, yet <em>every</em> job attempts to send a report to <a href="https://coveralls.io">Coveralls</a></li>
<li>The <a href="https://github.com/mochajs/mocha/blob/1701335be94ed6caf3a9ad644bd694a5e8fc4bd0/scripts/travis-before-install.sh"><code>before_install</code> script</a> runs some smoke tests, but duplicates effort by running three (3) times in Node.js 8</li>
<li>The <a href="https://github.com/mochajs/mocha/blob/1701335be94ed6caf3a9ad644bd694a5e8fc4bd0/scripts/travis-before-script.sh"><code>before_script</code> script</a> creates <code>.karma/</code>, but Travis CI’s cache creates it first.</li>
</ol>
<p>Many of these issues are due to my own ignorance, as <code>git blame</code> will tell you.  <em>Those</em> problems can be solved by actually reading <a href="https://docs.travis-ci.com/">Travis CI’s documentation</a> like I should have; we can solve the rest with Build Stages.</p>
<h2 id="mochasnewbuildusingbuildstages">Mocha’s New Build using Build Stages</h2>
<p>I’ll analyze the new <code>.travis.yml</code> in parts for better context (you can <a href="https://github.com/mochajs/mocha/blob/master/.travis.yml">see it in its entirety</a>, if you wish).</p>
<h3 id="definingthebuildstageorder">Defining the Build Stage Order</h3>
<p>It’s optional, but a project will <em>usually</em> want to run stages <em>in order</em> (to enable fast failure, and tasks like <a href="https://docs.travis-ci.com/user/conditional-builds-stages-jobs/">conditional deploys</a>).</p>
<pre><code class="language-yaml">stages:
  - smoke # this ensures a &quot;user&quot; install works properly
  - precache # warm up cache for default Node.js version
  - lint # lint code and docs
  - test # all tests
</code></pre>
<p>In the <code>smoke</code> stage, Mocha runs its smoke (or “sanity”) tests. We want to do this stage <em>first</em>, because if it fails, somebody screwed up big. Doing further work would be a waste.</p>
<p>The <code>precache</code> stage, then, installs dependencies which multiple jobs in subsequent Build Stages will reuse.  The single job in the <code>lint</code> stage will hit this cache, as well as two other jobs in the <code>test</code> stage.</p>
<h3 id="defaultjobconfiguration">Default Job Configuration</h3>
<p>At the top level of <code>.travis.yml</code>, we can establish some defaults.  Jobs in Build Stages will use these <em>unless</em> they override the options.</p>
<pre><code class="language-yaml">language: node_js
node_js: '9'
</code></pre>
<p>Jobs won’t change the project’s language, which is <code>node_js</code>.</p>
<p>Mocha runs its complete test suite against <em>all</em> maintained LTS versions of Node.js, in addition to the “current” release.  At the time of this writing, those versions are 4.x, 6.x, 8.x and 9.x—these are the <em>only</em> versions of Node.js which Mocha supports!  The Build Stages contain jobs which don’t depend on any specific version (lint checks and browser tests), so we may as well use the latest—and often, fastest—Node.js version.</p>
<p>Warning: rabbit hole ahead.</p>
<p><img src="https://boneskull.com/content/images/2018/04/5581645097_5c8d823705_z.jpg" alt="Optimizing Mocha's Builds with Travis CI's Build Stages"> <small>Photo by <a href="https://flickr.com/photos/tatiana-gettelman">Tatiana Gettelman</a> / <a href="https://flickr.com">Flickr</a></small></p>
<h4 id="efficientinstallationofnpmv580">Efficient Installation of <code>npm</code> v5.8.0</h4>
<p>At the time of this writing, the default version of <code>npm</code> that ships with Node.js 8.x and 9.x is v5.6.0.  <code>npm ci</code> wasn’t available before v5.8.0, which is the latest version.</p>
<p>Travis CI manages its Node.js versions with <a href="https://github.com/creationix/nvm">nvm</a>.  Travis CI <em>also</em> runs <code>nvm install &lt;version&gt;</code> <em>before</em> it reaches into its cache.  That means, to use v5.8.0 of <code>npm</code>, a build would naively do this:</p>
<pre><code class="language-yaml">before_install: npm install -g npm@5.8.0
</code></pre>
<p>That’ll run for <em>every job</em>, which is slow.  A build could attempt to cache some subset of <code>~/.nvm</code> itself (where <code>nvm</code> keeps its installed Node.js versions and globally-installed packages, including <code>npm</code>), but that’s going to contain the <code>node</code> executable and other cruft.  <em>Anyway</em>, trying to cache whatever <code>npm install -g npm</code> installed is a dead-end, at the time of this writing.  But there’s another way.</p>
<blockquote>
<p>If Travis CI’s caching worked with individual files—or supported exclusion—this solution would be more viable.  Its granularity stops at the directory level.</p>
</blockquote>
<p>Here’s what we do (and these are <em>job defaults</em>, remember):</p>
<pre><code class="language-yaml">before_install: |
  [[ ! -x ~/npm/node_modules/.bin/npm ]] &amp;&amp; {
    cd ~/npm &amp;&amp; npm install npm
    cd -
  } || true
env: PATH=~/npm/node_modules/.bin:$PATH
cache:
  directories:
    - ~/.npm # cache npm's cache
    - ~/npm # cache latest npm
</code></pre>
<ol>
<li>Our <code>before_install</code> Bash script checks for the executability (executableness?) of a file, <code>~/npm/node_modules/.bin/npm</code>.  Note that <code>~/.npm</code> <em>is not</em> <code>~/npm</code>.</li>
<li>If <code>~/npm/node_modules/.bin/npm</code> is not executable, we assume this was a <em>cache miss</em>.  Navigate into <code>~/npm</code> and use <code>npm</code> to install the latest version of <code>npm</code> local to this directory.  After this step, <code>~/npm</code> will contain <code>node_modules</code> and <code>package-lock.json</code>.</li>
<li>We must navigate <em>back</em> to our working copy after navigating away, which is what <code>cd -</code> does.</li>
<li>If a job hits the cache, our <code>npm</code> executable is ready, and the <code>script</code> ends with great success (<code>true</code>).</li>
<li>We set the <code>PATH</code> to look in <code>~/npm/node_modules/.bin/</code> before anything else, so it finds our custom <code>npm</code> executable instead of the one <code>nvm</code> installed.</li>
<li>We cache <code>~/.npm</code>, which is <code>npm</code>’s cache.  Right?  Right.</li>
<li>We cache <code>~/npm</code>, which contains our custom-installed <code>npm</code>.</li>
</ol>
<p>The point of this madness is to avoid an <code>npm</code> self-upgrade on every job.  Not pretty, but it works.</p>
<h4 id="usingnpmci">Using <code>npm ci</code></h4>
<p>By keeping the installation separate from the working copy, we avoid extraneous dependencies.  Why is this important?  Because:</p>
<pre><code class="language-yaml">install: npm ci --ignore-scripts
</code></pre>
<p>Armed with <code>npm</code> v5.8.0, we can use <code>npm ci</code> instead of <code>npm install</code> (in most cases; I’d wager the majority of projects looking to use <code>npm ci</code> should <em>always</em> use it).  One reason <code>npm ci</code> offers better consistency is that it blasts the local <code>node_modules</code> and re-creates it from scratch.  That means we can’t throw anything in there (<code>npm</code> is not a direct dependency of Mocha) that doesn’t belong.</p>
<p>Mocha uses the <code>--ignore-scripts</code> flag in most of its jobs.  <code>npm</code>’s lifecycle scripts invoke native module compilation; Mocha consumes native modules when building docs or running browser tests.  We don’t test the doc-building scripts themselves, so that leaves a <em>single</em> case where Mocha needs a native module to run a test suite.</p>
<p>This situation isn’t unique to Mocha, but neither is it ubiquitous.  <em>Any</em> given dependency of a project may use an <code>install</code>, <code>postinstall</code>, or an infamous <code>prepublish</code> script.  Because most of Mocha’s dependencies <em>don’t</em> do this (thanks, dependencies!), we can get away with it.</p>
<blockquote>
<p>I don’t know of any way to tell <code>npm</code> to <em>only</em> avoid compilation of native modules; it’s either “ignore all scripts” or “run all scripts.”</p>
</blockquote>
<p>After we have established the <em>default</em> behavior, we can define our jobs.  Let’s analyze each stage.</p>
<h3 id="buildstage1smoke">Build Stage 1: Smoke</h3>
<p>As noted above, our stages run in order (but the jobs within them run concurrently), and the <code>smoke</code> stage is the first.  Here’s its definition:</p>
<pre><code class="language-yaml">- &amp;smoke
  stage: smoke
  install: npm install --production --no-shrinkwrap
  script: &gt;
    ./bin/mocha --opts /dev/null --reporter spec 
    test/sanity/sanity.spec.js
  cache:
    directories:
      - ~/.npm
      - node_modules

- &lt;&lt;: *smoke
  node_js: '8'

- &lt;&lt;: *smoke
  node_js: '6'

- &lt;&lt;: *smoke
  node_js: '4'
</code></pre>
<p>If you’re unfamiliar with anchors and aliases in YAML (like I was), well… there it is.</p>
<h4 id="anasideyamlanchorsaliases">An Aside: YAML Anchors &amp; Aliases</h4>
<p>It defines an anchor, <code>&amp;smoke</code>.  We can then refer to this anchor using an alias, <code>*smoke</code>.  The <code>&lt;&lt;:</code> syntax means the mapping will be <em>merged</em> with the mapping from the anchor.  JavaScripters could think of it like this:</p>
<pre><code class="language-js">const baseSmoke = {
	stage: 'smoke',
	install: 'npm install --production --no-shrinkwrap'
	// etc
};

const smokeStage = [
	baseSmoke, 
  Object.assign({}, baseSmoke, {node_js: '8'}),
  Object.assign({}, baseSmoke, {node_js: '6'}),
  Object.assign({}, baseSmoke, {node_js: '4'})
];
</code></pre>
<p>If we serialize <code>smokeStage</code> into JSON, this is the result (I apologize for the verbosity, but this is why a hypothetical <code>.travis.json</code> would suck):</p>
<pre><code class="language-json">[
  {
    &quot;stage&quot;: &quot;smoke&quot;,
    &quot;install&quot;: &quot;npm install --production --no-shrinkwrap&quot;
  },
  {
    &quot;stage&quot;: &quot;smoke&quot;,
    &quot;install&quot;: &quot;npm install --production --no-shrinkwrap&quot;,
    &quot;node_js&quot;: &quot;8&quot;
  },
  {
    &quot;stage&quot;: &quot;smoke&quot;,
    &quot;install&quot;: &quot;npm install --production --no-shrinkwrap&quot;,
    &quot;node_js&quot;: &quot;6&quot;
  },
  {
    &quot;stage&quot;: &quot;smoke&quot;,
    &quot;install&quot;: &quot;npm install --production --no-shrinkwrap&quot;,
    &quot;node_js&quot;: &quot;4&quot;
  }
]
</code></pre>
<p>As you can see, we have our <a href="#default-job-configuration">top-level defaults</a>, but we can <em>also</em> define defaults within individual stages via YAML voodoo.</p>
<h4 id="whysmoketestanyway">Why Smoke Test, Anyway?</h4>
<p><img src="https://images.unsplash.com/photo-1507680465142-ef2223e23308?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=b6fd5156371ae60a72037b27d1d34a12" alt="Optimizing Mocha's Builds with Travis CI's Build Stages"><br>
<small>Photo by <a href="https://unsplash.com/@marcus_kauffman?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Marcus Kauffman</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<p>The goal is to <em>establish a baseline</em> of functionality in a <em>minimal amount of time</em>.  This baseline will be different for every project!  In Mocha’s case, we want to minimize the likelihood a <code>npm install mocha</code> somehow misses a dependency.</p>
<blockquote>
<p>If you’re wondering, <strong>yes</strong>, this has happened—a dependency was living in <code>package.json</code>’s <code>devDependencies</code> when it should have been in <code>dependencies</code>.  Raise your hand if you’ve done that before.</p>
</blockquote>
<p>Since we can’t <code>npm install mocha</code>, the next best thing is running <code>npm install --production --no-shrinkwrap</code> in the working copy.  This mimics the result a user would get; we don’t install Mocha’s development dependencies, <em>and</em> we ignore <code>package-lock.json</code> (<a href="https://docs.npmjs.com/files/package-lock.json">read more</a> about <code>package-lock.json</code>; <code>npm</code> never publishes it to the registry).</p>
<blockquote>
<p>Another way to do this could be to <code>npm install</code> the current changeset directly from GitHub, but we already have the working copy cloned.  And as of <code>npm</code> v5.x,  running <code>npm install /path/to/working/copy</code> results in a symlink, so that’s not workable.</p>
</blockquote>
<p>Once <code>npm</code> has installed Mocha’s production dependencies, the <code>bin/mocha</code> executable should run a simple test like:</p>
<pre><code class="language-js">describe('a production installation of Mocha', function () {
  it('should be able to execute a test', function () {
    assert.ok(true);
  });
}); 
</code></pre>
<blockquote>
<p>Because Mocha’s own <code>mocha.opts</code> contains references to development dependencies, we must ignore it; the flag <code>--opts /dev/null</code> is a hacky workaround which effectively disables <code>mocha.opts</code>.  Normally, I’d put this kind of nonsense in <code>package-scripts.js</code> and let <code>nps</code> run it, but we don’t have <code>nps</code> at our disposal here.  Though <code>npx</code> could…</p>
</blockquote>
<p>If that simple test runs OK—for each supported version of Node.js—Mocha has passed its smoke tests, and we can move on to the <code>precache</code> stage.</p>
<h3 id="buildstage2precache">Build Stage 2: Pre-cache</h3>
<p>What is it?  It is this:</p>
<pre><code class="language-yaml">- stage: precache
  script: true
</code></pre>
<p>We run everything in the default configuration <em>except</em> an actual build script; <code>true</code> is POSIX shell for “it worked.”  That’s enough to create a “warmed-up” cache of our development dependencies for Node.js v9.x, <em>as well as</em> a cache of the latest <code>npm</code> version.</p>
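<p>In other words (a tiny illustration of mine, not from the original config): <code>true</code> simply exits with status <code>0</code>, which the CI interprets as success:</p><pre><code class="language-js">// On POSIX systems, the 'true' utility does nothing and exits 0
const { spawnSync } = require('child_process');
console.log(spawnSync('true').status); // 0 on POSIX systems
</code></pre>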
<h3 id="buildstage3lint">Build Stage 3: Lint</h3>
<p>The <code>lint</code> stage is the first to hit our pre-cached dependencies.  The latest <code>npm</code> is already present, and all the dependencies live in <code>npm</code>’s cache.  We get an even faster install (<code>npm ci</code>, if you recall) since we can ignore scripts.</p>
<p>Since running linter(s) is far less time-consuming than running tests, we may as well run these before Real Tests.</p>
<p>Like <code>precache</code>, this Build Stage has one job:</p>
<pre><code class="language-yaml">- stage: lint
  script: npm start lint
</code></pre>
<p><code>npm start lint</code> kicks off our linters (<a href="https://eslint.org">ESLint</a> and <a href="https://www.npmjs.com/package/markdownlint-cli">markdownlint-cli</a>).</p>
<blockquote>
<p>Some months ago, Mocha dropped the <code>Makefile</code> we were using for <a href="https://kentcdodds.com/">Kent C. Dodds</a>’ wonderful <a href="https://npm.im/nps">nps</a>; <code>npm start</code> calls <code>nps</code>.  Formerly known as <code>p-s</code>, <code>nps</code> is an elegant task runner (no plugins necessary, unlike <a href="http://gruntjs.com">Grunt</a> or <a href="http://gulpjs.com">Gulp</a>). I highly recommend it if the <code>scripts</code> in your <code>package.json</code> have become unwieldy.</p>
</blockquote>
<p>Like with smoke tests, if the lint checks fail, then we abort the build.</p>
<h3 id="buildstage4test">Build Stage 4: Test</h3>
<p>Mocha runs its main test suites concurrently in the fourth stage.</p>
<blockquote>
<p>You’ll notice there’s no <code>stage</code> mapping in any of the items below.  If you look at the <a href="https://github.com/mochajs/mocha/blob/master/.travis.yml">entire file</a>, you’ll see that these are the first items in the <code>jobs.include</code> mapping; we must also keep them together.  Jobs lacking a <code>stage</code> will use the same <code>stage</code> as the previous job in the list; if there is no previous job, the default <code>stage</code> is <code>test</code>.  Ahh, the wonders of  convention…</p>
</blockquote>
<p>Like jobs in previous stages, these inherit from the default job configuration.</p>
<h4 id="nodejstests">Node.js Tests</h4>
<p>The first is our test against the default Node.js version (9.x), which also computes coverage:</p>
<pre><code class="language-yaml">- script: COVERAGE=1 npm start test.node
  after_success: npm start coveralls
</code></pre>
<p>You’ll note that the environment variable <code>COVERAGE</code> is <em>not</em> defined in an <code>env</code> mapping.  This is because variables defined in <code>env</code> create a unique cache configuration—it’d bust the pre-cache and we’d miss everything in it!  This environment variable causes our <code>test.node</code> script (found in <a href="https://github.com/mochajs/mocha/blob/master/package-scripts.js">package-scripts.js</a>) to invoke <code>mocha</code> via <a href="https://npm.im/nyc">nyc</a>.</p>
<p>The unique <code>after_success</code> script will fire coverage information generated by <code>nyc</code> to <a href="https://coveralls.io">Coveralls</a> using <a href="https://www.npmjs.com/package/coveralls">node-coveralls</a>.  If any tests fail, <code>after_success</code> does not run, as you might guess.</p>
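<p>Mocha’s actual <code>package-scripts.js</code> is more involved, but the mechanism can be sketched like this (the script strings below are illustrative stand-ins, not Mocha’s real ones):</p>
<pre><code class="language-js">// Hypothetical sketch: how a package-scripts.js might switch the
// test.node script between plain mocha and nyc-wrapped mocha.
function testNodeScript(env) {
  const mocha = 'mocha --reporter spec test/unit/*.spec.js';
  // When COVERAGE is set, wrap mocha with nyc so coverage data is
  // written out for the coveralls script to consume.
  return env.COVERAGE ? `nyc ${mocha}` : mocha;
}

// The after_success script would then pipe an lcov report to Coveralls:
const coverallsScript = 'nyc report --reporter=text-lcov | coveralls';

console.log(testNodeScript(process.env));
</code></pre>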
<p>These next three (3) jobs are identical, except the Node.js version.  These all <em>miss the cache</em>, because we pre-cache Node.js 9.x only (and we don’t use these configurations again).  Since we’re already computing coverage in the previous job, we omit the <code>COVERAGE</code> variable.</p>
<pre><code class="language-yaml">- &amp;node
  script: npm start test.node
  node_js: '8'

- &lt;&lt;: *node
  node_js: '6'

- &lt;&lt;: *node
  node_js: '4'
</code></pre>
<p>The fascinating and frightening YAML anchors (and associated aliases) also appear above; these entries expand exactly as explained earlier.</p>
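<p>For the curious: once the YAML is parsed, the merge key (<code>&lt;&lt;</code>) copies the anchored mapping and any later keys override it, so the three jobs above expand to:</p>
<pre><code class="language-yaml">- script: npm start test.node
  node_js: '8'

- script: npm start test.node
  node_js: '6'

- script: npm start test.node
  node_js: '4'
</code></pre>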
<p>What’s left?</p>
<h4 id="browsertests">Browser Tests</h4>
<pre><code class="language-yaml">- script: npm start test.bundle test.browser
  install: npm ci
  addons:
    artifacts:
      paths:
        - .karma/
        - ./mocha.js
    chrome: stable
    sauce_connect: true
</code></pre>
<p>Note the custom <code>install</code> script; every <em>other</em> <code>install</code> script uses <code>--ignore-scripts</code>.  This one doesn’t, because it needs some compiled modules to bootstrap headless Chrome.  The <code>chrome</code> addon provides a headless Chrome executable to the job.</p>
<p>This is actually <em>two</em> suites; <code>test.bundle</code> is a test launched via the Node.js <code>mocha</code> executable which ensures the bundle we build (via <a href="https://npm.im/browserify">browserify</a>) retains compatibility with <a href="http://www.requirejs.org">RequireJS</a>.  Then, <a href="https://npm.im/karma">Karma</a> handles the  <code>test.browser</code> suite.</p>
<blockquote>
<p>We abuse <code>NODE_PATH</code> to trick <a href="https://npm.im/karma-mocha">karma-mocha</a> into running our tests with our own <code>mocha.js</code> bundle.  Don't do this.</p>
</blockquote>
<p>When the browser tests run in a local development environment, they run in headless Chrome by default.  On Travis CI, we add a handful of “real” browsers, by the grace of <a href="https://saucelabs.com">SauceLabs</a>.</p>
<p>I recommend using <code>sauce_connect</code> instead of giving the wheel to <a href="https://npm.im/karma-sauce-launcher">karma-sauce-launcher</a>; I’ve found Travis CI’s addon considerably more reliable.</p>
<blockquote>
<p>You might wonder why we are using Karma instead of WebDriver.</p>
<p>The answer: these are mainly unit tests.  Mocha’s HTML reporter—which is what you get when you run Mocha in a browser—doesn’t have much of a UI to run functional tests against.  We’re not checking DOM nodes for attributes, so we don’t script a browser.  Though it couldn’t hurt!</p>
</blockquote>
<p>What are the artifacts for?  While SauceLabs provides tooling to debug a test manually, sometimes all you need is the bundle and whatever Karma was running to make sense of a stack trace (these files are manually dumped into <code>.karma/</code> by hooking into <a href="https://npm.im/karma-browserify">karma-browserify</a>).</p>
<p>The files get tossed into a public Amazon S3 bucket, though Travis CI does its best to redact the URLs.  I should mention: ever since Mocha dropped IE8 support, we haven’t had any failures so weird we needed to look at them.  Funny about that.</p>
<h2 id="theaftermath">The Aftermath</h2>
<p>Can a <em>good</em> thing even <em>have</em> an aftermath?</p>
<p>I haven’t crunched the numbers—these changes are super new—but it’s obvious that our builds now <strong>do more</strong> in <strong>less time</strong>.  Typically, the first push to a branch (or PR) will be the slowest to build, and caching will kick in for the next pushes.  We’ll see the greatest performance gain on <em>failed</em> builds.</p>
<blockquote>
<p>So please send broken PRs to Mocha, so I can pad my numbers.</p>
<p>I don't think I mean that.</p>
</blockquote>
<p>After we’ve used the configuration for three to four weeks, I’ll gather up some data and update this post with an addendum which will <em>delight</em> the reader with fancy charts and graphs and crap like that.</p>
<p>I’ll shut up, so you can start hacking at your <code>.travis.yml</code>.  You’re welcome.</p>
]]></content:encoded></item><item><title><![CDATA[Use Rollup to Bundle JavaScript Actions for Apache OpenWhisk]]></title><description><![CDATA[As serverless functions grow beyond the trivial, we can use Rollup to bundle JS for deployment on OpenWhisk.
]]></description><link>https://boneskull.com/rollup-for-javascript-actions-on-openwhisk/</link><guid isPermaLink="false">5ab96bdab4b1760603db9f43</guid><category><![CDATA[openwhisk]]></category><category><![CDATA[rollup]]></category><category><![CDATA[node.js]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Wed, 28 Mar 2018 20:13:36 GMT</pubDate><media:content url="https://images.unsplash.com/15/swirl.JPG?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=c37b98eb557982954ccd4c6f1b4995c4" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/15/swirl.JPG?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ&s=c37b98eb557982954ccd4c6f1b4995c4" alt="Use Rollup to Bundle JavaScript Actions for Apache OpenWhisk"><p>Like other “serverless” platforms, an <a href="https://openwhisk.apache.org/">OpenWhisk</a> JavaScript Action may be a single <code>.js</code> file. As functions grow beyond trivial—and begin to depend on third-party modules—that single, forlorn <code>.js</code> file can no longer shoulder the burden.</p>
<p>The <a href="https://github.com/apache/incubator-openwhisk/blob/master/docs/actions.md#packaging-an-action-as-a-nodejs-module">OpenWhisk Documentation</a> suggests simply throwing an entire project—<code>node_modules</code> and all—into a <code>.zip</code> file.  Of course, that’s wasteful and silly—especially if most of <code>node_modules</code> is full of <em>development</em> dependencies.</p>
<p>Now, I’m not one to <em>read documentation</em> unless I get stuck, so I didn’t realize that the docs present <a href="https://github.com/apache/incubator-openwhisk/blob/master/docs/actions.md#package-an-action-as-a-single-bundle">an alternative</a>: bundle an Action with <a href="https://webpack.js.org">webpack</a>.  Faced with the unpalatable task of zipping up my entire project dir, I knew I wanted to bundle, but I didn’t reach for webpack—I used <a href="https://rollupjs.org">Rollup</a>.</p>
<h2 id="whyrollup">Why Rollup?</h2>
<p><a href="https://medium.com/webpack/webpack-and-rollup-the-same-but-different-a41ad427058c">In this article</a>, Rich Harris (the author of Rollup) writes that webpack’s scope is to help bundle single-page applications (SPAs).  Rollup, on the other hand:</p>
<blockquote>
<p>“Rollup was created for a different reason: to build flat distributables of JavaScript libraries as efficiently as possible, taking advantage of the ingenious design of ES2015 modules.”</p>
<p>—Rich Harris</p>
</blockquote>
<p>The key for us in the above quote is “flat distributable.”  We want to upload our Action as a single <code>.js</code> file.  That is precisely what Rollup provides—with little extra ornamentation.</p>
<p>Rollup doesn’t do stuff like <a href="https://webpack.js.org/guides/code-splitting/">code splitting</a> nor <a href="https://webpack.js.org/concepts/hot-module-replacement/">hot module replacement</a>.  We don’t need these features <em>anyway</em>, since we’re not bundling SPAs.  Heck, we don’t even need ES modules all the way down (though we won’t get all of the benefits of Rollup’s <a href="https://en.wikipedia.org/wiki/Tree_shaking">tree-shaking</a> abilities)—its plugin ecosystem has us covered.</p>
<p>Read on for an example configuration.</p>
<h2 id="anexampleactionusingrollup">An Example Action Using Rollup</h2>
<p>Here’s the lovely example action which the OpenWhisk docs provide.  It’s intended to be uploaded within a <code>.zip</code> file <em>which also includes <code>node_modules</code> and everything else</em> in its project folder:</p>
<pre><code class="language-js">function myAction(args) {
  const leftPad = require(&quot;left-pad&quot;)
  const lines = args.lines || [];
  return { padded: lines.map(l =&gt; leftPad(l, 30, &quot;.&quot;)) }
}

exports.main = myAction;
</code></pre>
<p>As with OpenWhisk's webpack example, the above needs slight modification to work with Rollup.  Let’s write a naïve conversion.  Create <code>index.js</code>:</p>
<pre><code class="language-js">import leftPad from 'left-pad';

function myAction(args) {
  const lines = args.lines || [];
  return { padded: lines.map(l =&gt; leftPad(l, 30, &quot;.&quot;)) }
}

export const main = myAction;
</code></pre>
<p>This may seem a little weird, but note that OpenWhisk executes our <code>.js</code> file as if it were at the top level (no, I’m not sure why).  The webpack example in OpenWhisk’s docs makes this explicit by assigning the function to <code>global.main</code>; <code>const main = myAction</code> is equivalent.</p>
<p><em>However</em>, since Rollup aggressively tree-shakes, casually assigning <code>myAction</code> to an unused variable is <em>verboten</em>; <code>myAction</code> would be trashed.  This is also why we can’t just write <code>export {myAction as main}</code>: it doesn’t create a top-level <code>main</code> variable in the CommonJS output Rollup generates.  To address this, just <code>export</code> <code>main</code>; we can use it later when we write our tests!</p>
<p>Even though <em>our dependencies</em> don’t need to use ES modules, <em>our sources do</em>, so we <code>import</code> <code>left-pad</code> at the top.  Then, <code>myAction</code> will become the default export.  We’ll see how this works in the Rollup config below.</p>
<blockquote>
<p>Need a refresher on ES modules?  I recommend the <a href="http://exploringjs.com/es6/ch_modules.html">modules chapter</a> of Dr. Axel Rauschmayer’s excellent book, <a href="http://exploringjs.com/es6.html">Exploring ES6</a>.</p>
</blockquote>
<p>If we don’t yet have a <code>package.json</code>, we can create one via <code>npm init -y</code> or copy/paste:</p>
<pre><code class="language-json">{
  &quot;name&quot;: &quot;my-action&quot;
}
</code></pre>
<p>Then, install <code>rollup</code> and <code>left-pad</code> (assuming <code>npm</code> v5.0.0 or newer):</p>
<pre><code class="language-bash">$ npm i rollup@^0.57.1 -D

+ rollup@0.57.1
added 56 packages from 109 contributors in 2.725s

$ npm i left-pad@^1.2.0

+ left-pad@1.2.0
added 1 package from 1 contributor in 1.222s
</code></pre>
<blockquote>
<p>The versions of <code>rollup</code> and <code>left-pad</code> pinned above (and of any packages we install later) are intended to future-proof this tutorial.  The latest versions of any of these may work just as well; YMMV.</p>
</blockquote>
<p>By default, Rollup looks for a config file in <code>rollup.config.js</code>, so let’s create that now:</p>
<pre><code class="language-js">// notice: this is an ES module
export default {
  input: 'index.js',
  output: {
    file: 'dist/my-action.js',
    format: 'cjs'
  }
};
</code></pre>
<p>This configuration declares that the output will use the CommonJS module format (Node.js-style: <code>require()</code>, <code>module.exports</code>, <code>exports</code>, etc.).</p>
<blockquote>
<p>CommonJS (<code>cjs</code>) format isn’t actually required by OpenWhisk; an IIFE or UMD bundle would also work.  We use it to suppress annoying warnings: if we <em>don’t</em> use <code>cjs</code>, Rollup assumes we are bundling for a browser and takes exception to what we’re trying to do.</p>
</blockquote>
<p>Invoke Rollup now, and see the warning it generates:</p>
<pre><code class="language-bash">$ node_modules/.bin/rollup -c

index.js → dist/my-action.js...
(!) Unresolved dependencies
https://github.com/rollup/rollup/wiki/Troubleshooting#treating-module-as-external-dependency
left-pad (imported by index.js)
created dist/my-action.js in 19ms
</code></pre>
<p>In other words, Rollup doesn’t know what to do with <code>left-pad</code>.  In fact, it’s just going to assume it can be retrieved via <code>require()</code>!  If we dump the contents of <code>dist/my-action.js</code>, we see:</p>
<pre><code class="language-js">'use strict';

Object.defineProperty(exports, '__esModule', { value: true });

function _interopDefault (ex) { return (ex &amp;&amp; (typeof ex === 'object') 
  &amp;&amp; 'default' in ex) ? ex['default'] : ex; }

var leftPad = _interopDefault(require('left-pad'));

function myAction(args) {
  const lines = args.lines || [];
  return { padded: lines.map(l =&gt; leftPad(l, 30, &quot;.&quot;)) }
}

const main = myAction;

exports.main = myAction;
</code></pre>
<p>Rollup has converted our ES module to Node-style, “CommonJS” exports, which is good.  But…</p>
<p>We wanted to bundle <code>left-pad</code>, yet it didn’t happen; our bundle calls <code>require('left-pad')</code> like it’s in a <code>node_modules/</code> somewhere.  This won’t do; we <em>only</em> want to upload <code>dist/my-action.js</code>, and <em>no part</em> of <code>node_modules/</code>.  <em>Where’s the beef?</em></p>
<p>Other than the fact it <a href="https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/">broke the internet</a>, <code>left-pad</code> has two problems:</p>
<ol>
<li><code>left-pad</code> was installed by <code>npm</code> and lives in <code>node_modules/left-pad</code>.</li>
<li><code>left-pad</code> uses CommonJS exports.</li>
</ol>
<p>Rollup makes few assumptions about our environment; indeed, each of the above problems necessitates its own plugin.  Let’s install both now:</p>
<pre><code class="language-bash">$ npm i rollup-plugin-commonjs@^9.1.0 rollup-plugin-node-resolve@^3.3.0 -D

+ rollup-plugin-commonjs@9.1.0
+ rollup-plugin-node-resolve@3.3.0
added 9 packages from 6 contributors in 2.037s
</code></pre>
<p>Modify <code>rollup.config.js</code> as seen here:</p>
<pre><code class="language-js">import commonjs from 'rollup-plugin-commonjs';
import resolve from 'rollup-plugin-node-resolve';

export default {
  input: 'index.js',
  output: {
    file: 'dist/my-action.js',
    format: 'cjs'
  },
  plugins: [resolve(), commonjs()]
};
</code></pre>
<p>If we try bundling again, the warning is gone…</p>
<pre><code class="language-bash">$ node_modules/.bin/rollup -c

index.js → dist/my-action.js...
created dist/my-action.js in 48ms
</code></pre>
<p>…and if we view <code>dist/my-action.js</code>, we will see that the entirety of <code>left-pad</code> is included.  Neat!  We can then deploy the action via:</p>
<pre><code class="language-bash">$ wsk action create my-action dist/my-action.js
</code></pre>
<p>Or, if we’re using the <a href="https://console.bluemix.net/docs/cli/index.html">IBM Cloud CLI</a> (formerly Bluemix CLI):</p>
<pre><code class="language-bash">$ bx wsk action create my-action dist/my-action.js
</code></pre>
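<p>Remember that <code>main</code> export?  We can smoke-test the action logic in plain Node.js before (or after) deploying.  Here’s a sketch; <code>leftPad</code> is stubbed inline to keep the example self-contained, though a real test would import the module under test:</p>
<pre><code class="language-js">const assert = require('assert');

// Stand-in for left-pad; the real module behaves the same for this case.
function leftPad(str, len, ch) {
  return String(str).padStart(len, ch);
}

function myAction(args) {
  const lines = args.lines || [];
  return { padded: lines.map(function (l) { return leftPad(l, 30, '.'); }) };
}

const main = myAction;

// Every padded line should be exactly 30 characters wide.
assert.strictEqual(main({ lines: ['hi'] }).padded[0].length, 30);
assert.deepStrictEqual(main({}).padded, []);
console.log('ok');
</code></pre>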
<p>Next, I’ll discuss a few common recipes I've found useful.</p>
<h2 id="openwhiskrolluprecipes">OpenWhisk &amp; Rollup Recipes</h2>
<p><img src="https://images.unsplash.com/photo-1509358271058-acd22cc93898?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=c29902c006f13d71779eb7dd70c66695" alt="Use Rollup to Bundle JavaScript Actions for Apache OpenWhisk"><br>
<small>Photo by <a href="https://unsplash.com/@pratiksha_mohanty?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Pratiksha Mohanty</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<p>Here are some problems (and solutions, thankfully) I’ve encountered while experimenting with Rollup and OpenWhisk.</p>
<h3 id="usingbuiltinnodejsmodules">Using Built-In Node.js Modules</h3>
<p>What if we wanted some extra logging?  Node.js’ <a href="https://nodejs.org/api/util.html#util_util_inspect_object_options"><code>util.inspect()</code></a> is super handy; we can use it to print only a few items from our <code>lines</code> array:</p>
<pre><code class="language-js">import leftPad from 'left-pad';
import {inspect} from 'util';

function myAction(args) {
  const lines = args.lines || [];
  console.log(inspect(lines, { maxArrayLength: 10 }));
  return { padded: lines.map(l =&gt; leftPad(l, 30, &quot;.&quot;)) }
}

export const main = myAction;
</code></pre>
<p>Right?</p>
<p>Nope nope nope:</p>
<pre><code class="language-bash">$ node_modules/.bin/rollup -c

index.js → dist/my-action.js...
(!) Unresolved dependencies
https://github.com/rollup/rollup/wiki/Troubleshooting#treating-module-as-external-dependency
util (imported by index.js)
created dist/my-action.js in 44ms
</code></pre>
<p>Fortunately, this is straightforward to fix.  We add <code>util</code> to the <code>external</code> <code>Array</code> in <code>rollup.config.js</code>:</p>
<pre><code class="language-js">import commonjs from 'rollup-plugin-commonjs';
import resolve from 'rollup-plugin-node-resolve';

export default {
  input: 'index.js',
  output: {
    file: 'dist/my-action.js',
    format: 'cjs'
  },
  plugins: [resolve(), commonjs()],
  external: ['util']
};
</code></pre>
<p>If I end up using <em>many</em> builtin modules, I like to save myself some trouble and use <a href="https://npm.im/rollup-plugin-auto-external">rollup-plugin-auto-external</a>.</p>
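<p>A minimal sketch of such a configuration, assuming the plugin’s <code>dependencies</code> option is disabled so our own dependencies still get bundled:</p>
<pre><code class="language-js">import autoExternal from 'rollup-plugin-auto-external';
import commonjs from 'rollup-plugin-commonjs';
import resolve from 'rollup-plugin-node-resolve';

export default {
  input: 'index.js',
  output: {
    file: 'dist/my-action.js',
    format: 'cjs'
  },
  // builtins are externalized automatically; no hand-maintained
  // external array for util, fs, path, etc.
  plugins: [autoExternal({ dependencies: false }), resolve(), commonjs()]
};
</code></pre>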
<h3 id="consumingjson">Consuming JSON</h3>
<p>What if we want to read a <code>.json</code> file?  We might write this:</p>
<pre><code class="language-js">import leftPad from 'left-pad';
import {inspect} from 'util';
import {version} from './package.json';

function myAction(args) {
  const lines = args.lines || [];
  console.log(inspect(lines, { maxArrayLength: 10 }));
  return { padded: lines.map(l =&gt; leftPad(l, 30, '.')), version };
}

export const main = myAction;
</code></pre>
<p>But that would fail!</p>
<pre><code class="language-bash">$ node_modules/.bin/rollup -c

index.js → dist/my-action.js...
[!] Error: Unexpected token
package.json (2:8)
1: {
2:   &quot;name&quot;: &quot;openwhisk-rollup&quot;,
           ^
3:   &quot;version&quot;: &quot;1.0.0&quot;,
4:   &quot;description&quot;: &quot;&quot;,
</code></pre>
<p>Shock!  Rollup expects <em>JavaScript</em> files?!  The solution is to pull in <a href="https://npm.im/rollup-plugin-json">rollup-plugin-json</a>:</p>
<pre><code class="language-bash">$ npm install rollup-plugin-json@^2.3.0

+ rollup-plugin-json@2.3.0
added 1 package in 1.629s
$ node_modules/.bin/rollup -c

index.js → dist/my-action.js...
created dist/my-action.js in 41ms
</code></pre>
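<p>(This assumes <code>json()</code> has also been registered in <code>rollup.config.js</code>; here’s a sketch of the updated config:)</p>
<pre><code class="language-js">import commonjs from 'rollup-plugin-commonjs';
import resolve from 'rollup-plugin-node-resolve';
import json from 'rollup-plugin-json';

export default {
  input: 'index.js',
  output: {
    file: 'dist/my-action.js',
    format: 'cjs'
  },
  plugins: [resolve(), commonjs(), json()],
  external: ['util']
};
</code></pre>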
<p>Instead of just embedding the entire file, the plugin grabs whatever portion of the <code>.json</code> file we <em>actually use</em>, effectively tree-shaking the JSON itself.  In <code>dist/my-action.js</code>, we’ll see:</p>
<pre><code class="language-js">var version = &quot;1.0.0&quot;;
</code></pre>
<p>Good work, <code>rollup-plugin-json</code>.</p>
<h3 id="thirdpartymodulesintheenvironment">Third-Party Modules in The Environment</h3>
<p>IBM provides many commonly used modules in its OpenWhisk service, <a href="https://www.ibm.com/cloud/functions">IBM Cloud Functions</a>.  The list of these <em>pre-installed</em> modules which Actions can use is found <a href="https://console.bluemix.net/docs/openwhisk/openwhisk_reference.html#openwhisk_ref_javascript">in the documentation</a>.</p>
<p>Practically speaking, this means <em>we don’t need to bundle these</em> modules.  They can be ignored, just like a built-in, as shown above.  We add any which we're using to the <code>external</code> <code>Array</code> in <code>rollup.config.js</code>.</p>
<blockquote>
<p>It's still helpful to <code>npm install</code> any of these which we’re using (for code completion, testing, etc.).  They must be added to the <code>external</code> <code>Array</code>, regardless.</p>
</blockquote>
<p>Alternatively, use <a href="https://npm.im/rollup-plugin-auto-external">rollup-plugin-auto-external</a> with option <code>{dependencies: false}</code>; then add the modules (as <a href="https://www.npmjs.com/package/minimatch">minimatch</a> globs) which we <em>do</em> want to bundle to the <code>Array</code> <code>include</code> property of the <code>commonjs</code> plugin’s configuration object.  Here’s an example of a Rollup config where we consume the pre-installed <a href="https://npm.im/request-promise">request-promise</a> module, but exclude it from the bundle:</p>
<pre><code class="language-js">import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import autoExternal from 'rollup-plugin-auto-external';

export default {
  // (imagine there is more configuration here)
  plugins: [
    resolve(),
    commonjs({
      // left-pad is the ONLY dependency which is bundled
      include: ['node_modules/left-pad/**']
    }),
    autoExternal({
      // continue to exclude &quot;util&quot;, or any other built-in
      builtins: true,
      // default is true, but we still must bundle left-pad, so this is false.
      dependencies: false
    })
  ],
  // request-promise is a dependency in package.json
  external: ['request-promise']
}
</code></pre>
<h3 id="smooshingthebundle">Smooshing The Bundle</h3>
<p><img src="https://boneskull.com/content/images/2018/03/squeeze.jpg" alt="Use Rollup to Bundle JavaScript Actions for Apache OpenWhisk"> <small>Photo by <a href="https://www.flickr.com/photos/dolmansaxlil/">Sharon Drummond</a> / <a href="https://www.flickr.com">Flickr</a></small></p>
<p>If we follow <a href="https://github.com/apache/incubator-openwhisk/blob/master/docs/actions.md#package-an-action-as-a-single-bundle">OpenWhisk's documentation on using webpack</a> to the letter, our resulting bundle will be <em>minified</em>.</p>
<p>In smaller bundles, this <em>ain’t gonna matter</em>.  But with larger bundles, we should shave some milliseconds off of startup time.</p>
<p>To minify with Rollup, install <a href="https://npm.im/rollup-plugin-uglify">rollup-plugin-uglify</a>:</p>
<pre><code class="language-bash">$ npm i rollup-plugin-uglify -D
</code></pre>
<p>Then:</p>
<pre><code class="language-js">import uglify from 'rollup-plugin-uglify';
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';

export default {
  // (imagine there is more configuration here)
  plugins: [
    resolve(),
    commonjs(),
    uglify()
  ]
}
</code></pre>
<p>We can squeeze more bytes out of this by providing options to the <code>uglify()</code> function.  By default, it doesn’t mangle top-level variable names.  This is how we’d do that:</p>
<pre><code class="language-js">plugins: [
  resolve(),
  commonjs(),
  uglify({
    mangle: {
      toplevel: true
    }
  })
]
</code></pre>
<p>See the <a href="https://github.com/mishoo/UglifyJS2#minify-options">UglifyJS docs</a> for more options.</p>
<h2 id="obligatorywrapup">Obligatory Wrap-Up</h2>
<p>Let me remind the reader what the reader read:</p>
<ul>
<li>We learned the difference between <a href="https://rollupjs.org">Rollup</a> and <a href="https://webpack.js.org">webpack</a></li>
<li>We learned how to use Rollup to bundle an <a href="https://openwhisk.apache.org/">OpenWhisk</a> Action</li>
<li>We learned how to
<ul>
<li>Use built-in <a href="https://nodejs.org">Node.js</a> modules</li>
<li>Bundle JSON files</li>
<li>Consume pre-installed third-party modules</li>
<li>Minify our bundle</li>
</ul>
</li>
</ul>
<p>So far, I’ve found Rollup works <em>just as well</em> as webpack for OpenWhisk Action deployment.  I don’t see a clear winner, other than they both beat <code>.zip</code> files.  Until I do, I’ll probably continue using Rollup, just ‘cause.  What I’d <em>really</em> like to see is a zero-configuration, purpose-built bundler for Node.js OpenWhisk actions.  Hmmm…</p>
]]></content:encoded></item><item><title><![CDATA[VSCode for WebStorm Users]]></title><description><![CDATA[As of late, if I’m watching a presentation, and someone is writing code in an editor, that editor is almost always VSCode.    

Something’s up, and I’m going to get to the bottom of it.]]></description><link>https://boneskull.com/vscode-for-webstorm-users/</link><guid isPermaLink="false">5aa1cdc5b4b1760603db9f30</guid><category><![CDATA[vscode]]></category><category><![CDATA[jetbrains]]></category><category><![CDATA[webstorm]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Fri, 09 Mar 2018 15:00:00 GMT</pubDate><media:content url="https://boneskull.com/content/images/2018/03/webstorm-vscode-2.png" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2018/03/webstorm-vscode-2.png" alt="VSCode for WebStorm Users"><p>I love <a href="https://en.wikipedia.org/wiki/JetBrains">JetBrains</a>’ IDEs.  I’ve been a faithful user since <a href="https://en.wikipedia.org/wiki/PyCharm">PyCharm</a>’s release, seven years ago.</p>
<p>As of late, if I’m watching a presentation, and someone is writing code in an editor, that editor is <em>almost always</em> VSCode.</p>
<p>Something’s up, <em>and I’m going to get to the bottom of it</em>.  People rave about this thing.</p>
<p>I'll answer some questions for myself—and, with luck, maybe I can save the JetBrains-faithful some time and energy.  I aim to discover:</p>
<ul>
<li>Does it support my key bindings, or will I need to relearn everything?</li>
<li>What’s the analog of a “Run Configuration?”</li>
<li>What’s debugging look like?  How’s the source map support?</li>
<li>How easy is it to configure?</li>
<li>How’s the extension ecosystem?</li>
<li>How does the VCS (Git) integration differ?</li>
<li>What’s the story on inline errors or warnings?</li>
<li>How smart is it about types and code completion?</li>
</ul>
<p>I’ll be looking at this from the standpoint of a JavaScript developer, so I’ll write “WebStorm,” but I really mean “a JetBrains IDE.”</p>
<blockquote>
<p>I’m certainly interested in how VSCode handles Python and C/C++, but I’m not going to explore it in this post.</p>
</blockquote>
<h2 id="firstimpressions">First Impressions</h2>
<p>I used <a href="https://caskroom.github.io/">Homebrew Cask</a> to install it:</p>
<pre><code class="language-bash">$ brew cask install visual-studio-code
==&gt; Satisfying dependencies
==&gt; Downloading https://az764295.vo.msecnd.net/stable/f88bbf9137d24d36d968ea6b2911786bfe103002/VSCode-darwin-stable.zip
==&gt; Verifying checksum for Cask visual-studio-code
==&gt; Installing Cask visual-studio-code
==&gt; Moving App 'Visual Studio Code.app' to '/Users/boneskull/Applications/Visual Studio Code.app'.
==&gt; Linking Binary 'code' to '/usr/local/bin/code'.
🍺  visual-studio-code was successfully installed!
</code></pre>
<p>This takes ~4s on my 2016 MBP—but I don’t have any extensions installed yet.</p>
<p>I’m greeted with this:</p>
<p><img src="https://boneskull.com/content/images/2018/03/vscode-initial.png" alt="VSCode for WebStorm Users"><br>
<small>VSCode’s “Welcome Page”</small></p>
<p>It has also opened a web page in Chrome:</p>
<p><img src="https://boneskull.com/content/images/2018/03/vscode-web.png" alt="VSCode for WebStorm Users"><br>
<small>Online tutorials &amp; such for VSCode</small></p>
<p>I ignore the web page (thanks, but no thanks) and click a few of the “Install support for…” links under <strong>Tools and languages</strong> to get some basic extensions installed.  I want to avoid customizing too much at the outset.  Since C/C++ isn’t listed on the “Welcome Page”, I dig into the extensions and … can’t help installing a bunch of extensions anyway.  Oops.</p>
<p>I am, however, elated to report that the <a href="https://marketplace.visualstudio.com/items?itemName=isudox.vscode-jetbrains-keybindings">JetBrains IDE Keymap</a> is a thing, and it works great.</p>
<p>I go ahead and open up my <a href="https://github.com/mochajs/mocha">Mocha</a> working copy…</p>
<h3 id="summary">Summary</h3>
<ul>
<li><strong>Install “Extension Packs” to get started quickly.</strong>  You will install many extensions.</li>
<li>There’s <strong>plenty of tutorials</strong>.</li>
<li>If you're using the default bindings in WebStorm, you likely want to <strong>install the <a href="https://marketplace.visualstudio.com/items?itemName=isudox.vscode-jetbrains-keybindings">JetBrains IDE Keymap</a></strong> extension.</li>
<li>Key bindings are at once more powerful and complex than WebStorm.  It’s <strong>difficult to discover what a particular keystroke does at any given time</strong>, but also <strong>supports conditionals, for a nearly absurd level of control</strong>.</li>
</ul>
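<p>As an example of those conditionals, an entry in <code>keybindings.json</code> can be scoped with a <code>when</code> clause; the binding below is illustrative, not one I recommend:</p>
<pre><code class="language-json">[
  {
    "key": "cmd+r",
    "command": "workbench.action.tasks.runTask",
    "when": "editorTextFocus &amp;&amp; !inDebugMode"
  }
]
</code></pre>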
<h2 id="closeencounterswithversioncontrol">Close Encounters with Version Control</h2>
<p>OK, so I want to edit Mocha’s <code>CHANGELOG.md</code>.  But I know <code>origin</code> has changes I need to pull.</p>
<p>Happily, VSCode understands this is, in fact, a working copy.  To pull, I found a little “refresh” button in my status bar, and clicked it.  I <em>think</em> it worked?  It said &quot;sync.&quot;  What's a &quot;sync&quot;?</p>
<p>I’m unsure what just happened.  Which changesets did Git pull?  I want to look at the history.  After searching in vain, I realize that there is <em>no built-in support for Git history</em>, and I’m going to need to grab an extension for this.</p>
<p><a href="https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens">GitLens</a> seems to solve this problem (and others).  But there are <em>many</em> Git-related extensions for VSCode.  This is a drawback of the “small core” philosophy of VSCode (this reminds me of the Node.js ecosystem).  To VSCode’s credit, it aids discovery with tags, filters and sorting.</p>
<p>GitLens does some oddball things like “inline blame” and “code lens” (which is another view into “blame”?  I don’t get it).  I want to turn this noisy stuff off.</p>
<p>Hint: run <code>GitLens: Toggle Code Lens</code> and <code>GitLens: Toggle Line Blame Annotations</code> from the Command Palette.</p>
<p>GitLens then provides Git history in the left sidebar.  The presentation of the repo is a little disorienting (there are so many trees, it's like a forest), but I <em>do</em> see the pulled changesets.</p>
<p>Whew.</p>
<hr>
<p>I’ve made my changes to <code>CHANGELOG.md</code>, and it’s time to commit.  VSCode helpfully marks the file with a big <code>M</code> in the file list.  <em>+1</em>.</p>
<p>I find <code>Git: Commit</code> via the Command Palette, and realize I could have used my trusty <code>⌘-K</code>.  But I’m prompted that the stage is empty.</p>
<p>If you <em>only</em> use WebStorm’s built-in version control client (I don't), this will be culture shock.  VSCode uses the stage, like literally <em>every other</em> Git client except WebStorm’s.</p>
<p>I go ahead and commit everything (including unstaged changes; this would be <code>Git: Commit All</code> if I wanted to avoid the prompt), and push.</p>
<p>Not the nicest initial experience, but I’m confident it’ll be smoother sailing from here.</p>
<p>Next, I’ll run Mocha’s test suites to prepare for publishing.</p>
<h3 id="summary">Summary</h3>
<ul>
<li>You will likely want to <strong>install the Git extension pack</strong>, due to VSCode’s basic client implementation.</li>
<li>VSCode <strong>uses the stage</strong>, unlike WebStorm.</li>
<li>VSCode <strong>automatically enables Git support for working copies</strong> instead of <em>prompting</em> you into oblivion.</li>
</ul>
<h2 id="tasksinvscode">Tasks in VSCode</h2>
<p>I think a “Task” is perhaps a “Run Configuration” or “External Tool”.</p>
<p>“Tasks” are <em>not</em> the same as “Tasks” in WebStorm, which is WebStorm’s (leaky) abstraction around issues, staged changes and branching.</p>
<p>Let’s see what “Configure Tasks” does…</p>
<p><img src="https://boneskull.com/content/images/2018/03/313910D3-2F0E-43A0-B382-9AAEA160B5C0.png" alt="VSCode for WebStorm Users"><small>What do you call a widget like this, anyway?</small></p>
<p>Ooook, that needs some further explanation, but sure.  Is it trying to automatically detect my npm scripts?  For reference, Mocha’s <code>scripts</code> field in its <code>package.json</code> is literally just:</p>
<pre><code class="language-json">{
  &quot;scripts&quot;: {
    &quot;prepublishOnly&quot;: &quot;nps test clean build&quot;,
    &quot;start&quot;: &quot;nps&quot;,
    &quot;test&quot;: &quot;nps test&quot;
  }
}
</code></pre>
<p>I’ll roll the dice with <code>npm: test</code>.</p>
<p>VSCode creates and opens a <code>.vscode/tasks.json</code> file:</p>
<p><img src="https://boneskull.com/content/images/2018/03/B54292E9-1569-4E60-B676-28713CF7370F-1.png" alt="VSCode for WebStorm Users"><small>At least it's not XML, amirite?</small></p>
<p><em>Fascinating</em>.  I click the link and <a href="https://code.visualstudio.com/docs/editor/tasks#vscode">learn about this file</a>.  It’s unclear whether VSCode intends for the user to commit <code>.vscode/</code> to VCS (I don’t do so; in fact, I add <code>.vscode/</code> to my <code>.gitignore</code> post-haste).</p>
<p>VSCode likely didn’t need me to “configure” the Task—it discovered the script itself.  I execute <code>Run Task…</code>, choose <code>npm: test</code>, and the output opens in a terminal, as you’d expect.</p>
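<p>For the curious, a minimal <code>tasks.json</code> along these lines should wire up the same thing (my own sketch pieced together from the docs, not a copy of what VSCode generated):</p>
<pre><code class="language-json">{
  &quot;version&quot;: &quot;2.0.0&quot;,
  &quot;tasks&quot;: [
    {
      &quot;type&quot;: &quot;npm&quot;,
      &quot;script&quot;: &quot;test&quot;,
      &quot;problemMatcher&quot;: []
    }
  ]
}
</code></pre>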
<p>I’m now certain “Tasks” are analogous to “External Tools”.  Like in WebStorm, the user (seemingly) <em>cannot debug</em> a Task, and there’s little integration.  VSCode ships with some helpers for common build tools (unfortunately, <a href="https://npm.im/nps">nps</a> is not one of them).  Like WebStorm, the user has free rein to create a “shell”-based Task.</p>
<p>I’m still looking for the analog of a “Run Configuration,” which appears to be a “Debug Configuration,” though it’s just called “Configuration” under VSCode’s “Debug” menu.  Next, I’ll take it for a test-drive.  Whatever the hell it’s called.</p>
<h3 id="summary">Summary</h3>
<ul>
<li><strong>“Tasks” in VSCode are analogous to WebStorm’s “External Tools”</strong>.</li>
<li><strong>Configure tasks via JSON files</strong>.  This isn’t too awesome, but VSCode provides validation/completion when editing, which is better than nothing.</li>
<li>You must <strong>choose whether or not to commit <code>.vscode/</code> to VCS</strong>.  I’ve <em>never</em> had success committing <em>any</em> sliver of <code>.idea/</code> to VCS, but maybe the story is different here.  Don’t look at me; find some other guinea pig.</li>
</ul>
<blockquote>
<p><em>Trivia:</em> VSCode recognizes the filetype of <code>tasks.json</code> as “JSON with Comments”, which, AFAIK, is unadulterated, imaginary nonsense.  Is it <a href="http://json5.org">JSON5</a> or not?</p>
</blockquote>
<h2 id="debugging">Debugging</h2>
<p><img src="https://images.unsplash.com/photo-1508896694512-1eade558679c?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=db6ea36be475ddefc080f9c5f422d903" alt="VSCode for WebStorm Users"><br>
<small>Photo by <a href="https://unsplash.com/@minkmingle?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Mink Mingle</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<p>First, I install the <a href="https://marketplace.visualstudio.com/items?itemName=waderyan.nodejs-extension-pack">Node.js Extension Pack</a>, since I figure I’ll need it to debug properly.</p>
<p>From VSCode’s menu, I open <code>Debug &gt; Open Configurations</code>.  I’m presented with a new file, <code>.vscode/launch.json</code>.  Like <code>tasks.json</code>, this is my config file.</p>
<blockquote>
<p>This menu item inexplicably corresponds to the command <code>Debug: Open launch.json</code>.</p>
</blockquote>
<h3 id="innodejs">In Node.js</h3>
<p>Do you want the good news or bad news first?  I don't care.</p>
<p>The <em>bad news</em> is, I can’t just throw <code>npm test</code> in <code>launch.json</code> and expect my breakpoints to get hit.  Why not?  Because <code>npm</code> spawns <code>nps</code>, which spawns ten different <code>mocha</code> processes in series, each of which spawns <code>_mocha</code>, and many of which spawn <code>_mocha</code> <em>again</em>.</p>
<p>This is by no means VSCode's fault, but the story around debugging subprocesses in Node.js is a sad one.  I'd be foolish to expect a miracle.</p>
<p>The <em>good news</em> is that if I choose some subset of the tests to run with <code>bin/_mocha</code>, debugging works well (unless I want to debug more child processes).  It’s a solid debugging experience, though lacking some of the bells &amp; whistles of WebStorm’s debugger.</p>
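<p>A “launch”-type configuration along these lines does the trick for a single test file (the <code>args</code> here are purely illustrative; point them at whatever spec you’re debugging):</p>
<pre><code class="language-json">{
  &quot;type&quot;: &quot;node&quot;,
  &quot;request&quot;: &quot;launch&quot;,
  &quot;name&quot;: &quot;Debug _mocha&quot;,
  &quot;program&quot;: &quot;${workspaceRoot}/bin/_mocha&quot;,
  &quot;args&quot;: [&quot;--timeout&quot;, &quot;0&quot;, &quot;test/unit/some.spec.js&quot;]
}
</code></pre>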
<h3 id="inabrowser">In a Browser</h3>
<p>Debugging tests with <a href="http://karma-runner.github.io/2.0/index.html">Karma</a>, awkward is.</p>
<p>In WebStorm, you create a Karma-based run configuration, point it at your config file, specify any particular browsers or other extra options, and push the button.  It works well, even if you happen to be bundling your code with <a href="https://www.npmjs.com/package/karma-browserify">karma-browserify</a>, as Mocha does.</p>
<p>This is the VSCode experience:</p>
<ol>
<li>You need to install the <a href="https://github.com/Microsoft/vscode-chrome-debug">Chrome Debugger</a> extension.</li>
<li>Create a <code>chrome</code> debug configuration identical to:</li>
</ol>
<pre><code class="language-json">{
  &quot;type&quot;: &quot;chrome&quot;,
  &quot;request&quot;: &quot;attach&quot;,
  &quot;name&quot;: &quot;Attach to Karma&quot;,
  &quot;address&quot;: &quot;localhost&quot;,
  &quot;port&quot;: 9333,
  &quot;pathMapping&quot;: {
    &quot;/&quot;: &quot;${workspaceRoot}/&quot;,
    &quot;/base/&quot;: &quot;${workspaceRoot}/&quot;
  }
}
</code></pre>
<ol start="3">
<li>Modify your <code>karma.conf.js</code> (<em>ugh</em>, really?) to add a custom launcher to your setup object.  The port below must be the same port as above.  Hope it’s not in use!</li>
</ol>
<pre><code class="language-js">{
  customLaunchers: {
    ChromeDebug: {
      base: 'Chrome',
      flags: ['--remote-debugging-port=9333']
    }
  }
}
</code></pre>
<ol start="4">
<li>Start Karma (you can create a Task to do this): <code>karma start --browsers ChromeDebug --auto-watch --no-single-run</code>.  Leave it running.</li>
<li>Run your “Attach to Karma” debug configuration; choose it from “Debug” &gt; “Run Configuration…”.</li>
</ol>
<p>At this point, you can set breakpoint(s) by clicking in the editor’s gutter, though they will not immediately be enabled.</p>
<p>There appears a small, odd, quasi-movable toolbar near the top of the window. Within this impish toolbar is a “refresh”-looking button; click it to re-run the tests.  VSCode will then be able to discover which scripts/files Karma loaded. If you’re lucky, it’ll even hit your breakpoint!</p>
<p>The above took a solid hour to figure out, even with a few scattered examples out there.</p>
<p>The last thing I want to evaluate is VSCode’s “IntelliSense” capabilities.</p>
<h3 id="summary">Summary</h3>
<ul>
<li><strong>“Run Configurations” = “Debug Configurations”</strong>. I’m calling them “Debug Configurations”, so there.</li>
<li><strong>Configure “Debug Configurations” via JSON</strong>, like “Tasks”.</li>
<li>VSCode has <strong>very basic breakpoints</strong> without support for features such as enabling/disabling based on previous breakpoints, or disabling once hit.</li>
<li><strong>Debugging in Karma is a poor experience</strong>.  I couldn’t find a Karma-specific extension to help with this.</li>
</ul>
<h2 id="codecompletioninspectionsintentionsohmy">Code Completion, Inspections, &amp; Intentions (Oh My)</h2>
<p>Assuming you use ESLint, install the <a href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint">ESLint extension</a>.</p>
<p>In terms of inspections, I just want ESLint to run on my JavaScript.  I don’t need any other inspections, so I disable all the random crap WebStorm ships with.</p>
<p>The ESLint extension “just works,” and the user sees the same type of inline inspections as from ESLint within WebStorm, right down to the intentions, like “Fix file with ESLint”.</p>
<p>The Node.js Extension Pack provides some npm-related &quot;IntelliSense,&quot; which knows important stuff about <code>package.json</code>: if a package is missing or extraneous, it will tell you so, and can automatically fix it for you via an intention.</p>
<blockquote>
<p>Is “IntelliSense” a trademark or something?  It’s a Microsoft-ism, right?  I am pretty sure I hate this word.</p>
</blockquote>
<p>I believe VSCode uses TypeScript definitions under the hood (not just in TypeScript files) to inspect code, at least in part.  This results in <em>extremely accurate and nearly instantaneous</em> code completion, jump-to-declaration, and the like.  If there’s any “killer feature” for JavaScript developers, it’s this.  You can coax such behavior out of WebStorm, but it will still be much more sluggish.</p>
<p>A “deep dive” into this would be an excellent source of information for WebStorm users (think: CSS, HTML, TypeScript, template languages, etc.), but is unfortunately out-of-scope.  I’ll end with my final thoughts below.</p>
<h3 id="summary">Summary</h3>
<ul>
<li>VSCode (and/or its extensions) provide <strong>accurate, zippy code-completion and inline docs for JavaScript</strong>.</li>
<li>VSCode <strong>does not ship with a bunch of built-in inspections.</strong>  If you use them, you’ll miss them, unless you find alternatives.</li>
<li>The <strong>ESLint experience is strong</strong>.</li>
<li>The <strong>npm-related experience is excellent</strong>.</li>
</ul>
<h2 id="vscodetheverdict">VSCode: The Verdict</h2>
<p><img src="https://images.unsplash.com/photo-1505739818593-e7506ebf74c0?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=3d7d9b0bfaf9a9e809cae01625c1582f" alt="VSCode for WebStorm Users"><br>
<small>Photo by <a href="https://unsplash.com/@chuttersnap?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">chuttersnap</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<p>Visual Studio Code is <strong>better than I expected</strong>.  It’s comparable with WebStorm—by way of extensions—in terms of feature set.  It has a couple notable advantages over WebStorm, however:</p>
<ul>
<li>VSCode is generally faster and more responsive than WebStorm</li>
<li>VSCode’s JavaScript inspection/completion experience is just plain better</li>
<li>Free as in beer</li>
<li>Much exciting!</li>
</ul>
<p>Notable disadvantages include:</p>
<ul>
<li>Poor browser-based debugging experience (for me, anyway)</li>
<li>Piecemeal (though complete) Git support</li>
<li>JSON-based configuration</li>
<li>No customer support</li>
</ul>
<p>As a developer, my code editor is my most important tool.  Having spent many years with JetBrains and WebStorm—and having very little dissatisfaction—a different tool must be <em>incredibly</em> compelling for me to want to pick it up.</p>
<p>My advice for WebStorm users?  <em>Don’t try VSCode unless you are prepared to switch.</em></p>
]]></content:encoded></item><item><title><![CDATA[Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2]]></title><description><![CDATA[In this tutorial, you'll learn how to send ambient temperature data over MQTT using MicroPython on an ESP32, & how to do the same with Watson IoT Platform.]]></description><link>https://boneskull.com/micropython-on-esp32-part-2/</link><guid isPermaLink="false">5a690ab7233e79218a31df23</guid><category><![CDATA[micropython]]></category><category><![CDATA[python]]></category><category><![CDATA[mqtt]]></category><category><![CDATA[tutorial]]></category><category><![CDATA[watson]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Thu, 25 Jan 2018 15:00:00 GMT</pubDate><media:content url="https://boneskull.com/content/images/2018/01/2714752119_5ec4e83d09_b.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2018/01/2714752119_5ec4e83d09_b.jpg" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><p><a href="https://boneskull.com/micropython-on-esp32-part-1/">In the first part</a> of this excruciating tutorial, I taught the reader how to begin with MicroPython on an ESP32-based development board.  We:</p>
<ol>
<li>Flashed the board</li>
<li>Frolicked in the REPL</li>
<li>Configured WiFi</li>
<li>Uploaded scripts</li>
<li>Built a circuit with a DS18B20 1-Wire temperature sensor</li>
<li>Used MicroPython to read the temperature</li>
</ol>
<p>In <em>this</em> part of the tutorial, we’ll take the data we gather with the sensor and publish it over MQTT.</p>
<p>If you’re unfamiliar with the concept, I’ll try to explain MQTT in a nutshell.</p>
<h2 id="mqttinanutshell">MQTT in a Nutshell</h2>
<p><a href="https://en.wikipedia.org/wiki/MQTT">MQTT</a> is a machine-to-machine <em>protocol</em> for publishing and subscribing to messages.  Importantly, MQTT imposes no constraints upon the <em>content</em> nor <em>structure</em> of those messages.</p>
<p><em>In a typical setup</em>, you have a single MQTT <em>broker</em> and one-or-many MQTT <em>clients</em>.  A client may publish messages, subscribe to messages, or both.  A client could be an IoT device, a web app, a desktop or mobile app, a microservice, or anything else entirely, as long as it speaks MQTT.</p>
<p>All clients connect to the broker. The broker is responsible for receiving published messages and (possibly) delivering them to interested clients.</p>
<p>Each message has a “topic”.  As is vital to the publish/subscribe pattern, a message’s publisher <em>doesn’t necessarily care</em> if anyone is listening.  Interested clients will <em>subscribe</em> to this topic.</p>
<h3 id="amqttexample">An MQTT Example</h3>
<p>You have an MQTT client—perhaps a device with a temperature sensor—called <code>bob</code> which wants to publish temperature data.  It may publish on a topic such as <code>bob/sensor/temperature</code>, and the message would be the data, e.g., <code>68.75</code>.</p>
<p>Another MQTT client, <code>ray</code>, may want to listen for temperature data so it can display it as a time-series graph on a dashboard; <code>ray</code> would tell the broker it wishes to subscribe to the <code>bob/sensor/temperature</code> topic.  Finally, when <code>bob</code> publishes on this topic, the broker notifies <code>ray</code>, and <code>ray</code> receives the message.  <code>ray</code> can then do whatever it needs with the data.</p>
<h3 id="wildcards">Wildcards</h3>
<p>Subscriptions support <em>wildcards</em>.  If client <code>bob</code> had another sensor which reports the relative humidity, it may publish this data under the topic <code>bob/sensor/humidity</code>.  Client <code>ray</code> could use a <em>single-level wildcard</em> such as <code>bob/sensor/+</code>, which would receive messages published on <code>bob/sensor/humidity</code> <em>and</em> <code>bob/sensor/temperature</code>.  Or perhaps a <em>multi-level wildcard</em> such as <code>bob/#</code>, which would subscribe to <em>any</em> topic beginning with <code>bob/</code>.</p>
<blockquote>
<p>A “topic per client” is merely a convention for sake of our example.  MQTT enforces no such constraint.</p>
</blockquote>
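<p>Topic matching is simple enough to sketch in a few lines of ordinary Python (a simplified illustration of the wildcard rules above; real brokers handle extra edge cases):</p>
<pre><code class="language-python">def topic_matches(topic_filter, topic):
    # simplified MQTT topic matching with '+' and '#' wildcards
    filter_parts = topic_filter.split('/')
    topic_parts = topic.split('/')
    while filter_parts:
        part = filter_parts.pop(0)
        if part == '#':
            # multi-level wildcard matches everything that remains
            return True
        if not topic_parts:
            return False
        head = topic_parts.pop(0)
        if part != '+' and part != head:
            return False
    # otherwise, both must be exhausted for a match
    return not topic_parts

print(topic_matches('bob/sensor/+', 'bob/sensor/humidity'))  # True
print(topic_matches('bob/#', 'bob/sensor/temperature'))      # True
print(topic_matches('bob/sensor/+', 'ray/sensor/humidity'))  # False
</code></pre>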
<p>There’s certainly <em>more to it</em> than just the above—but that’s the nutshell, and I’m calling it good.</p>
<p><img src="https://images.unsplash.com/photo-1507666405895-422eee7d517f?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=a2e2e7cf8d2b94438a5fd9efb45ce73c" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><br>
<small>Photo by <a href="https://unsplash.com/@cmart10?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Caleb Martin</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<h2 id="whymqtt">Why MQTT?</h2>
<p>It’s just as important to understand <em>why</em> you’d want to use a technology over another (or none at all).</p>
<p>MQTT's designers had resource-constrained devices (such as sensors) in mind; it’s a “thin” protocol, and easier to implement compared to, say, HTTP.  As such, you’ll find that MQTT is a core technology behind many cloud-based “IoT platforms”, including the offerings of <a href="https://internetofthings.ibmcloud.com/#/">IBM</a>, <a href="https://aws.amazon.com/iot/">Amazon</a>, <a href="https://azure.microsoft.com/en-us/services/iot-hub/">Microsoft</a>, <a href="https://io.adafruit.com/">Adafruit</a>, and many others.</p>
<p>You <em>can</em> directly access many of these services via <a href="https://en.wikipedia.org/wiki/Representational_state_transfer">RESTful</a> APIs, but it will necessarily consume more of your devices’ resources to do so.</p>
<blockquote>
<p>Using HTTP(S) instead of MQTT makes sense if you need to make a <a href="https://en.wikipedia.org/wiki/Remote_procedure_call">remote procedure call</a>, or if a <a href="https://en.wikipedia.org/wiki/Request%E2%80%93response">request/response</a> model is more natural than MQTT's publish/subscribe model in your problem domain.  Even then, protocols such as <a href="https://en.wikipedia.org/wiki/Constrained_Application_Protocol">CoAP</a> will demand fewer resources.</p>
</blockquote>
<p>Now that we understand what MQTT is all (or more accurately, “partly”) about, let’s use it to spread the word about our ambient temperatures.</p>
<h2 id="bootscriptandtemperaturemodule">Boot Script and Temperature Module</h2>
<p>We’ll begin with the code from the last tutorial.  For reference, I’ll show both files below.</p>
<p>You should have two (2) files, the first being our startup script, <code>boot.py</code>:</p>
<pre><code class="language-python">def connect():
    import network
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print('connecting to network...')
        sta_if.active(True)
        sta_if.connect('&lt;YOUR SSID&gt;', '&lt;YOUR PASSWORD&gt;')
        while not sta_if.isconnected():
            pass
    print('network config:', sta_if.ifconfig())

def no_debug():
    import esp
    # you can run this from the REPL as well
    esp.osdebug(None)

no_debug()
connect()
</code></pre>
<p>And the second is <code>temperature.py</code>, an abstraction around the temperature sensor:</p>
<pre><code class="language-python">import time
from machine import Pin
from onewire import OneWire
from ds18x20 import DS18X20


class TemperatureSensor:
    &quot;&quot;&quot;
    Represents a Temperature sensor
    &quot;&quot;&quot;
    def __init__(self, pin):
        &quot;&quot;&quot;
        Finds address of single DS18B20 on bus specified by `pin`
        :param pin: 1-Wire bus pin
        :type pin: int
        &quot;&quot;&quot;
        self.ds = DS18X20(OneWire(Pin(pin)))
        addrs = self.ds.scan()
        if not addrs:
            raise Exception('no DS18B20 found at bus on pin %d' % pin)
        # save what should be the only address found
        self.addr = addrs.pop()

    def read_temp(self, fahrenheit=True):
        &quot;&quot;&quot;
        Reads temperature from a single DS18X20
        :param fahrenheit: Whether or not to return value in Fahrenheit
        :type fahrenheit: bool
        :return: Temperature
        :rtype: float
        &quot;&quot;&quot;

        self.ds.convert_temp()
        time.sleep_ms(750)
        temp = self.ds.read_temp(self.addr)
        if fahrenheit:
            return self.c_to_f(temp)
        return temp

    @staticmethod
    def c_to_f(c):
        &quot;&quot;&quot;
        Converts Celsius to Fahrenheit
        :param c: Temperature in Celsius
        :type c: float
        :return: Temperature in Fahrenheit
        :rtype: float
        &quot;&quot;&quot;
        return (c * 1.8) + 32
</code></pre>
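<p>The conversion formula is easy to sanity-check in regular (desktop) Python, since it’s plain arithmetic:</p>
<pre><code class="language-python"># same formula as TemperatureSensor.c_to_f above
def c_to_f(c):
    return (c * 1.8) + 32

print(c_to_f(0))    # 32.0
print(c_to_f(100))  # 212.0
print(c_to_f(20))   # 68.0
</code></pre>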
<p>Upload both of these files via <code>ampy</code>:</p>
<pre><code class="language-bash">$ ampy --port /dev/tty.SLAB_USBtoUART put boot.py &amp;&amp; \
  ampy --port /dev/tty.SLAB_USBtoUART put temperature.py
</code></pre>
<p>(Replace <code>/dev/tty.SLAB_USBtoUART</code> with your device path or COM port.)</p>
<p>In the first part of this tutorial, I told you to download (or clone)  the <a href="https://github.com/micropython/micropython-lib">micropython-lib</a> project.  This is not necessary!  Read on.</p>
<h2 id="installthemqttmodulesviaupip">Install the MQTT Modules via <code>upip</code></h2>
<p>Since your device should be online, we can use <code>upip</code> from the REPL.  <code>upip</code> is a stripped-down package manager for MicroPython.  It’s built into the ESP32 port of MicroPython, so you already have it. It downloads packages from <a href="https://pypi.org">PyPI</a>, just like <code>pip</code>.</p>
<p>Open your REPL, and execute:</p>
<pre><code class="language-python">import upip
upip.install('micropython-umqtt.robust')
</code></pre>
<p>Sample output:</p>
<pre><code class="language-plain">Installing to: /lib/
Warning: pypi.python.org SSL certificate is not validated
Installing micropython-umqtt.robust 1.0 from https://pypi.python.org/packages/31/02/7268a19a5054cff8ff4cbbb126f00f098848dbe8f402caf083295a3a6a11/micropython-umqtt.robust-1.0.tar.gz
</code></pre>
<blockquote>
<p>Take note: if your device isn’t online, <code>upip</code> won’t work from the device’s REPL.</p>
</blockquote>
<p>You also need to grab its dependency, <code>micropython-umqtt.simple</code>:</p>
<pre><code class="language-python">upip.install('micropython-umqtt.simple')
</code></pre>
<p>Sample output:</p>
<pre><code class="language-plain">Installing to: /lib/
Installing micropython-umqtt.simple 1.3.4 from https://pypi.python.org/packages/bd/cf/697e3418b2f44222b3e848078b1e33ee76aedca9b6c2430ca1b1aec1ce1d/micropython-umqtt.simple-1.3.4.tar.gz
</code></pre>
<blockquote>
<p><code>umqtt.simple</code> is a barebones MQTT client.  <code>umqtt.robust</code> depends on <code>umqtt.simple</code>; it’s an MQTT client which will automatically reconnect to the broker if a disconnection occurs.</p>
</blockquote>
<p>To verify that this installed properly, you can execute from your REPL:</p>
<pre><code class="language-python">from umqtt.robust import MQTTClient
</code></pre>
<p>No errors?  You’re good.</p>
<h3 id="getamqttclientapp">Get an MQTT Client App</h3>
<p>Before we begin the next section, you might want another application handy—a standalone MQTT client.  You could try:</p>
<ul>
<li><a href="http://mqttfx.org">MQTT.fx</a> (GUI; Windows/Mac)</li>
<li><a href="http://workswithweb.com/mqttbox.html">MQTTBox</a> (GUI; Windows/Mac/Linux)</li>
<li><code>mosquitto-clients</code> from <a href="https://mosquitto.org/">Mosquitto</a> is available via package manager (CLI; Linux/Mac)</li>
<li>Various free clients on app stores (iOS/Android)</li>
<li><a href="https://nodered.org">Node-RED</a> can also connect to an MQTT broker (Web; Windows/Mac/Linux)</li>
</ul>
<p>Using one isn’t <em>strictly</em> necessary, but will aid experimentation.</p>
<h2 id="experimentingwithumqttintherepl">Experimenting with <code>umqtt</code> in the REPL</h2>
<p>If you’ve been reading closely, you’ll understand that we need an MQTT <em>broker</em> (“server”); an MQTT client with no broker is useless.</p>
<p>It just so happens that <em>public</em> MQTT brokers exist; <a href="http://test.mosquitto.org"><code>test.mosquitto.org</code></a> by the <a href="https://mosquitto.org">Mosquitto</a> project is one such broker. As a member of the public, you can use it!  Just be aware: <strong>any data or information you publish on a public MQTT broker is <em>also</em> public</strong>.  Don’t publish anything you wouldn’t want <em>everyone</em> to know about.</p>
<p>We’ll use this public broker for the purposes of the tutorial, but if you have a different one you wish to use, <em>you go ahead and do that.</em></p>
<p>Now, let’s try to use our MQTT lib to publish a message on the broker.</p>
<h3 id="createauniqueclientid">Create a Unique “Client ID”</h3>
<p>One caveat to note about MQTT: each MQTT client connected to a broker must have a unique identifier: a <em>client ID</em>.  You’ll need to pick a  phrase or generate something.  I’ll just generate one on the command line:</p>
<pre><code class="language-bash">$ python3 -c 'from uuid import uuid4; print(uuid4())'
52dc166c-2de7-43c1-88ff-f80211c7a8f6
</code></pre>
<p>Copy the resulting value to your clipboard; you’ll need it in a minute.</p>
<h3 id="connecttotherepl">Connect to the REPL</h3>
<p>Open up a serial connection to your ESP32.  I’m going to use <code>miniterm</code> here, which ships with the <code>pyserial</code> package:</p>
<pre><code class="language-bash">$ python3 -m serial.tools.miniterm --raw /dev/tty.SLAB_USBtoUART 115200
</code></pre>
<p>The <code>--raw</code> flag avoids problems with special characters such as <code>BS</code> and <code>DEL</code>.</p>
<h3 id="connecttothebroker">Connect to the Broker</h3>
<blockquote>
<p>As in the first tutorial, I’ll omit the prompt (<code>&gt;&gt;&gt;</code>) when working with the REPL.</p>
</blockquote>
<p>We should now be able to import <code>MQTTClient</code>:</p>
<pre><code class="language-python">from umqtt.simple import MQTTClient
</code></pre>
<p>The <code>MQTTClient</code> constructor accepts a client ID and a DNS or IP address of a MQTT broker.  We’ll use our pseudorandom client ID from above, and <code>test.mosquitto.org</code> for the server, then call <code>connect()</code>:</p>
<pre><code class="language-python">client = MQTTClient('52dc166c-2de7-43c1-88ff-f80211c7a8f6', 
		'test.mosquitto.org')
client.connect()
</code></pre>
<p>The output of this command, if all went well, should be <code>0</code>; <code>connect()</code> will raise an exception if the connection failed.</p>
<h3 id="connectasecondclient">Connect a Second Client</h3>
<p>At this point, I’m going to fire up <a href="http://mqttfx.org">MQTT.fx</a>; I’ll use it to subscribe to the messages which the ESP32 publishes.</p>
<p>I enter <code>test.mosquitto.org</code> in the server input field, and leave the port field at <code>1883</code>, the default (insecure) MQTT port.  I then click “Connect,” and wait for negotiation.  Here’s a screenshot of my connected client:</p>
<p><img src="https://boneskull.com/content/images/2018/01/mqttfx-connection.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>MQTT.fx connected to <code>test.mosquitto.org</code>.</small></p>
<p>I’ll come back to MQTT.fx after we learn to publish from the REPL.</p>
<h3 id="publishanmqttmessage">Publish an MQTT Message</h3>
<p>Assuming the ESP32 is now connected to the broker, you can publish messages.  First, I’ll emit a temperature in Fahrenheit, with the topic <code>boneskull/test/temperature/fahrenheit</code>:</p>
<pre><code class="language-python">client.publish('boneskull/test/temperature/fahrenheit', 72)
</code></pre>
<p>…but MicroPython complained:</p>
<pre><code class="language-plain">Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
  File &quot;umqtt/simple.py&quot;, line 112, in publish
TypeError: object of type 'int' has no len()
</code></pre>
<p>What’s the problem here?  Let me explain:</p>
<ol>
<li>An MQTT message payload could be <em>literally any data.</em>  MQTT has no notion of “data types”.  It doesn’t know what a “number” or “integer” is.  Your payload will always consist of <em>raw bytes</em>.</li>
<li>There’s no direct mapping of an integer to “bytes,” as there isn’t <em>just one way</em> to encode this number as binary data.  We don’t know if this is a <em>signed</em> or <em>unsigned</em> integer, how many bits we should use, etc.</li>
<li>The problem could have been obvious (and we could have RTFM), but MicroPython shies away from overly “friendly” APIs due to resource constraints, so it’s not obvious what’s happening here.</li>
</ol>
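<p>To see the ambiguity concretely, here’s how desktop Python’s <code>struct</code> module can render the very same integer as three different byte strings (any of which would be a “valid” payload):</p>
<pre><code class="language-python">import struct

n = 72
print(struct.pack('b', n))   # signed 8-bit: b'H'
print(struct.pack('!i', n))  # network-order signed 32-bit: b'\x00\x00\x00H'
print(str(n).encode())       # ASCII text: b'72'
</code></pre>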
<p>The easiest solution?  Publish a <code>str</code> instead:</p>
<pre><code class="language-python">client.publish('boneskull/test/temperature/fahrenheit', '72')
</code></pre>
<p>If this worked, there should be no output from the statement.</p>
<p>Hooray?  I’m not convinced—are you?  This just squirted the temperature into the ether!  We should <em>see</em> where these messages are going.  I can do that in my MQTT.fx client by <em>subscribing</em> to the topic.  This is how:</p>
<p><img src="https://boneskull.com/content/images/2018/01/mqttfx-subscribe.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>Subscribing to a topic in MQTT.fx</small></p>
<ol>
<li>Click on the “Subscribe” tab</li>
<li>Enter <code>boneskull/test/temperature/fahrenheit</code> in the input field</li>
<li>Click “Subscribe” button to the right of input field</li>
</ol>
<p>After you’ve done this, MQTT.fx will contact the broker, and if successful, you will see the subscription appear beneath the input field:</p>
<p><img src="https://boneskull.com/content/images/2018/01/mqttfx-subscribed.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>An active subscription in MQTT.fx</small></p>
<p>The next time we (or any client attached to the broker) publish on this topic, we will see it in the lower-right area of this window, which is currently grey and empty.</p>
<p>Return to your serial terminal, and run the last command again (you can just hit “up-arrow” then “enter”):</p>
<pre><code class="language-python">client.publish('boneskull/test/temperature/fahrenheit', '72')
</code></pre>
<p>Switch back to MQTT.fx.  It may take a few seconds depending on how busy the broker is, but the message should now appear to the right, along with its payload:</p>
<p><img src="https://boneskull.com/content/images/2018/01/mqttfx-received.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>A received message in MQTT.fx</small></p>
<p>Excellent work!</p>
<p>Now we can use everything we’ve learned, and periodically publish <em>real</em> temperature data.  Let’s cook up a little module to do that.</p>
<h2 id="amoduletopublishtemperature">A Module to Publish Temperature</h2>
<p>I’ve written up a little module which uses <code>MQTTClient</code> and <code>TemperatureSensor</code> (from our first tutorial) to publish temperature data.  Create <code>temperature_client.py</code>:</p>
<pre><code class="language-python">import time

from umqtt.robust import MQTTClient

from temperature import TemperatureSensor


class TemperatureClient:
    &quot;&quot;&quot;
    Represents an MQTT client which publishes temperature data on an interval
    &quot;&quot;&quot;

    def __init__(self, client_id, server, pin, fahrenheit=True, topic=None,
                 **kwargs):
        &quot;&quot;&quot;
        Instantiates a TemperatureSensor and MQTTClient; connects to the
        MQTT broker.
        Arguments `server` and `client_id` are required.

        :param client_id: Unique MQTT client ID
        :type client_id: str
        :param server: MQTT broker domain name / IP
        :type server: str
        :param pin: 1-Wire bus pin
        :type pin: int
        :param fahrenheit: Whether or not to publish temperature in Fahrenheit
        :type fahrenheit: bool
        :param topic: Topic to publish temperature on
        :type topic: str
        :param kwargs: Arguments for MQTTClient constructor
        &quot;&quot;&quot;
        self.sensor = TemperatureSensor(pin)
        self.client = MQTTClient(client_id, server, **kwargs)
        if not topic:
            self.topic = 'devices/%s/temperature/degrees' % \
                         self.client.client_id
        else:
            self.topic = topic
        self.fahrenheit = bool(fahrenheit)

        self.client.connect()

    def publishTemperature(self):
        &quot;&quot;&quot;
        Reads the current temperature and publishes it on the configured topic.
        &quot;&quot;&quot;
        t = self.sensor.read_temp(self.fahrenheit)
        self.client.publish(self.topic, str(t))

    def start(self, interval=60):
        &quot;&quot;&quot;
        Begins to publish temperature data on an interval (in seconds).
        This function will not exit! Consider using deep sleep instead.
        :param interval: How often to publish temperature data (60s default)
        :type interval: int
        &quot;&quot;&quot;
        while True:
            self.publishTemperature()
            time.sleep(interval)

</code></pre>
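<p>Before uploading, it’s worth seeing how the default topic gets chosen.  The fallback logic in <code>__init__</code> can be exercised in plain CPython—no board required.  Here’s a sketch (the <code>make_topic</code> helper is hypothetical, mirroring the constructor above):</p>
<pre><code class="language-python">def make_topic(client_id, topic=None):
    # mirrors the default-topic fallback in TemperatureClient.__init__
    if not topic:
        return 'devices/%s/temperature/degrees' % client_id
    return topic

print(make_topic('boneskull-test-1516667340'))
# devices/boneskull-test-1516667340/temperature/degrees
print(make_topic('ignored', topic='boneskull/test/temperature'))
# boneskull/test/temperature
</code></pre>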
<p>Upload this to your board:</p>
<pre><code class="language-bash">$ ampy --port /dev/tty.SLAB_USBtoUART put temperature_client.py
</code></pre>
<p>Your standalone MQTT client app should still be online.  Let’s send a message in the REPL, then view the result in the standalone client  (please create your own client ID below):</p>
<pre><code class="language-python">from temperature_client import TemperatureClient
tc = TemperatureClient('boneskull-test-1516667340',
                       'test.mosquitto.org', 12, 
                       topic='boneskull/test/temperature')
tc.start(10) # publish temperature every 10s
</code></pre>
<p>A word of warning: once you execute the above, the REPL will “hang,” since the <code>start()</code> method is just <a href="https://en.wikipedia.org/wiki/Busy_waiting">busy-waiting</a>.</p>
<blockquote>
<p>Even though this is a busy-wait, <code>time.sleep()</code> does <em>not</em> mean that &quot;nothing happens&quot;: the tick rate of the <a href="https://github.com/espressif/esp-idf">underlying operating system</a> is 10ms, so any sleep <em>equal to or less than</em> 10ms (which necessarily uses <code>time.sleep_ms()</code> or <code>time.sleep_us()</code>) <em>will</em> preempt other tasks!</p>
</blockquote>
<p>Tab back to MQTT.fx:</p>
<p><img src="https://boneskull.com/content/images/2018/01/mqttfx-success.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>Real temperature data in MQTT.fx!</small></p>
<p>This will loop indefinitely, so when ready, push the “reset” button on your dev board to get back to the REPL (you <em>don’t</em> need to quit your serial terminal beforehand).</p>
<blockquote>
<p>Important to note: the “time and date” you see in the payload detail does <em>not</em> mean “when the originating client sent the message.” Rather, it means “when the receiving client received the message.”  MQTT messages do not contain a “sent on” timestamp unless you add one yourself!</p>
<p>(To do this, you'd need to ask an <a href="https://en.wikipedia.org/wiki/Network_Time_Protocol">NTP</a> server or an external <a href="https://en.wikipedia.org/wiki/Real-time_clock">RTC</a> module, which is beyond our scope.)</p>
</blockquote>
<p>We’ve successfully published a number!  That is great news, except that the number could refer to anything.  It’d be helpful to include the unit—either Fahrenheit or Celsius—in the payload.  I’ll show you how.</p>
<h3 id="workingwithjson">Working with JSON</h3>
<p>As I’ve <em>beaten to death</em>, MQTT payloads can contain anything.  That means if you want to send structured data, <em>you</em> are responsible for serialization and deserialization.</p>
<p><a href="https://en.wikipedia.org/wiki/Json">JSON</a> is a common data interchange format for which MicroPython contains built-in support (unlike, say, that vile Arduino API).  It’s trivial to “stringify” a <code>dict</code> and publish the result.</p>
<p>To work with JSON—just like in Real Python—we will need to import another module in <code>temperature_client.py</code>:</p>
<pre><code class="language-python">import json
</code></pre>
<p>Then, add the data to the payload within the <code>publishTemperature</code> method:</p>
<pre><code class="language-python">    def publishTemperature(self):
        &quot;&quot;&quot;
        Reads the current temperature and publishes a JSON payload on the
        configured topic, e.g., `{&quot;unit&quot;: &quot;F&quot;, &quot;degrees&quot;: 72.5}`
        &quot;&quot;&quot;
        t = self.sensor.read_temp(self.fahrenheit)
        payload = dict(degrees=t)
        if self.fahrenheit:
            payload['unit'] = 'F'
        else:
            payload['unit'] = 'C'
        self.client.publish(self.topic, json.dumps(payload))
</code></pre>
<p>Notice that we didn’t need to coerce the temperature (“degrees”) into a <code>str</code> for purposes of publishing, because the JSON-encoded payload is itself a <code>str</code>—the recipient of this payload will decode the JSON back into a numeric value.</p>
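<p>To see why no coercion is necessary, here’s what the receiving end of such a payload might do—this runs in ordinary Python, and the payload value is just an example:</p>
<pre><code class="language-python">import json

# a payload like the one publishTemperature() sends
payload = json.dumps({'unit': 'F', 'degrees': 72.5})
# the wire format is a plain str
print(type(payload))  # &lt;class 'str'&gt;

# the subscriber decodes it back into structured data
data = json.loads(payload)
print(data['degrees'] + 1.0)  # 73.5: a float again, no str coercion needed
print(data['unit'])           # F
</code></pre>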
<p>Disconnect from the REPL (that’s <code>Ctrl-]</code> if you happen to be using <code>miniterm</code>), and upload <code>temperature_client.py</code> to the ESP32 again, then reconnect to the REPL.  We don’t need to begin an infinite loop to test it, since we can just call <code>publishTemperature()</code> directly:</p>
<pre><code class="language-python">from temperature_client import TemperatureClient
tc = TemperatureClient('boneskull-test-1516667340',
                       'test.mosquitto.org', 12, 
                       topic='boneskull/test/temperature')
tc.publishTemperature()
</code></pre>
<p>The above will send a single message.  On the receiving end:</p>
<p><img src="https://boneskull.com/content/images/2018/01/mqttfx-json.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>Pretty-printed JSON in MQTT.fx</small></p>
<p>If you resize your MQTT.fx window to be tall enough, you’ll see the “Payload decoded by” dropdown in the lower-right.  You can see the pretty-printed payload appears as we ‘spected.</p>
<blockquote>
<p>MQTT.fx also includes Base64 and hex decoders, but the default is “plain text”.</p>
</blockquote>
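<p>For the curious, a Base64 payload decoder just reverses a Base64 encode performed on the publishing side.  Neither this tutorial’s code nor MQTT.fx’s default does this, but the round-trip looks like so in plain Python (the payload value is only an example):</p>
<pre><code class="language-python">import base64
import json

# hypothetical: a JSON payload Base64-encoded before publishing
raw = base64.b64encode(json.dumps({'unit': 'C', 'degrees': 22.5}).encode())

# what a Base64 payload decoder does before display
decoded = json.loads(base64.b64decode(raw))
print(decoded['degrees'])  # 22.5
</code></pre>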
<p>I think you have the basics down.  But maybe you aren’t going to run your own private MQTT broker.  Let’s take this one step further and interface with an IoT platform.</p>
<h2 id="useanesp32withmicropythononibmcloud">Use an ESP32 with MicroPython on IBM Cloud</h2>
<p>Watson IoT Platform is a service in IBM Cloud (formerly Bluemix).  I’ve written a MicroPython module to interface with it, and we’ll use that to save some time.</p>
<h3 id="watsoniotplatformquickstart">Watson IoT Platform Quickstart</h3>
<p>You can experiment with this platform without needing to sign up for an account.</p>
<ol>
<li>Visit the <a href="https://quickstart.internetofthings.ibmcloud.com/#/">Quickstart</a> page:<br>
<img src="https://boneskull.com/content/images/2018/01/quickstart.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>Watson IoT Platform's Quickstart Page</small></li>
<li>Tick “I Accept” after carefully reading the entire terms of use.</li>
<li>Enter a unique device identifier in the input box.  I’m calling mine “boneskull-esp32-test”. Click “Go”.</li>
</ol>
<p>Keep this browser window open; you’re now ready to send data, and see the result in real-time.  Let’s get to it.</p>
<h3 id="uploadthemicropythonwatsoniotmodule">Upload the <code>micropython-watson-iot</code> module</h3>
<p><a href="https://github.com/boneskull/micropython-watson-iot">micropython-watson-iot</a> is the module I referenced earlier.  Its README contains installation instructions using <code>upip</code>, but essentially it’s the same as before, via the REPL:</p>
<pre><code class="language-python">import upip
upip.install('micropython-watson-iot')
</code></pre>
<p>To verify installation, run:</p>
<pre><code class="language-python">from watson_iot import Device
</code></pre>
<p>Assuming that didn’t throw an exception, we can use it like so:</p>
<pre><code class="language-python">d = Device(device_id='boneskull-esp32-test')
d.connect()
d.publishEvent('temperature', {'degrees': 68.5, 'unit': 'F'})
</code></pre>
<p>You should see it reflected in your browser.  In fact, if you do something like this…</p>
<pre><code class="language-python">import time
d.publishEvent('temperature', {'degrees': 68.5, 'unit': 'F'})
time.sleep(5)
d.publishEvent('temperature', {'degrees': 69.5, 'unit': 'F'})
time.sleep(5)
d.publishEvent('temperature', {'degrees': 67.5, 'unit': 'F'})
time.sleep(5)
d.publishEvent('temperature', {'degrees': 66.5, 'unit': 'F'})
</code></pre>
<p>…you should see a nifty line graph:</p>
<p><img src="https://boneskull.com/content/images/2018/01/iot-graph.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 2 of 2"><small>Real-time graph of our temperature data</small></p>
<blockquote>
<p>You’re welcome to play with this in more depth; Watson IoT Platform has a free tier.  To sign up, you need to:</p>
<ol>
<li><a href="https://console.bluemix.net/registration">Register with IBM Cloud</a> (no credit card needed)</li>
<li><a href="https://console.bluemix.net/catalog/services/internet-of-things-platform">Create a Watson IoT Platform service instance</a> using the “free plan” from the catalog</li>
<li>Click “Launch” to explore the platform.</li>
<li>Also, check out <a href="https://console.bluemix.net/docs/services/IoT/index.html">the docs</a>.</li>
</ol>
</blockquote>
<p>The <code>micropython-watson-iot</code> library offers a few “quality of life” benefits—as IoT platforms typically do—when compared to a vanilla MQTT client and/or broker:</p>
<ol>
<li>Messages contain metadata such as “published on” time, handled by the cloud platform</li>
<li>You can group devices via logical “device types”</li>
<li>Structured data can be automatically encoded/decoded to/from JSON (it does this by default)</li>
<li>You can create your own custom encoders and decoders (e.g., numeric, Base64)</li>
<li>You can create custom “command handlers,” which cause the device to react upon reception of a “command”-style MQTT message.  For example, you could send a command to blink an onboard LED or reboot the device.</li>
</ol>
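<p>Stripped of all MQTT plumbing, the “command handler” idea in item 5 boils down to dispatching on a command name found in the incoming message.  A minimal sketch of the pattern—in plain Python, and <em>not</em> micropython-watson-iot’s actual API—might look like:</p>
<pre><code class="language-python"># illustrative command dispatch; the handler names are made up
handlers = {}

def on_command(name):
    # register a handler for a named command
    def register(fn):
        handlers[name] = fn
        return fn
    return register

@on_command('blink')
def blink(params):
    return 'blinking %d times' % params.get('count', 1)

def dispatch(message):
    # route an incoming command-style message to its handler
    fn = handlers.get(message['command'])
    if fn is None:
        return 'unknown command: %s' % message['command']
    return fn(message.get('params', {}))

print(dispatch({'command': 'blink', 'params': {'count': 3}}))  # blinking 3 times
</code></pre>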
<blockquote>
<p>I’ve committed a few <a href="https://github.com/boneskull/micropython-watson-iot/tree/master/example">micropython-watson-iot examples</a>; you can adapt these patterns to your own code.</p>
</blockquote>
<p>There’s really <em>a lot</em> more going on here than just MQTT—dashboards and gateways and all sorts of hoodoo that I am not going to go into.  But now it’s easy to use with MicroPython on an ESP32, thanks to ME.</p>
<p>Ahem…</p>
<h2 id="recapobligatorylinkdumpgoodbyes">Recap, Obligatory Link Dump, &amp; Goodbyes</h2>
<p>In this tutorial, we’ve learned:</p>
<ol>
<li>What MQTT is (and what it’s for)</li>
<li>How to talk to an MQTT broker using MicroPython and an ESP32</li>
<li>How to publish structured data</li>
<li>How to install MicroPython libraries from PyPI via <code>upip</code></li>
<li>How to subscribe to simple topics via a standalone MQTT client</li>
<li>How to publish data to Watson IoT Platform via its <a href="https://quickstart.internetofthings.ibmcloud.com/#/">Quickstart</a> site, using <a href="https://github.com/boneskull/micropython-watson-iot">micropython-watson-iot</a></li>
</ol>
<p><a href="https://github.com/boneskull/micropython-watson-iot/blob/master/README.md">Check out the README of <code>micropython-watson-iot</code></a> for more info on usage and discussion of its limitations.</p>
<p>I’ve posted the complete example files <a href="https://gist.github.com/boneskull/1f5ae354815c6db5b1cb05ad2cb6232b">in this Gist</a> for your convenience.</p>
<p>Thanks for reading!  Extra thanks for <em>doing</em>, too.</p>
]]></content:encoded></item><item><title><![CDATA[Get on the Good Foot with MicroPython on the ESP32, Part 1 of 2]]></title><description><![CDATA[I’ll show you how to get up & running with MicroPython on the ESP32, connect to WiFi & upload scripts to the board, and read the ambient temperature.
]]></description><link>https://boneskull.com/micropython-on-esp32-part-1/</link><guid isPermaLink="false">5a4f12b0233e79218a31df0d</guid><category><![CDATA[micropython]]></category><category><![CDATA[python]]></category><category><![CDATA[esp32]]></category><category><![CDATA[tutorial]]></category><category><![CDATA[sensor]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Mon, 08 Jan 2018 16:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1503218751919-1ea90572e609?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;s=660a33f4b3c2951edd1a5a160125a4dd" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1503218751919-1ea90572e609?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&s=660a33f4b3c2951edd1a5a160125a4dd" alt="Get on the Good Foot with MicroPython on the ESP32, Part 1 of 2"><p>I’m going to show you how to <s>turn on your funk motor</s> get started with <a href="http://micropython.org">MicroPython</a> on an <a href="https://en.wikipedia.org/wiki/ESP32">Espressif ESP32</a> development board.  In this <em>first</em> part of this tutorial, I’ll show you how to:</p>
<ul>
<li>Get up &amp; running with MicroPython on the ESP32</li>
<li>Connect to WiFi</li>
<li>Upload scripts to the board</li>
<li>Read the ambient temperature (everyone loves that, right?)</li>
</ul>
<blockquote>
<p>In the forthcoming <em>second</em> part of this tutorial, I’ll show you how to publish the data you’ve collected with <a href="https://en.wikipedia.org/wiki/Mqtt">MQTT</a>.</p>
</blockquote>
<p>This guide expects you to possess:</p>
<ul>
<li>…familiarity with the command-line</li>
<li>…basic experience interfacing with development boards (like Arduino)</li>
<li>…a basic understanding of programming in <a href="https://en.wikipedia.org/wiki/Python_(programming_language)">Python</a></li>
</ul>
<p>If I’ve glossed over something I shouldn’t have, please <a href="mailto:boneskull@boneskull.com">let me know</a>!</p>
<p>Before we begin, you will need some stuff.</p>
<h2 id="billofstuff">Bill of Stuff</h2>
<p>You need Stuff in the following categories.</p>
<h3 id="hardware">Hardware</h3>
<p><img src="https://boneskull.com/content/images/2018/01/4051814934_0084607f54_z.jpg" alt="Get on the Good Foot with MicroPython on the ESP32, Part 1 of 2"><br>
<small>Not necessarily <em>this</em> stuff, but same idea.  Photo by <a href="https://flic.kr/p/7b3Bqb">Alexandra Cárdenas</a></small></p>
<ul>
<li>One (1) ESP32 development board such as the <a href="https://www.sparkfun.com/products/13907">SparkFun ESP32 Thing</a> (any kind will do; they are all roughly the same)</li>
<li>One (1) <a href="https://www.maximintegrated.com/en/products/analog/sensors-and-sensor-interface/DS18B20.html">DS18B20</a> digital thermometer (<a href="https://cdn.sparkfun.com/datasheets/Sensors/Temp/DS18B20.pdf">datasheet</a>) in its TO-92 package</li>
<li>One (1) 4.7kΩ resistor</li>
<li>Four (4) <a href="https://en.wikipedia.org/wiki/Jump_wire">jumper wires</a></li>
<li>One (1) 400-point or larger breadboard</li>
<li>One (1) USB Micro-B cable</li>
</ul>
<p><em>If you need to solder header pins on your dev board:</em> do so.</p>
<p><em>If you have a DS18B20 “breakout board”:</em> these typically have the resistor built-in, so you won’t need it.  You <em>will</em> need to figure out which pin is which, however.</p>
<h3 id="software">Software</h3>
<p>You will need to download and install some software.  Some of these things you may have installed already.  Other things may need to be upgraded.  This guide assumes you <em>ain’t got jack squat</em>.</p>
<blockquote>
<p>I apologize that I don't have much information for Windows users! <em>However</em>, I assure you that none of this is impossible.</p>
</blockquote>
<h4 id="vcpdriver">VCP Driver</h4>
<p>If you're running macOS or Windows, you may need to download and install a Virtual COM Port (VCP) driver, if you haven't done so already.  Typically, the USB-to-serial chip on these boards is a <a href="https://www.silabs.com/products/development-tools/software/usb-to-uart-bridge-vcp-drivers">CP210x</a> or <a href="http://www.ftdichip.com/Drivers/VCP.htm">FT232RL</a>; check the datasheet for your specific board or just squint at the IC near the USB port.</p>
<blockquote>
<p>Newer Linux kernels have support for these chips baked-in, so driver installation is unnecessary.</p>
</blockquote>
<p>Here's an example of a CP2104 on an ESP32 dev board of mine:</p>
<p><img src="https://boneskull.com/content/images/2018/01/cp2104-1.jpg" alt="Get on the Good Foot with MicroPython on the ESP32, Part 1 of 2"> <small>A SiLabs CP2104.  Thanks, macro lens!</small></p>
<p>To verify that the driver is working, plug your dev board into your computer.  If you’re on Linux, check for <code>/dev/ttyUSB0</code>:</p>
<pre><code class="language-bash">$ ls -l /dev/ttyUSB0
crw-rw---- 1 root dialout 188, 0 Dec 19 17:04 /dev/ttyUSB0
</code></pre>
<p>Or <code>/dev/tty.SLAB_USBtoUART</code> on macOS:</p>
<pre><code class="language-bash">$ ls -l /dev/tty.SLAB_USBtoUART
crw-rw-rw-  1 root  wheel   21,  20 Dec 19 17:10 /dev/tty.SLAB_USBtoUART
</code></pre>
<h4 id="serialterminal">Serial Terminal</h4>
<p>A free, cross-platform, GUI terminal is <a href="http://freeware.the-meiers.org/">CoolTerm</a>.  Linux &amp; macOS users can get away with using <code>screen</code> on the command line.  More purpose-built solutions include <code>miniterm</code>, which ships with pySerial and can be launched via <code>python3 -m serial.tools.miniterm</code>, and <code>minicom</code>.</p>
<h4 id="pythonetc">Python, Etc.</h4>
<p>You will also need:</p>
<ul>
<li>Python v3.6.x</li>
<li>For extra libraries, a clone or archive of <a href="https://github.com/micropython/micropython-lib">micropython/micropython-lib</a> (<code>git clone https://github.com/micropython/micropython-lib</code>)</li>
</ul>
<p>How you install these will vary per your installation of Python:</p>
<ul>
<li>To flash the board, <a href="https://pypi.python.org/pypi/esptool/2.2">esptool</a> (version 2.2 or newer)</li>
<li>To manage files on the board, <a href="https://pypi.python.org/pypi/adafruit-ampy/1.0.3">adafruit-ampy</a></li>
</ul>
<p>You could try <code>pip3 install esptool adafruit-ampy</code>.  This worked for me on macOS with <a href="http://brew.sh">Homebrew</a>; YMMV.  You might need to preface that with <code>sudo</code> if not using Homebrew.</p>
<h4 id="micropythonfirmware">MicroPython Firmware</h4>
<p>Finally, you’ll need to download the <a href="https://micropython.org/download/#esp32">latest MicroPython firmware for ESP32</a>.</p>
<p>Now that our tools are at hand, we can begin by flashing the ESP32 board with MicroPython.</p>
<h2 id="flashingmicropythonfirststeps">Flashing MicroPython &amp; First Steps</h2>
<p>Unless MicroPython is <em>already</em> installed on your ESP32, you will want to start by connecting it to your computer via USB, and erasing its flash:</p>
<blockquote>
<p>In the below examples, replace <code>/dev/tty.SLAB_USBtoUART</code> with the appropriate device or COM port for your system.</p>
</blockquote>
<pre><code class="language-bash">$ esptool.py --chip esp32 -p /dev/tty.SLAB_USBtoUART erase_flash
esptool.py v2.2
Connecting........___
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
Erasing flash (this may take a while)...
Chip erase completed successfully in 4.6s
Hard resetting...
</code></pre>
<p>Now, we can flash it with the <a href="https://micropython.org/download/#esp32">firmware</a> we downloaded earlier:</p>
<pre><code class="language-bash">$ esptool.py --chip esp32 -p /dev/tty.SLAB_USBtoUART write_flash \
  -z 0x1000 ~/Downloads/esp32-20171219-v1.9.2-445-g84035f0f.bin
esptool.py v2.2
Connecting........_
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Compressed 936288 bytes to 587495...
Wrote 936288 bytes (587495 compressed) at 0x00001000 in 51.7 seconds (effective 144.8 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting...
</code></pre>
<blockquote>
<p>If you’re feeling dangerous, you can increase the baud rate when flashing by using the <code>--baud</code> option.</p>
</blockquote>
<p>If that worked, you should be able to enter a MicroPython REPL by opening up the port:</p>
<pre><code class="language-bash"># 115200 is the baud rate at which the REPL communicates
$ screen /dev/tty.SLAB_USBtoUART 115200

&gt;&gt;&gt; 
</code></pre>
<p>Congratulations, <code>&gt;&gt;&gt;</code> is your REPL prompt.  This works similarly to a normal Python REPL (e.g. just running <code>python3</code> with no arguments). Try the <code>help()</code> function:</p>
<pre><code class="language-plain">&gt;&gt;&gt; help()
Welcome to MicroPython on the ESP32!

For generic online docs please visit http://docs.micropython.org/

For access to the hardware use the 'machine' module:

import machine
pin12 = machine.Pin(12, machine.Pin.OUT)
pin12.value(1)
pin13 = machine.Pin(13, machine.Pin.IN, machine.Pin.PULL_UP)
print(pin13.value())
i2c = machine.I2C(scl=machine.Pin(21), sda=machine.Pin(22))
i2c.scan()
i2c.writeto(addr, b'1234')
i2c.readfrom(addr, 4)

Basic WiFi configuration:

import network
sta_if = network.WLAN(network.STA_IF); sta_if.active(True)
sta_if.scan()                             # Scan for available access points
sta_if.connect(&quot;&lt;AP_name&gt;&quot;, &quot;&lt;password&gt;&quot;) # Connect to an AP
sta_if.isconnected()                      # Check for successful connection

Control commands:
  CTRL-A        -- on a blank line, enter raw REPL mode
  CTRL-B        -- on a blank line, enter normal REPL mode
  CTRL-C        -- interrupt a running program
  CTRL-D        -- on a blank line, do a soft reset of the board
  CTRL-E        -- on a blank line, enter paste mode

For further help on a specific object, type help(obj)
For a list of available modules, type help('modules')
</code></pre>
<p>If you’ve never seen this before on an MCU: <em>I know</em>, crazy, right?</p>
<p>You can type in the commands from “Basic WiFi configuration” to connect.  You will see a good deal of debugging information from the ESP32 (this can be suppressed, as you’ll see):</p>
<pre><code class="language-plain">&gt;&gt;&gt; import network
&gt;&gt;&gt; sta_if = network.WLAN(network.STA_IF)
I (323563) wifi: wifi firmware version: 111e74d
I (323563) wifi: config NVS flash: enabled
I (323563) wifi: config nano formating: disabled
I (323563) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (323573) system_api: Base MAC address is not set, read default base MAC address from BLK0 of EFUSE
I (323593) wifi: Init dynamic tx buffer num: 32
I (323593) wifi: Init data frame dynamic rx buffer num: 64
I (323593) wifi: Init management frame dynamic rx buffer num: 64
I (323603) wifi: wifi driver task: 3ffe1584, prio:23, stack:4096
I (323603) wifi: Init static rx buffer num: 10
I (323613) wifi: Init dynamic rx buffer num: 0
I (323613) wifi: Init rx ampdu len mblock:7
I (323623) wifi: Init lldesc rx ampdu entry mblock:4
I (323623) wifi: wifi power manager task: 0x3ffe84b0 prio: 21 stack: 2560
W (323633) phy_init: failed to load RF calibration data (0x1102), falling back to full calibration
I (323793) phy: phy_version: 362.0, 61e8d92, Sep  8 2017, 18:48:11, 0, 2
I (323803) wifi: mode : null
&gt;&gt;&gt; sta_if.active(True)
I (328553) wifi: mode : sta (30:ae:a4:27:d4:88)
I (328553) wifi: STA_START
True
&gt;&gt;&gt; sta_if.scan()
I (389423) network: event 1
[(b'SON OF ZOLTAR', b&quot;`\xe3'\xcf\xf4\xf5&quot;, 1, -57, 4, False), (b'CenturyLink6105', b'`1\x97%\xd9t', 1, -96, 4, False)]
&gt;&gt;&gt; sta_if.connect('SON OF ZOLTAR', '&lt;REDACTED&gt;')
&gt;&gt;&gt; I (689573) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:1
I (690133) wifi: state: init -&gt; auth (b0)
I (690133) wifi: state: auth -&gt; assoc (0)
I (690143) wifi: state: assoc -&gt; run (10)
I (690163) wifi: connected with SON OF ZOLTAR, channel 1
I (690173) network: event 4
I (691723) event: sta ip: 10.0.0.26, mask: 255.255.255.0, gw: 10.0.0.1
I (691723) network: GOT_IP
I (693143) wifi: pm start, type:0

&gt;&gt;&gt; sta_if.isconnected()
True
</code></pre>
<p>Cool, huh?</p>
<p>Now that we know we can connect to WiFi, let’s have the board connect every time it powers up.</p>
<h2 id="creatingamicropythonmodule">Creating a MicroPython Module</h2>
<p>To perform tasks upon boot, MicroPython wants you to put code in a file named <code>boot.py</code>, which is a MicroPython module.</p>
<p>Let’s create <code>boot.py</code> with code modified from <a href="http://docs.micropython.org/en/latest/esp8266/esp8266/tutorial/network_basics.html">the MicroPython ESP8266 docs</a>, replacing where indicated:</p>
<pre><code class="language-python">def connect():
    import network
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print('connecting to network...')
        sta_if.active(True)
        sta_if.connect('&lt;YOUR WIFI SSID&gt;', '&lt;YOUR WIFI PASS&gt;')
        while not sta_if.isconnected():
            pass
    print('network config:', sta_if.ifconfig())
</code></pre>
<p>We can also create a function to disable debugging output.  Append to <code>boot.py</code>:</p>
<pre><code class="language-python">def no_debug():
    import esp
    # this can be run from the REPL as well
    esp.osdebug(None)
</code></pre>
<p>These functions will be <em>defined</em> at boot, but not called automatically.  Let’s test them before making them automatically execute.</p>
<p>To do this, we can upload <code>boot.py</code>.  You’ll need to close the connection to the serial port.  If you’re using <code>screen</code>, type <code>Ctrl-A Ctrl-\</code>, then <code>y</code> to confirm; otherwise disconnect or just quit your terminal program.</p>
<h2 id="uploadingamicropythonmodule">Uploading a MicroPython Module</h2>
<p>Though there are other ways to do this, I’ve found the most straightforward for the ESP32 is to use <a href="https://github.com/adafruit/ampy">ampy</a>, a general-purpose tool by <a href="https://adafruit.org">Adafruit</a>.  Here’s what it can do:</p>
<pre><code class="language-bash">$ ampy --help

Usage: ampy [OPTIONS] COMMAND [ARGS]...

  ampy - Adafruit MicroPython Tool

  Ampy is a tool to control MicroPython boards over a serial
  connection.  Using ampy you can manipulate files on the board's
  internal filesystem and even run scripts.

Options:
  -p, --port PORT  Name of serial port for connected board.  Can
                   optionally specify with AMPY_PORT environemnt
                   variable.  [required]
  -b, --baud BAUD  Baud rate for the serial connection (default
                   115200).  Can optionally specify with AMPY_BAUD
                   environment variable.
  --version        Show the version and exit.
  --help           Show this message and exit.

Commands:
  get    Retrieve a file from the board.
  ls     List contents of a directory on the board.
  mkdir  Create a directory on the board.
  put    Put a file or folder and its contents on the...
  reset  Perform soft reset/reboot of the board.
  rm     Remove a file from the board.
  rmdir  Forcefully remove a folder and all its...
  run    Run a script and print its output.
</code></pre>
<p>MicroPython stores files (scripts or anything else that fits) in a very basic filesystem.  By default, an empty <code>boot.py</code> should exist already.  To list the files on your board, execute:</p>
<pre><code class="language-bash">$ ampy -p /dev/tty.SLAB_USBtoUART ls
boot.py
</code></pre>
<p>Using the <code>get</code> command will echo a file’s contents to your shell (which could be piped to a file, if you wish):</p>
<pre><code class="language-bash">$ ampy -p /dev/tty.SLAB_USBtoUART get boot.py
# This file is executed on every boot (including wake-boot from deepsleep)
</code></pre>
<p>We can overwrite it with our own <code>boot.py</code>:</p>
<pre><code class="language-bash">$ ampy -p /dev/tty.SLAB_USBtoUART put boot.py
</code></pre>
<p>And retrieve it to see that it overwrote the default <code>boot.py</code>:</p>
<pre><code class="language-bash">$ ampy -p /dev/tty.SLAB_USBtoUART get boot.py
def connect():
    import network
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print('connecting to network...')
        sta_if.active(True)
        sta_if.connect('&lt;YOUR WIFI SSID&gt;', '&lt;YOUR WIFI PASS&gt;')
        while not sta_if.isconnected():
            pass
    print('network config:', sta_if.ifconfig())

def no_debug():
    import esp
    # this can be run from the REPL as well
    esp.osdebug(None)
</code></pre>
<p>Success!  This is the gist of uploading files with <code>ampy</code>.  You can also upload entire folders, as we’ll see later.</p>
<p>From here, we can open our REPL again, and run our code.  No need to restart the board!</p>
<h2 id="runningamicropythonmodule">Running a MicroPython Module</h2>
<p><strong>In following examples, I will eliminate the command prompt (<code>&gt;&gt;&gt;</code>) from code run in a REPL, for ease of copying &amp; pasting.</strong></p>
<p>Re-connect to the REPL.</p>
<pre><code class="language-bash">$ screen /dev/tty.SLAB_USBtoUART 115200
</code></pre>
<p>First, we’ll disconnect from WiFi:</p>
<pre><code class="language-python">import network
sta_if = network.WLAN(network.STA_IF)
sta_if.disconnect()
</code></pre>
<p>Debug output follows:</p>
<pre><code class="language-plain">I (3299583) wifi: state: run -&gt; init (0)
I (3299583) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:1
I (3299583) wifi: pm stop, total sleep time: 0/-1688526567
I (3299583) wifi: STA_DISCONNECTED, reason:8
</code></pre>
<p>Then, we can import our <code>connect</code> and <code>no_debug</code> functions from the <code>boot</code> module:</p>
<pre><code class="language-python">from boot import connect, no_debug
connect()
</code></pre>
<p>Output:</p>
<pre><code class="language-plain">connecting to network...
I (87841) wifi: n:1 0, o:1 0, ap:255 255, sta:1 0, prof:1
I (88401) wifi: state: init -&gt; auth (b0)
I (88401) wifi: state: auth -&gt; assoc (0)
I (88411) wifi: state: assoc -&gt; run (10)
I (88441) wifi: connected with SON OF ZOLTAR, channel 1
I (88441) network: event 4
I (90081) event: sta ip: 10.0.0.26, mask: 255.255.255.0, gw: 10.0.0.1
I (90081) network: GOT_IP
network config: ('10.0.0.26', '255.255.255.0', '10.0.0.1', '10.0.0.1')
I (91411) wifi: pm start, type:0
</code></pre>
<p>Super.  Let’s silence the noise, and try again:</p>
<pre><code class="language-python">no_debug()
sta_if.disconnect()
connect()
</code></pre>
<p>Output:</p>
<pre><code class="language-plain">connecting to network...
network config: ('10.0.0.26', '255.255.255.0', '10.0.0.1', '10.0.0.1')
</code></pre>
<p>LGTM.</p>
<blockquote>
<p>The IP addresses above depend upon your local network configuration, and will likely be different.</p>
</blockquote>
<p>Disconnect from the port (if using <code>screen</code>: <code>Ctrl-A Ctrl-\</code>, <code>y</code>) and append these lines to <code>boot.py</code>:</p>
<pre><code class="language-python">no_debug()
connect()
</code></pre>
<p>Upload it again via <code>ampy put boot.py</code>, which will overwrite the existing <code>boot.py</code>.  Hard reset (“push the button”) or otherwise power-cycle the board.  Reconnect to the REPL and execute <code>connect()</code> to verify connectivity:</p>
<pre><code class="language-python">connect()
</code></pre>
<p>Output:</p>
<pre><code class="language-plain">network config: ('10.0.0.26', '255.255.255.0', '10.0.0.1', '10.0.0.1')
</code></pre>
<p>You’ll notice “connecting to network...” was not printed to the console: if already connected, the <code>connect()</code> function prints the configuration and returns.  If you’ve gotten this far, then your board is successfully connecting to WiFi at boot.  Good job!</p>
<p>We now have two more items to check off our list, unless you forgot what we were trying to do:</p>
<ol>
<li>We need to read the ambient temperature on an interval.</li>
<li>We need to publish this information to an MQTT broker.</li>
</ol>
<p>Next, we’ll knock out that temperature reading.</p>
<h2 id="temperaturereadingsinmicropython">Temperature Readings in MicroPython</h2>
<p>As we write our code, we can use the REPL to experiment.</p>
<p>I’m using the example <a href="https://docs.micropython.org/en/latest/esp8266/esp8266/tutorial/onewire.html#controlling-1-wire-devices">found here</a>.  You’ll need to import three (3) modules, <code>machine</code>, <code>onewire</code> and <code>ds18x20</code> (note the <code>x</code>):</p>
<pre><code class="language-python">import machine, onewire, ds18x20
</code></pre>
<p>I’ve connected my sensor to pin 12 on my ESP32.  Your breadboard should look something like this:</p>
<p><img src="https://boneskull.com/content/images/2018/01/esp32-ds18b20_bb.png" alt="Get on the Good Foot with MicroPython on the ESP32, Part 1 of 2"><br>
<small>Example breadboard wiring for ESP32 dev board and DS18B20</small></p>
<p>To read temperature, we will create a <a href="https://en.wikipedia.org/wiki/Matryoshka_doll">Matryoshka-doll</a>-like object by passing a <code>Pin</code> instance into a <code>OneWire</code> constructor (read about <a href="https://en.wikipedia.org/wiki/1-Wire">1-Wire</a>) and finally into a <code>DS18X20</code> constructor:</p>
<pre><code class="language-python">pin = machine.Pin(12)
wire = onewire.OneWire(pin)
ds = ds18x20.DS18X20(wire)
</code></pre>
<blockquote>
<p>Note that if the output of the following command is an empty list (<code>[]</code>), the sensor couldn't be found.  Check your wiring!</p>
</blockquote>
<p>Now, we can ask <code>ds</code> to scan for connected devices, and return their addresses:</p>
<pre><code class="language-python">ds.scan()
</code></pre>
<p>Output:</p>
<pre><code class="language-plain">[bytearray(b'(\xee3\x0c&quot;\x15\x004')]
</code></pre>
<p><code>ds.scan()</code> returns a <code>list</code> of device addresses in <code>bytearray</code> format.  Yours may look slightly different.  Since we only have one device, we can save its address to a variable.  To read temperature data, we tell every sensor on the bus to begin a conversion via <code>ds.convert_temp()</code>, then pause at least 750ms for the conversion to complete before reading the result:</p>
<pre><code class="language-python">import time
addr = ds.scan().pop()
ds.convert_temp()
time.sleep_ms(750)
temp = ds.read_temp(addr)
temp
</code></pre>
<p>Output:</p>
<pre><code class="language-plain">19.875
</code></pre>
<p>This reading is in Celsius.  If you’re like me, you don’t speak Celsius, so maybe you want to convert it to Fahrenheit:</p>
<pre><code class="language-python">(temp * 1.8) + 32
</code></pre>
<p>Output:</p>
<pre><code class="language-plain">67.775
</code></pre>
<p>…which is right around what I expected!</p>
<p>Let’s take what we’ve done and create a new file, <code>temperature.py</code>:</p>
<pre><code class="language-python">import time
from machine import Pin
from onewire import OneWire
from ds18x20 import DS18X20


class TemperatureSensor:
    &quot;&quot;&quot;
    Represents a Temperature sensor
    &quot;&quot;&quot;
    def __init__(self, pin):
        &quot;&quot;&quot;
        Finds address of single DS18B20 on bus specified by `pin`
        :param pin: 1-Wire bus pin
        :type pin: int
        &quot;&quot;&quot;
        self.ds = DS18X20(OneWire(Pin(pin)))
        addrs = self.ds.scan()
        if not addrs:
            raise Exception('no DS18B20 found at bus on pin %d' % pin)
        # save what should be the only address found
        self.addr = addrs.pop()

    def read_temp(self, fahrenheit=True):
        &quot;&quot;&quot;
        Reads temperature from a single DS18X20
        :param fahrenheit: Whether or not to return value in Fahrenheit
        :type fahrenheit: bool
        :return: Temperature
        :rtype: float
        &quot;&quot;&quot;
        self.ds.convert_temp()
        time.sleep_ms(750)
        temp = self.ds.read_temp(self.addr)
        if fahrenheit:
            return self.c_to_f(temp)
        return temp

    @staticmethod
    def c_to_f(c):
        &quot;&quot;&quot;
        Converts Celsius to Fahrenheit
        :param c: Temperature in Celsius
        :type c: float
        :return: Temperature in Fahrenheit
        :rtype: float
        &quot;&quot;&quot;
        return (c * 1.8) + 32

</code></pre>
<p>Disconnect from the REPL.  Upload <code>temperature.py</code> via <code>ampy</code>:</p>
<pre><code class="language-bash">$ ampy -p /dev/tty.SLAB_USBtoUART put temperature.py
</code></pre>
<p>Then we can open our REPL once again, and try it:</p>
<pre><code class="language-python">from temperature import TemperatureSensor
t = TemperatureSensor(12)
t.read_temp() # use t.read_temp(False) to return Celsius
</code></pre>
<p>Seems to have warmed up a bit.  Output:</p>
<pre><code class="language-plain">68.7875
</code></pre>
<p>Good work!</p>
<h2 id="conclusionofpartone1">Conclusion of Part One (1)</h2>
<p>In the first part of this tutorial, we’ve learned how to:</p>
<ol>
<li>Flash an ESP32 dev board with MicroPython</li>
<li>Use MicroPython’s REPL to experiment</li>
<li>Connect the ESP32 to WiFi</li>
<li>Upload and execute MicroPython scripts</li>
<li>Read the temperature with a 1-Wire DS18B20 sensor</li>
</ol>
<p>In the forthcoming <em>second</em> part of this tutorial, we’ll learn about MQTT, how to publish our temperature data to an MQTT broker, and likewise interface with an MQTT-based cloud “IoT platform”.</p>
]]></content:encoded></item><item><title><![CDATA[From millis() to MicroPython: Arduino for Web Developers]]></title><description><![CDATA[One web developer's journey into high-level languages and hobby electronics via Arduino, Johnny-Five, ESP8266, NodeMCU, MicroPython and the ESP32. ]]></description><link>https://boneskull.com/from-millis-to-micropython/</link><guid isPermaLink="false">59e67a62cf87f519127cc175</guid><category><![CDATA[python]]></category><category><![CDATA[micropython]]></category><category><![CDATA[esp8266]]></category><category><![CDATA[node.js]]></category><category><![CDATA[arduino]]></category><category><![CDATA[esp32]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Thu, 30 Nov 2017 14:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1508483615040-ef698af7730e?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;s=58d035ef49b6c4021a0ee5fb67c3c779" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1508483615040-ef698af7730e?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&s=58d035ef49b6c4021a0ee5fb67c3c779" alt="From millis() to MicroPython: Arduino for Web Developers"><p>In late 2013, I received a <a href="https://www.sparkfun.com/products/13969">SparkFun Inventor’s Kit</a> as a non-denominational holiday present from a coworker.  Without hyperbole, <em>this was the best present I’ve ever received</em> (thanks Anthony).</p>
<p>Before this, I knew <em>nothing</em> of electronics, hardware, or firmware.  The <a href="https://www.sparkfun.com/products/13975">SparkFun RedBoard</a> (an <a href="https://store.arduino.cc/arduino-uno-rev3">Arduino Uno</a>-compatible device) and its guidebook exposed to me the vast world of physical computing.  Fascinated with the possibilities, I soon found myself a dedicated hobbyist.</p>
<p><img src="https://boneskull.com/content/images/2017/11/sparkfun-redboard.jpg" alt="From millis() to MicroPython: Arduino for Web Developers"><small>My introduction to hardware: The SparkFun RedBoard</small></p>
<p>As a “full stack” web developer by trade, my background is in higher-level languages—mainly JavaScript and Python.  I was ripping out my <em>luxuriant locks</em> writing anything more than the most basic “sketch” in C/C++. <em>Granted</em>, the easy stuff was easy.  But it’s like <strong>putting lipstick on a pig</strong>.  Just beneath the façade of <code>digitalWrite</code> and <code>void loop</code> is the <em>seamy underbelly</em> of C/C++, wallowing in its own <em>filth</em>. 🐷</p>
<p>Stepping out of my comfort zone into physical computing was <em>more than enough</em> of a challenge.  I could certainly <em>get by</em>—since 2013, I’ve written more C/C++ than I had in the previous twelve years.  And I realize a “close to the metal” language is often necessary for embedded systems.  But it wasn’t <em>fun</em>.</p>
<p>What follows is the story of how <em>this</em> web developer started to have <em>fun</em> writing firmware.  YMMV.</p>
<h2 id="programminganarduinousingjohnnyfivenodejs">Programming an Arduino Using Johnny-Five &amp; Node.js</h2>
<p>In the name of “fun”, I sought out a higher-level language.  I found <a href="http://nodebots.io/">NodeBots</a> and Rick Waldron’s <a href="http://johnny-five.io">Johnny-Five</a> project.</p>
<p>Opening a <a href="https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop">REPL</a> w/ Johnny-Five and interacting with the hardware &quot;live&quot; was more than the Arduino IDE could offer (not saying much! zing!).</p>
<p>There's a catch, of course.  J5 requires a serial connection or a board which can run <a href="https://nodejs.org">Node.js</a> natively.  A tall order to be sure: this requires a <a href="https://nodejs.org/en/download/">supported OS</a> and considerable memory resources.  In other words, <em>not</em> an 8-bit MCU like the Uno’s <a href="http://www.microchip.com/wwwproducts/en/ATmega328P">ATmega328p</a>.</p>
<p>A Linux-capable device satisfies requirements for many projects.  For example, J5 is an excellent choice when working with GPIO on a <a href="https://raspberrypi.org">Raspberry Pi</a> or <a href="https://tessel.io">Tessel</a>.  It’s “just Node.js”—with excellent documentation and examples, right down to the <a href="http://fritzing.org">Fritzing</a> diagrams—and for anyone familiar with JavaScript, it’s trivial to learn (and not much more difficult to hack on).</p>
<p>Meanwhile, in the summer of ’14, while I was busy hacking on Johnny-Five, an unknown manufacturer out of Shanghai jumped into the pool with a tiny <a href="https://en.wikipedia.org/wiki/System_on_a_chip">SoC</a>—and it hit the water like a cannonball.</p>
<h2 id="theesp8266haslanded">The ESP8266 Has Landed</h2>
<p><a href="https://hackaday.com/2014/08/26/new-chip-alert-the-esp8266-wifi-module-its-5/">This Hackaday post</a> was the first introduction for many to the ESP-01 module, built with the <a href="https://en.wikipedia.org/wiki/ESP8266">ESP8266</a> MCU from <a href="https://espressif.com/">Espressif</a>.  Its price point was—and still is—<em>profoundly</em> lower than any other WiFi-capable SoC.</p>
<blockquote>
<p>Texas Instruments’ <a href="http://www.ti.com/product/CC3100">CC3000</a>, introduced in 2013, was likely the  lowest-cost module available at the time.  It gained a <a href="http://forum.espruino.com/conversations/918/">reputation</a> as an unreliable pest—and even today, you’re looking at <a href="https://www.digikey.com/product-detail/en/CC3000BOOST/296-35617-ND/3854678?utm_campaign=buynow&amp;WT.z_cid=ref_octopart_dkc_buynow&amp;utm_medium=aggregator&amp;curr=usd&amp;site=us&amp;utm_source=octopart">nearly $40</a> USD for an obsolete eval board.</p>
</blockquote>
<p>The ESP8266 threw the doors open for hobbyists to dabble in IoT and home automation.  This thing is now everywhere.  Better yet, the ESP8266 has since proven to be a highly <a href="https://tech.scargill.net/esp8266-lessons-learned/">reliable</a> module.</p>
<blockquote>
<p>Johnny-Five <em>can</em> drive an ESP8266 via WiFi (with some effort), but the device will still be tethered, and will fail if the connection is interrupted.</p>
</blockquote>
<p>At first, it was a bit of a curiosity.  In 2014, docs or educational resources in English were nearly nonexistent.  We'd wire the ESP-01 module to an Arduino, and use it as a serial bridge over WiFi, using Hayes-like <a href="https://en.wikipedia.org/wiki/Hayes_command_set"><code>AT</code> commands</a> to control its behavior.</p>
<pre><code class="language-arduino">// example lifted from https://github.com/alokdhari/ESP8266

#include &lt;SoftwareSerial.h&gt;

SoftwareSerial mySerial(10, 11); // RX, TX

void setup() {
  // Open serial communications and wait for port to open:
  Serial.begin(9600);
  while (!Serial) {
    // busy wait!
  }


  Serial.println(&quot;Hello World&quot;);

  // set the data rate for the SoftwareSerial port
  mySerial.begin(9600);
  callWifiBoardWithCommand(&quot;AT&quot;, 1000);
  callWifiBoardWithCommand(&quot;AT+CWJAP=\&quot;MY-SSID\&quot;,\&quot;MY-PASSWORD\&quot;&quot;, 5000);
  callWifiBoardWithCommand(&quot;AT+CIPSTATUS&quot;, 1000);
  callWifiBoardWithCommand(&quot;AT+CIPSTART=\&quot;TCP\&quot;,\&quot;www.boneskull.com\&quot;,80&quot;, 4000);
  String cmd = &quot;GET /index.html HTTP/1.1\r\nHost: www.boneskull.com\r\n\r\n&quot;;
  
  callWifiBoardWithCommand(&quot;AT+CIPSEND=&quot; + String(cmd.length()), 0);
 
  callWifiBoardWithCommand(cmd, 0);
  
  while(mySerial.available()){
    if(mySerial.find('O')){
      mySerial.println(&quot;+IPD&quot;);
    }
  }
}

void loop() { // run over and over
  if (mySerial.available()) {
    Serial.write(mySerial.read());
  }
  if (Serial.available()) {
    mySerial.write(Serial.read());
  }
}

void callWifiBoardWithCommand(String command, int waitFor)
{
  delay(1000);
  mySerial.println(command);
  delay(waitFor);
  while(mySerial.available()){
    Serial.write(mySerial.read());
  }  
}
</code></pre>
<p><em>Good grief</em>.</p>
<p>We knew the ESP8266’s hardware was certainly <em>capable</em> of use as a project’s main controller, but the software wasn’t yet ready.</p>
<p>Mercifully, we wouldn't have to wait long.</p>
<h2 id="nodemcumakesprogress">NodeMCU Makes Progress</h2>
<p>Fast-forward some months (IIRC), and a native SDK appeared from Espressif.  We started seeing the potential of the ESP8266, instead of just providing serial-over-WiFi to a micro.</p>
<p>This native SDK powered an ambitious project, <a href="https://github.com/nodemcu/nodemcu-firmware">NodeMCU</a>.  It drove the ESP8266 by way of <a href="https://en.wikipedia.org/wiki/Lua_(programming_language)">Lua</a>, a scripting language often found in moddable video games like <a href="https://en.wikipedia.org/wiki/World_of_Warcraft">WoW</a>.</p>
<p>It had an associated development board (also confusingly called “NodeMCU”), which provided easy flashing via a USB port; before the NodeMCU board, ESP8266-based devices were all simple, breadboard-hostile modules:</p>
<p><img src="https://boneskull.com/content/images/2017/11/ESP-01.jpg" alt="From millis() to MicroPython: Arduino for Web Developers"> <small>An ESP-01 needs a breadboard adapter, as the pins are all adjacent</small></p>
<p>They remained inhospitable even if you <em>did</em> manage to get them onto your breadboard:</p>
<p><img src="https://boneskull.com/content/images/2017/11/Flashing_Circuit_Schematic.jpg" alt="From millis() to MicroPython: Arduino for Web Developers"> <small>This circuit was really fiddly and tedious to flash</small></p>
<p>My early attempt at using an ESP-07 (and its adapter) to transmit temperature from an DS18B20 (<a href="https://datasheets.maximintegrated.com/en/ds/DS18B20.pdf">datasheet</a>) required considerable effort:</p>
<p><img src="https://boneskull.com/content/images/2017/11/21948021785_2c2a015130_o.jpg" alt="From millis() to MicroPython: Arduino for Web Developers"> <small>Four pins exposed for flashing &amp; serial comms; the jumper switches flashing mode</small></p>
<p>The NodeMCU board, on the other hand, was a complete package:</p>
<p><img src="https://boneskull.com/content/images/2017/11/Nodemcu_amica_bot_02.png" alt="From millis() to MicroPython: Arduino for Web Developers"> <small>Note USB Micro-B receptacle, reset button, onboard passive components</small></p>
<p>While the NodeMCU board lives on in countless clones, contributions to NodeMCU-the-project waned.  The community (and Espressif, which was wise to embrace open source) directed resources into the <a href="https://github.com/esp8266/arduino">Arduino on ESP8266</a> project, which provided a familiar framework and allowed firmware authors to leverage the existing Arduino ecosystem.  Once this Arduino core became stable, many libraries “just worked” out-of-the-box, and those which didn't were easily ported via C preprocessor macros.</p>
<blockquote>
<p>Lua isn’t a “bad” language by any means—just relatively obscure.  I found the main drawback of the NodeMCU project was its small community; if the official project didn’t support a certain sensor or other component, you’d likely have to write that library yourself.</p>
</blockquote>
<p>While the ESP8266 community buzzed, another project out of the UK slowly gathered momentum: <a href="https://micropython.org">MicroPython</a>.</p>
<h2 id="micropythongetscrowdfunded">MicroPython Gets Crowdfunded</h2>
<p>MicroPython is a Python 3 (-based) runtime designed for deployment on microcontrollers.  Its genesis was a 2013 <a href="https://www.kickstarter.com/projects/214379695/micro-python-python-for-microcontrollers">Kickstarter campaign</a> featuring the <a href="https://micropython.org/">PyBoard</a>.</p>
<p>After the success of the original campaign (which eventually allowed its creator, <a href="http://dpgeorge.net/">Damien George</a>, to work on MicroPython full-time), Mr. George launched another in early 2016, <a href="https://www.kickstarter.com/projects/214379695/micropython-on-the-esp8266-beautifully-easy-iot">targeting our revered ESP8266</a>.  This well-timed campaign succeeded, despite the “reward” being little more than open-source software!</p>
<p>By the time it landed, many hackers already had ESP8266 boards in their clutches.  Now that the first port of the framework was public, it was just a matter of time before the community extended MicroPython support to other platforms:</p>
<ul>
<li><a href="https://github.com/adafruit/circuitpython">Adafruit ported it</a> to Atmel's ARM Cortex-M0+-based SAM D processors by way of a fork</li>
<li>The Micro:bit Educational Foundation (presumably) ported to the <a href="https://github.com/bbcmicrobit/micropython">BBC micro:bit</a>, which runs Nordic's popular <a href="https://www.nordicsemi.com/eng/Products/Bluetooth-low-energy/nRF51822">nRF51822</a> SoC for BLE</li>
<li>Pretty much my favorite: <a href="https://pycom.io">PyCom</a> helped develop <a href="https://github.com/micropython/micropython-esp32">a port</a> for Espressif's newer SoC, the <a href="https://en.wikipedia.org/wiki/ESP32">ESP32</a></li>
<li>A few others I'm unfamiliar with (please comment if you've used them!)</li>
</ul>
<p>MicroPython works well on ESP8266-based boards, as they are reasonably speedy at 80MHz.  However, the ESP8266 has only 160KiB of SRAM, less 64KiB for the bootloader, less MicroPython's <em>own</em> overhead.  You're really going to be squeezed here.  Speaking from experience, it’s easy to run out of memory via object allocation, resources left open, or even too many lines of code—via <code>import</code> statements or otherwise!</p>
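<p>With so little headroom on the ESP8266, it helps to keep an eye on the heap while experimenting at the REPL.  Here’s a minimal sketch; note that <code>gc.mem_free()</code> is a MicroPython-only API (CPython’s <code>gc</code> module lacks it), so the guard is only there to keep the snippet portable:</p>
<pre><code class="language-python">import gc

def free_heap():
    # Return free heap bytes on MicroPython, or None under CPython,
    # whose gc module has no mem_free()
    gc.collect()  # collect first so the figure reflects live objects only
    if hasattr(gc, 'mem_free'):
        return gc.mem_free()
    return None

print(free_heap())
</code></pre>
<p>Calling this before and after an <code>import</code> is a quick way to see what a module actually costs you in RAM.</p>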
<p>While ESP8266-based dev boards are cheap and plentiful (a good one can be purchased direct from China for under $3 USD), for a few dollars more—under $7 USD—you can get your meathooks on an ESP32-based board.</p>
<p>This is important because <strong>the ESP32 runs MicroPython like a boss</strong>.</p>
<h2 id="micropythonportedtotheesp32">MicroPython Ported to the ESP32</h2>
<p>The ESP32 isn’t a “successor” to the ESP8266; rather, it’s a step up: a SoC for more demanding applications.</p>
<p>From the chart below, you can see it's a far more capable device.  But the big differentiator for us is the available RAM, which I've circled:</p>
<p><img src="https://boneskull.com/content/images/2017/11/esp8266-vs-esp32.png" alt="From millis() to MicroPython: Arduino for Web Developers"> <small>It's all about the RAM.</small></p>
<p>This gives users of MicroPython much more flexibility; in fact, this is 320KiB more RAM than the PyBoard, which MicroPython was originally designed for!</p>
<p>In practice, this means you can stop worrying (as much) about how many modules you're <code>import</code>-ing, how many objects you're allocating, etc.</p>
<p>Is this <em>overkill</em>?  Is this <em>wasteful</em>?  Is it <em>rad</em>?  Yes, yes, and yes.</p>
<blockquote>
<p>As a library author, writing MicroPython for the ESP32 means you have the luxury of implementing better abstractions.  Without the extra RAM to play with, low-level APIs and cut corners are the name of the game.</p>
</blockquote>
<p>With the ESP32, hardware won’t be holding you back from using MicroPython.  <em>That said</em>, MicroPython and its ecosystem has “quirks” that Pythonistas need to understand before digging in.  I’ll cover a few below.</p>
<h2 id="micropythonhaspeculiarities">MicroPython Has Peculiarities</h2>
<p>If you are hoping to pull a random module off of <a href="https://pypi.org">PyPI</a> and just <code>import</code> it, steel yourself for disappointment.  I have yet to find non-trivial code written for Python 3 that “just works” in MicroPython.  I ran into four (4) differences which were particularly problematic for Pythonic portability:</p>
<ol>
<li>MicroPython does not implement the entire Python standard library.  For example, much of <code>sys</code> doesn’t make sense in this context.  Metaprogramming tools like <code>typing</code> or <code>abc</code> don’t exist.  This doesn’t necessarily mean you can’t implement them yourself!</li>
<li>Of the standard library it <em>does</em> implement, the modules are prefixed with a <code>u</code> (e.g. <code>ujson</code> instead of <code>json</code>).</li>
<li>You cannot chain exceptions in MicroPython.  This means you cannot catch an exception and re-<code>raise</code> it as a different one.  All you can do with <code>try</code>/<code>except</code> is <em>eat</em> exceptions.</li>
<li>Subclassing builtins doesn’t always work the way you’d expect.</li>
</ol>
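<p>A common way to paper over that second difference in code shared between CPython and MicroPython is a guarded import.  This is a sketch rather than an official idiom (some MicroPython ports also alias the unprefixed names):</p>
<pre><code class="language-python"># Prefer MicroPython's u-prefixed module; fall back to CPython's stdlib
try:
    import ujson as json
except ImportError:
    import json

# Either way, the rest of the code uses the familiar name
payload = json.dumps({'temp_c': 19.875})
print(payload)
</code></pre>
<p>The same pattern works for <code>utime</code>, <code>uos</code>, and the rest of the <code>u</code>-prefixed modules.</p>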
<p>In addition, the ecosystem is immature.  MicroPython lacks official guidelines and best practices for module authors.  Installation of modules on devices involves manually copying files around.  Fragmentation may become more of an issue due to various forks.</p>
<p>Better tooling and best practices will come in time, but we can still <em>get things done</em> and <em>have fun doing it</em> with MicroPython.  I can’t wait to rewrite a bunch of my custom firmware with it.  Not joking.</p>
<p>In my next post on the subject, I’ll walk through connecting a MicroPython-laden ESP32 to an MQTT broker.  Just keep hitting refresh.</p>
]]></content:encoded></item><item><title><![CDATA[Mocha v4 Nears Release]]></title><description><![CDATA[Mocha's next major release has breaking changes--see if your tests will be affected.]]></description><link>https://boneskull.com/mocha-v4-nears-release/</link><guid isPermaLink="false">59cdd83f077f7c45a43f00e5</guid><category><![CDATA[node.js]]></category><category><![CDATA[mocha]]></category><category><![CDATA[web]]></category><category><![CDATA[testing]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Fri, 29 Sep 2017 05:21:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1479568070344-4b8d663692b9?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;s=f259cb9f6897be0abe40040711dfaa6e" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1479568070344-4b8d663692b9?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&s=f259cb9f6897be0abe40040711dfaa6e" alt="Mocha v4 Nears Release"><p><a href="https://github.com/mochajs/mocha">Mocha</a> v4.0.0 is nearing release. With this new version also comes the obligatory <strong>breaking changes</strong>, and I'll enumerate them below.</p>
<p><strong>UPDATE (April 18, 2018):</strong> Mocha v4 was <a href="https://github.com/mochajs/mocha/releases/tag/v4.0.0">released</a> on October 2, 2017.</p>
<h2 id="mochawillnolongersupportnodejsprev400">Mocha Will No Longer Support Node.js Pre-v4.0.0</h2>
<p>There are several reasons for this:</p>
<ol>
<li>These versions of Node.js are no longer maintained and (some) have known security vulnerabilities <em>which will not receive official patches</em>.  Users of these versions should upgrade Node.js.  For example, v0.10.0 and v0.12.0 have been end-of-lifed since October and December 2016, respectively.</li>
<li>Mocha's <em>own</em> dependencies have already dropped support for these platforms, so Mocha cannot address security vulnerabilities or deliver critical bug fixes in a reasonable manner or timeframe.</li>
<li>Mocha's development environment can no longer run out-of-the-box in a pre-v4.0.0 environment.</li>
</ol>
<p>The following unmaintained versions of Node.js <strong>will no longer be supported</strong> by Mocha:</p>
<ul>
<li>v0.10.0</li>
<li>v0.11.0</li>
<li>v0.12.0</li>
<li>iojs</li>
<li>v5.x</li>
</ul>
<blockquote>
<p>Node.js v5.x is <em>likely</em> to work as long as Node.js v4.x does, but will not be in Mocha's <a href="https://travis-ci.org/mochajs/mocha">build matrix</a>.</p>
</blockquote>
<p>It's important to note that Mocha v3.x <em>will still work</em> on these platforms, but users can no longer expect upgrades.</p>
<blockquote>
<p>For further reading, see <a href="https://medium.com/@eranhammer/on-being-operationally-incompetent-4ca4fbccbf98">this article by Eran Hammer</a> and also the <a href="https://github.com/nodejs/Release#readme">Node.js Release Schedule</a>.</p>
</blockquote>
<p>The above change <em>also</em> means:</p>
<h2 id="mochawillnolongersupportnpmolderthanv21511">Mocha Will No Longer Support <code>npm</code> Older Than v2.15.11</h2>
<p>Reasoning:</p>
<ul>
<li><code>npm</code> v2.15.11 is the version which shipped with Node.js v4.0.0.</li>
<li>Older versions didn't support scoped packages (<code>@foo/bar</code>) nor the caret (<code>^</code>) semver range specifier, which made it virtually impossible to include production dependencies using <em>either</em> of these features.</li>
</ul>
<p>Users are encouraged to upgrade to the latest version of <code>npm</code>.</p>
<p>Users of <a href="https://bower.io">Bower</a> are <em>also</em> encouraged to upgrade, because:</p>
<p><img src="https://images.unsplash.com/photo-1430825803925-53e62bb14db1?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;s=4d6bbdad8bd4766dcdec5216637a41b4" alt="Mocha v4 Nears Release"><br>
<small><em>Bower rides into the sunset.</em>  Photo by <a href="https://unsplash.com/@alwig64?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Alex Wigan</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<h2 id="mochawillnolongersupportbower">Mocha Will No Longer Support Bower</h2>
<blockquote>
<p><s>This may or may not make it into v4.</s></p>
<p><strong>UPDATE (April 18, 2018):</strong> This made it.</p>
</blockquote>
<p>Bower is an excellent tool to install front-end dependencies.  However, over the years, more robust solutions have evolved.  Its time has passed.</p>
<p>Mocha v4 will remove its &quot;browser bundle&quot; (<code>mocha.js</code>) from version control.  This means <code>bower</code> will not install Mocha v4 out-of-the-box.  There may be a workaround by means of a &quot;<a href="https://www.npmjs.com/package/bower-npm-resolver">resolver</a>&quot;, but YMMV.</p>
<h3 id="awarningtothosebundlingmocha">A Warning To Those Bundling Mocha</h3>
<p>This change <em>may also</em> impact those users bundling Mocha themselves via <code>browserify</code>, <code>webpack</code>, etc.  The <code>browser</code> field will now point to <code>mocha.js</code>, which is <em>already</em> bundled.  You may no longer need to bundle yourself, <em>or</em> you may need to find a different way to do it.  <em>If you cannot find a reasonable workaround, please open an issue on <a href="https://github.com/mochajs/mocha/issues/new">our tracker</a>.</em></p>
<p>Finally, my favorite:</p>
<h2 id="mochawillnolongersupportnones5compliantenvironments">Mocha Will No Longer Support Non-ES5-Compliant Environments</h2>
<p>Mocha is likely the last major actively maintained testing framework to officially support these browsers.</p>
<p>While Mocha's maintainers have proudly sustained the <em>poor, piteous developers</em> who have such business (or government!) requirements, the overhead of retaining compatibility has become an albatross.</p>
<p>Much like supporting older versions of Node.js, this has severely limited the project's agility.  Mocha should shed some weight after this, as it contains a plethora of handrolled shims.</p>
<p>The following browser environments <strong>will no longer be supported</strong> in Mocha v4:</p>
<ul>
<li>Internet Explorer 7</li>
<li>Internet Explorer 8</li>
<li>PhantomJS 1.x</li>
</ul>
<p>API and output changes follow.</p>
<h2 id="otherbreakingchanges">Other Breaking Changes</h2>
<p><strong>Read this</strong> to save yourself some time.</p>
<h3 id="mochawontforceexit">Mocha Won't Force Exit</h3>
<blockquote>
<p>This may or may not make it into v4.</p>
<p><strong>UPDATE (April 18, 2018):</strong> This made it.</p>
</blockquote>
<p>To avoid false positives and encourage better testing practices, Mocha will no longer <a href="https://github.com/mochajs/mocha/issues/2879">automatically kill itself</a> via <code>process.exit()</code> when it thinks it should be done running.</p>
<p>If the <code>mocha</code> process is still alive after your tests seem &quot;done&quot;, then your tests have scheduled something to happen (asynchronously) and <em>haven't cleaned up after themselves properly</em>.  Did you leave a socket open?</p>
<p>Supply the <code>--exit</code> flag to use pre-v4 behavior.</p>
<p>If you're having trouble figuring out where the hangup is, <a href="https://www.npmjs.com/package/wtf-node">this package</a> might help.</p>
<p><strong>UPDATE (April 17, 2018):</strong> I changed the link above; I've since had better success using <a href="https://npm.im/wtfnode">wtfnode</a> to debug this class of problems.  Be sure to run it against the <code>_mocha</code> executable (not <code>mocha</code>)!</p>
<p><strong>UPDATE (April 23, 2018):</strong> Fixed link to <code>wtfnode</code>.</p>
<h3 id="output">Output</h3>
<p>There will be a few changes to reporters, which may negatively affect projects consuming Mocha's output directly:</p>
<ul>
<li>The &quot;unified diff&quot; <a href="https://github.com/mochajs/mocha/issues/2295">will now contain separators</a>, as it has been difficult to read.</li>
<li>Upon error, the test contexts <a href="https://github.com/mochajs/mocha/pull/2814">will be indented</a> instead of smooshed into a single line.</li>
</ul>
<h2 id="forward">Forward!</h2>
<p>By shedding support for older environments, Mocha becomes more nimble, and positions itself to leverage the present and future innovations of Node.js and JavaScript.</p>
]]></content:encoded></item><item><title><![CDATA[DIY Object Recognition with Raspberry Pi, Node.js, & Watson]]></title><description><![CDATA[A glorious thing nowadays is that you needn't be an AI researcher to leverage machine learning.  

Let's roll our own custom object recognition solution with Raspberry Pi, Node.js, and Watson.]]></description><link>https://boneskull.com/diy-object-recognition/</link><guid isPermaLink="false">59b701fad4838e2a6964d95f</guid><category><![CDATA[node.js]]></category><category><![CDATA[watson]]></category><category><![CDATA[raspberry-pi]]></category><category><![CDATA[ai]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Tue, 12 Sep 2017 15:00:00 GMT</pubDate><media:content url="https://boneskull.com/content/images/2017/09/IMG_3995.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2017/09/IMG_3995.jpg" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"><p>A glorious thing nowadays is that you needn't be an AI researcher nor have expensive hardware to leverage machine learning in your projects.</p>
<p>Granted, a domain-specific design will net greater benefits in the long run.  Yet, until recently, a general-purpose, off-the-shelf solution wasn't easily consumable by your average developer (that's me).  Nor was such a monster available—by virtue of APIs—to resource-constrained devices.</p>
<p>Below, I'll introduce the reader (that's you) to API-based object recognition, and how to implement it with cheap hardware and JavaScript.</p>
<h2 id="theraspberrypizerow">The Raspberry Pi Zero W</h2>
<p>Firstly, you will need an internet-enabled Raspberry Pi.</p>
<p>For this project, the most value you'll get for your money is probably a <a href="https://www.raspberrypi.org/products/raspberry-pi-zero-w/">Raspberry Pi Zero W</a>.</p>
<blockquote>
<p>Got a different Raspberry Pi?</p>
<p>Most RPi boards have a camera interface.  A RPi Zero v1.3 (the <em>non</em>-WiFi one with the camera interface) will also need a USB WiFi dongle, Ethernet adapter, or &quot;hat&quot; providing connectivity.</p>
<p>The &quot;original&quot; RPi Zero, v1.2, does <em>not</em> have a camera interface, and will not work.</p>
</blockquote>
<p>While the Zero isn't fast, it can run Linux, which makes it more capable than your garden-variety microcontroller.  As you can see, it huffs &amp; puffs to execute a Node.js &quot;useless script&quot;:</p>
<pre><code class="language-shell">$ time node -e 'process.exit()'
node -e 'process.exit()'  5.94s user 0.16s system 99% cpu 6.157 total
</code></pre>
<p>From the above, I'm going to <em>gingerly assume</em> training a <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">convolutional neural network</a> on this ARMv6-based single-board computer would be a fool's errand.  But that's not why you'd buy a Pi Zero W, or build anything with it.  This is why:</p>
<ul>
<li>It's ten bucks.</li>
<li>It's smaller than a credit card in two out of the three dimensions which count.</li>
<li>It's ten (10) dollars, USD.</li>
<li>With some effort and more cheap hardware, it can be <a href="https://hackaday.io/project/9455-poepi-pi-zero-power-over-ethernet-with-phy">powered via ethernet</a>.</li>
<li>It exposes GPIO pins.  Go nuts.</li>
<li>Did I mention it's $10?</li>
</ul>
<p>Once we've got an RPi to work with, we'll need a camera.</p>
<blockquote>
<p>What about <em>Brand X</em> single-board computer?</p>
<p>The Node.js code leverages the <a href="https://npm.im/raspicam">raspicam</a> package, which is a wrapper around <code>raspistill</code>.  So, if it can't run <code>raspistill</code>, we can't use it for this tutorial.</p>
</blockquote>
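<p>For reference, here's roughly what driving the camera from Node.js looks like.  This is an untested sketch: the option names mirror the <code>raspistill</code> defaults <code>puddlenuts</code> uses later in this post, and the capture calls are commented out since they only do anything on an RPi with a camera attached.</p>

```javascript
// raspistill-style options; width, height, quality, and timeout mirror
// the defaults puddlenuts uses for its snapshots.
const opts = {
  mode: 'photo',
  output: '/tmp/snap.jpg',
  width: 640,
  height: 480,
  quality: 100,
  timeout: 1,
};

// On an actual RPi with a camera attached (untested sketch; requires
// the raspicam package, which shells out to raspistill):
// const RaspiCam = require('raspicam');
// const camera = new RaspiCam(opts);
// camera.on('read', (err, timestamp, filename) => {
//   console.log('captured', filename);
// });
// camera.start();

console.log(opts.width * opts.height, 'pixels per frame');
```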
<h2 id="thecamera">The Camera</h2>
<p>A supported module based on OV5647 (&quot;v1&quot;; <a href="https://cdn.sparkfun.com/datasheets/Dev/RaspberryPi/ov5647_full.pdf">datasheet</a>) or IMX219 (&quot;v2&quot;; <a href="http://img.filipeflop.com/files/download/Datasheet_IMX219_20140910.pdf">datasheet</a>) will work.  There are &quot;official&quot; modules which can run up to $30, but I've seen a knockoff &quot;v1&quot; from China around $6 on the low end.  <strong>You don't need an 8MP camera</strong> to do this; we'll be taking rather low-resolution photographs.</p>
<p>These cameras are equipped with <em>fixed-focus</em> lenses.  I've found that you want to position the camera no less than about 12&quot; (30.48 cm) from the target (another option may be attaching a zoom lens).  I'll leave this as an exercise for the reader, but here's my solution:</p>
<p><img src="https://boneskull.com/content/images/2017/09/IMG_4009.jpg" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>The camera module connects to the RPi via <a href="https://en.wikipedia.org/wiki/Flexible_flat_cable">flexible flat cable</a> to a <a href="https://en.wikipedia.org/wiki/Zero_insertion_force">ZIF</a> socket.  A RPi Zero supports a cable of width 11.5mm, but the <em>other</em> interfaces expect a width of ~16mm.  Adapters and conversion cables exist; one such cable comes with the <a href="https://www.raspberrypi.org/products/raspberry-pi-zero-case/">official case</a>.</p>
<blockquote>
<p>Building with LEGO?</p>
<p>For those attempting to build a custom tripod with LEGO, I note that the dimensions of my &quot;v1&quot; camera module are (in one dimension, anyway) roughly 24mm, which corresponds to a length of 3L, or the length of a <a href="https://rebrickable.com/parts/3623/plate-1-x-3/">3623 plate</a>.  1 x 5 Technic plates <a href="https://rebrickable.com/parts/32124/technic-plate-1-x-5-with-smooth-ends-4-studs-and-centre-axle-hole/">32124</a> and <a href="https://rebrickable.com/parts/2711/technic-plate-1-x-5-with-toothed-ends-2-studs-and-center-axle-hole/">2711</a> are helpful here, as well as <a href="https://rebrickable.com/parts/32028/plate-special-1-x-2-with-door-rail/">32028</a> to secure the module in place.</p>
</blockquote>
<p>Now that we have the basic hardware together, let's get Node.js installed.</p>
<h2 id="thenodejs">The Node.js</h2>
<p>I'm going to assume you've got <a href="https://raspbian.org/">Raspbian</a> Jessie installed.  Theoretically, any distro based upon Debian Jessie should work.  Maybe others too, but I haven't tried them!</p>
<p>For this project, we're using Node.js 8 (version 7.x may work with certain command-line flags, but I haven't tried it).  Normally, I'll grab binaries from <a href="https://github.com/nodesource/distributions">NodeSource</a>.  However, they don't support ARMv6.</p>
<blockquote>
<p>If you are using a RPi 3, go right ahead and use NodeSource's distributions, then skip to the next section.</p>
</blockquote>
<p>But for the Zero, you have several options, two of which I can recommend:</p>
<ol>
<li>
<p>Manually install a tarball <a href="https://nodejs.org/en/download/current/">from nodejs.org</a>; as a superuser, extract the archive over <code>/usr</code> or <code>/usr/local</code>, <em>or</em></p>
</li>
<li>
<p>My preferred method: install via <a href="https://github.com/creationix/nvm">Node Version Manager</a>.  As a <em>normal user</em> (e.g. <code>pi</code>), follow the instructions on the site and in the terminal to install NVM.  Then, run:</p>
<pre><code class="language-bash">$ nvm install 8
</code></pre>
<p>This will install the latest version of Node.js 8 under your home directory, then enable it.  Run <code>node -v</code> to test your install.</p>
</li>
</ol>
<p>The next piece of the puzzle is an API key.</p>
<h2 id="thecloud">The Cloud</h2>
<p>This project uses IBM's <a href="https://www.ibm.com/watson/services/visual-recognition/">Watson Visual Recognition</a> (hereafter &quot;WVR&quot;).  It's available from within IBM's PaaS, Bluemix (<a href="https://wikipedia.org/wiki/Bluemix">wiki</a>).</p>
<p>You may use an existing Bluemix login, or sign up <a href="https://console.bluemix.net/catalog/services/visual-recognition">here</a>.  Once you're logged in, from <a href="https://console.bluemix.net/catalog/services/visual-recognition">the same page</a>, create a service instance; name it whatever you like.</p>
<p>After it's ready, you'll land on the dashboard for the instance. Here, you can find your API key:</p>
<ol>
<li>Click &quot;Service credentials&quot;.</li>
<li>Click &quot;View credentials&quot; under &quot;Actions&quot;.</li>
<li>Copy the API key and paste it somewhere safe (like a password manager app) to keep it handy.</li>
</ol>
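<p>One sensible place for the key is an environment variable; the tool introduced later in this post reads <code>PUDDLENUTS_API_KEY</code>.  The value below is obviously a placeholder:</p>

```shell
# Placeholder value; paste the real API key you copied from the
# "Service credentials" page.
export PUDDLENUTS_API_KEY='paste-your-api-key-here'

# Sanity check: confirm the variable is set without echoing the secret.
echo "key length: ${#PUDDLENUTS_API_KEY}"
```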
<p>Armed with our API key, let's take a short detour into concepts.  I promise this won't hurt.</p>
<h2 id="theconcepts">The Concepts</h2>
<p>You'll need to know this stuff or you will be arrested by the police.</p>
<h3 id="theclass">The Class</h3>
<p>The most important concept you need to understand is the &quot;class&quot;.  In fact, the picture on the WVR site illustrates this well:</p>
<p><img src="https://boneskull.com/content/images/2017/09/Screenshot-2017-09-05-16.37.46.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>In the picture above, we have five (5) classes:</p>
<ol>
<li>Green: the subject of the image is green</li>
<li>Leaf: the subject of the image contains a leaf</li>
<li>Plant stem: The subject contains a plant stem</li>
<li>Herb: the subject of the image is in the &quot;herb&quot; category of plants</li>
<li>Basil: the subject is specifically a basil herb</li>
</ol>
<p>It's important to note that a class may be as narrow or broad as you wish.  For example, there are <em>many</em> shades of the color &quot;green&quot;--but only one plant named &quot;basil&quot;!</p>
<p>While WVR has some pre-existing classes which work out-of-the-box, our aim is to <em>create our own custom classes</em>.</p>
<p>To do this, we will need to create a <em>classifier</em>.</p>
<h3 id="theclassifier">The Classifier</h3>
<p>A &quot;classifier&quot; can be thought of as a logical <em>collection</em> of classes.  For example, say you had four friends and family members whose faces you wanted to recognize.  Each individual could correspond to a &quot;class&quot;:</p>
<ol>
<li>Uncle Snimm</li>
<li>Aunt Butters</li>
<li>Sister Clammy</li>
<li>Bill</li>
</ol>
<p>The classifier would be &quot;faces of friends &amp; family&quot;, or something of that nature.  Perhaps you would add another class to this classifier which was only &quot;family&quot;--you could re-use the same images.</p>
<p>In addition to this, WVR allows a <em>single</em> special class within your classifier representing <em>images which are not in the classifier</em>.  For example, you could put images of random strangers (or your enemies) in this &quot;negative&quot; class.  This helps the underlying network avoid false positives.</p>
<blockquote>
<p>If you don't have any enemies to use for this project, I can provide a few pointers on how to acquire them.  I'll save that for a future post.</p>
</blockquote>
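<p>In code, you might model a classifier as nothing more than a named collection of classes plus an optional negative class.  This is purely illustrative (hypothetical data, not a WVR API shape):</p>

```javascript
// A classifier is a logical collection of classes; the optional
// negative class holds counter-examples to reduce false positives.
// (Hypothetical data, purely for illustration.)
const classifier = {
  name: 'faces of friends and family',
  classes: ['Uncle Snimm', 'Aunt Butters', 'Sister Clammy', 'Bill'],
  negativeClass: 'strangers',
};

console.log(`${classifier.name}: ${classifier.classes.length} classes`);
```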
<p>More use-cases of classifiers include:</p>
<ul>
<li>By limiting the scope of the classes to which WVR compares an image, we increase the likelihood of a good match</li>
<li>Similarly, if we know our picture won't be in classifier <em>X</em>, then we don't need to classify using classifier <em>X</em></li>
<li>Limiting scope will increase performance (though I don't know by how much--seems logical, however!)</li>
</ul>
<p>So, how do we create classes and classifiers?</p>
<h2 id="thetrainingregimen">The Training Regimen</h2>
<p>When we create a class, we give WVR an archive (a <code>.zip</code> file) of images.  These images are <em>positive examples</em> of class members.  Once this archive is uploaded, the <em>training</em> process begins.  Training is the &quot;learning&quot; in &quot;machine learning&quot;.  Depending on the number of images in your archive(s), this can take a little while (on the order of minutes for just a paucity of images).</p>
<blockquote>
<p>Remember, you can also supply your new classifier a single <code>.zip</code> archive of negative examples.</p>
</blockquote>
<p>In other words, in WVR, the action of <em>creating</em> a classifier implies <em>training</em> it as well.</p>
<p>Now, for the payoff.  Once we have trained a classifier, we get to classify images!</p>
<h2 id="theclassification">The Classification</h2>
<p><em>Classification</em> is the action of providing <em>one or more images</em> to a classifier, and receiving information about how well each image might &quot;belong&quot; to its classes.</p>
<p>For each image, WVR will give you zero or more classes with a corresponding fraction between 0 and 1.  This fractional number represents <em>confidence</em>, not <em>accuracy</em>.  Thus, for some classifiers, a confidence for class <em>X</em> of <em>0.6</em> could imply &quot;member of class <em>X</em>&quot;, but for others it could disqualify an image completely.</p>
<blockquote>
<p>If WVR's confidence drops below a certain threshold, it won't return a number at all.  This threshold is configurable; the default is 0.5.  If you're only using 10-50 images, you may want to drop it to 0.3-0.4.</p>
</blockquote>
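<p>To make the threshold concrete, here's a tiny sketch (plain Node.js, hypothetical scores) of filtering WVR-style results yourself:</p>

```javascript
// Hypothetical per-class confidence scores, shaped like WVR results.
const results = [
  { class: 'wall-wart', score: 0.62 },
  { class: 'not-wall-wart', score: 0.31 },
  { class: 'kitchen-utensil', score: 0.12 },
];

// Keep only classes at or above the confidence threshold
// (WVR's default threshold is 0.5).
function filterByThreshold(classes, threshold = 0.5) {
  return classes.filter((result) => result.score >= threshold);
}

console.log(filterByThreshold(results).map((r) => r.class));
// A lower threshold admits weaker matches:
console.log(filterByThreshold(results, 0.3).map((r) => r.class));
```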
<p>Let's recap the four terms we need to know:</p>
<ul>
<li><strong>Class</strong>: A set of images having a common attribute which we intend to recognize</li>
<li><strong>Classifier</strong>: A logical collection of classes</li>
<li><strong>Classification</strong>: Using WVR to decide which class(es) an arbitrary image could &quot;belong&quot; to, by reporting a confidence level</li>
<li><strong>Training</strong>: In WVR, we train a classifier; we provide images to the service which we will then use for classification</li>
</ul>
<p>What classifiers will <em>you</em> create?  Wait--before you answer--let me rain on your parade.  I'll tell you what I wanted to do until reality sunk in.  <em>Gather 'round and weepe, while I bid mine own tale of woe!</em></p>
<h2 id="thetaleofwoe">The Tale of Woe</h2>
<p>I like LEGOs.  Inspired by Jacques Mattheij's <a href="https://jacquesmattheij.com/sorting-two-metric-tons-of-lego">LEGO sorting project</a>, I wanted to see if I could easily spin up an accurate classifier for different categories of LEGO pieces.  For example, could I recognize &quot;plates&quot;:</p>
<p><img src="https://boneskull.com/content/images/2017/09/9F4CA855-4358-45D9-A996-5BB98DC718B6.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>versus &quot;bricks&quot;?</p>
<p><img src="https://boneskull.com/content/images/2017/09/8A426322-2D77-454B-B241-C7611970DAAE.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>Could I do this?  No. Of course not.  The long answer:</p>
<p>Once I had a working PoC of my tool (see below), I took many, <em>many</em> pictures of LEGO bricks, plates, etc.  They looked something like this:</p>
<p><img src="https://boneskull.com/content/images/2017/09/D25F21BD-EBFD-4417-BEB8-43FAD2855EF3.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>But the classification worked poorly.  I tried a lot of different things, such as removing color information, changing backgrounds:</p>
<p><img src="https://boneskull.com/content/images/2017/09/319FFF1A-34F9-4040-A08C-B54B514852CE.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>Or fiddling with the color temperature:</p>
<p><img src="https://boneskull.com/content/images/2017/09/DD891E8B-68DC-42D7-8804-9B5C73DC20EF.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>Soul-crushing, abject failure. Every. Time.</p>
<p>One thing I <em>did</em> keep was a lower resolution--high resolution images will not necessarily net better results!  In fact, often the opposite: a higher-resolution image will potentially contain an <em>unnecessary level of detail</em>, resulting in <em>extra useless information</em>.</p>
<p>Like usual, I pondered on &quot;useless information&quot;.</p>
<p>Look at the previous image.  Its resolution is 428x290; multiply and we get <code>124120</code> pixels.  If we rotate it slightly, then crop down to the relevant information, we get:</p>
<p><img src="https://boneskull.com/content/images/2017/09/894E8A6E-BE58-4E86-949E-58560354D7B3.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>That's 20x202 or <code>4040</code> pixels.  So:</p>
<pre><code>4040 / 124120 = ~0.0325
0.0325 * 100 = ~3.25
</code></pre>
<p>That means a bit over 3% of the photos I was taking contained relevant information.  It follows that 97% of each photo was <em>useless, wasteful trashpixels</em>.</p>
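<p>The arithmetic is easy to double-check:</p>

```javascript
// Full frame vs. the cropped region containing the subject.
const fullFrame = 428 * 290; // 124120 pixels
const cropped = 20 * 202;    // 4040 pixels

const signal = cropped / fullFrame;
console.log(`${(signal * 100).toFixed(2)}% signal`); // 3.25% signal
```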
<p>Remember, the RPi cameras are fixed-focus.  If I had a better camera and/or macro lens, I probably could have made this work.  <em>Alas!</em></p>
<p>LEGOs were too small.  I needed something larger; something with fewer important details.</p>
<p>My eyes darted around the room.  What would be a good size for a picture taken about 12&quot; away?  Maybe kitchen utensils?  Cups?  That seems boring.  Regrets?  What do I have a lot of... (I realize you can't answer this)?</p>
<p>Maybe you have a few of these around:</p>
<p><img src="https://boneskull.com/content/images/2017/09/26918193-2C21-4978-9F0B-FD05CB29DF0D.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<h2 id="wallwarts">Wall Warts!</h2>
<p>If you're into hobby electronics, you might actually <em>collect</em> <a href="https://wikipedia.org/wiki/AC_adapter">wall warts</a>.  I have ...a few extras.</p>
<p><img src="https://boneskull.com/content/images/2017/09/8-9corqrRY6-gHbp89QI2Q_thumb_5c48.jpg" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>You may not have, say, 20 or 30 of these handy (without having to, you know, unplug stuff).  But <em>I</em> do.  If you can put aside your envy, you'll notice the signal-to-noise ratio improves dramatically:</p>
<p><img src="https://boneskull.com/content/images/2017/09/D23566CC-8CBB-4F66-B9F0-D7B4865456F6.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>The images are still a bit blurry, but it doesn't matter--we're not trying to read the fine print.</p>
<p>Also, scavenging similar-sized objects for a &quot;negative example&quot; class was almost enjoyable:</p>
<p><img src="https://boneskull.com/content/images/2017/09/499C19A6-678C-4852-A9B0-5984A2639536.png" alt="DIY Object Recognition with Raspberry Pi, Node.js, & Watson"></p>
<p>I settled on a resolution of 640x480, and chose to discard color information.  See  the end of this post for links to my class archives, if you'd like to try them yourself!</p>
<blockquote>
<p>Given wall warts are usually black, maybe I would have better results if I kept the color data???</p>
</blockquote>
<p>I can offer some general advice for taking your own snapshots:</p>
<ol>
<li>Keep the signal-to-noise ratio high; don't include unnecessary pixels!</li>
<li>Color temperature, shadows, lighting--the less consistent, the more images you'll need.</li>
<li>Don't worry too much about blurriness (<a href="https://wikipedia.org/wiki/Optical_character_recognition">OCR</a> this ain't)</li>
<li>Consider different placements and angles of your objects</li>
<li>50 images per class or more.  WVR's lower limit is 10 per, but 50 is recommended as the absolute minimum!</li>
<li>Even a &quot;low&quot; confidence level can work in practice.  Adjust your threshold; as long as the network is <em>more confident</em> when you expect it to be, then you're doing fine!</li>
</ol>
<p>To help me:</p>
<ol>
<li>Take all these pictures,</li>
<li>Put them in the correct buckets,</li>
<li>Archive them, and</li>
<li>Upload them to Watson,</li>
</ol>
<p>I ended up writing a tool.  That tool is called <a href="https://npm.im/puddlenuts">puddlenuts</a>.  No, really.</p>
<h2 id="introducingpuddlenuts">Introducing puddlenuts</h2>
<p><a href="https://npm.im/puddlenuts">puddlenuts</a> is what I wrote to ease the insufferable process of <em>taking hundreds of pictures</em>.</p>
<blockquote>
<p>Don't freak.  You don't need to take them all at once!  You can always add more images to a class later.  This is called <em>retraining</em>.  <code>puddlenuts</code> can help with this.</p>
</blockquote>
<p>At this point, you should have your RPi configured, with Node.js installed and camera connected.  If you don't, what is wrong with you?</p>
<p>On your RPi, install <code>puddlenuts</code>, then go mow the lawn while you wait:</p>
<pre><code class="language-text"># this may require `sudo` if you aren't using NVM
$ npm install --global puddlenuts
# ... time passes ...
+ puddlenuts@0.2.4
added 245 packages in 488.451s
</code></pre>
<p><code>puddlenuts</code> isn't a library; it's a command-line tool.  What can it do?</p>
<pre><code class="language-text">$ puddlenuts --help

Commands:
  classify [..classifier]         Classify an image against one
                                  or more classifiers by a
snapshot or existing image.
                                  Default is to run against all
                                  classifiers.
  shoot &lt;classifier&gt; &lt;classes..&gt;  Take snapshots to train
                                  classifier with two (2) or
                                  more positive example classes,
                                  OR one (1) or more positive
                                  example classes, and one (1)
                                  negative example class (see
                                  &quot;-n&quot;)
  train &lt;classifier&gt;              Train Watson with existing
                                  .zip archives

IO
  --color     Enable color output, if available
                                       [boolean] [default: true]
  --loglevel  Logging level
  [choices: &quot;error&quot;, &quot;warn&quot;, &quot;info&quot;, &quot;debug&quot;, &quot;silly&quot;] [default:
                                                         &quot;info&quot;]
  --debug     Shortcut for '--loglevel debug'
                                      [boolean] [default: false]

Watson
  --api-key  Set PUDDLENUTS_API_KEY env var instead!
                                             [string] [required]

Options:
  --help  Show help                                    [boolean]
</code></pre>
<p>We want to take photos, so <code>shoot</code> is the command we want.</p>
<h3 id="shoot">Shoot</h3>
<p>Here's the dirt on <code>shoot</code>:</p>
<pre><code class="language-text">$ puddlenuts shoot --help
puddlenuts shoot &lt;classifier&gt; &lt;classes..&gt;

Camera control
  --raspistill, -r   Options for raspistill in dot notation
                     (e.g. &quot;-r.width 640 -r.height 480&quot;)
                                                       [default:
           {&quot;width&quot;:640,&quot;height&quot;:480,&quot;quality&quot;:100,&quot;timeout&quot;:1}]
  --limit, -l        Limit to this many snapshots per class
                                          [number] [default: 50]
  --delay, -d        Delay between snapshots in ms
                                   [number] [default: 3000 (3s)]
  --class-delay, -D  Delay between classes in ms
                                 [number] [default: 10000 (10s)]
  --trigger, -t      Set trigger interrupt on this GPIO pin (RPi
                     only)        [number] [default: No trigger]

Watson
  --api-key  Set PUDDLENUTS_API_KEY env var instead!
                                             [string] [required]
  --retrain  Retrain classifier (if exists)
                                      [boolean] [default: false]
  --dry-run  Don't actually upload anything
                                      [boolean] [default: false]

Class
  --negative, -n  Include negative example class in training
                  (will be final class)
                                      [boolean] [default: false]

IO
  --color     Enable color output, if available
                                       [boolean] [default: true]
  --loglevel  Logging level
  [choices: &quot;error&quot;, &quot;warn&quot;, &quot;info&quot;, &quot;debug&quot;, &quot;silly&quot;] [default:
                                                         &quot;info&quot;]
  --debug     Shortcut for '--loglevel debug'
                                      [boolean] [default: false]

Options:
  --help  Show help                                    [boolean]

Examples:
  blueface/bin/puddlenuts.js shoot  Take snapshots to train or
  dogs poodles -n --retrain         retrain the &quot;dogs&quot;
                                    classifier, with a positive
                                    example set of &quot;poodles&quot; and
                                    a negative example set (i.e.
                                    non-dogs); upload to Watson
  blueface/bin/puddlenuts.js shoot  Take snapshots to train (do
  fish catfish swordfish --dry-run  not retrain if &quot;fish&quot;
                                    exists&quot;) the &quot;fish&quot;
                                    classifier with positive
                                    examples of &quot;catfish&quot; and
                                    &quot;swordfish&quot;; don't upload
</code></pre>
<p>The &quot;camera control&quot; options will allow you granular control over <a href="https://www.raspberrypi.org/documentation/usage/camera/raspicam/raspistill.md">raspistill</a>, which is the official command-line interface for the RPi cam.  This is how you can change the resolution, fiddle w/ color correction, silly effects, etc.</p>
<p>These options also allow you to define <em>how many pictures to take</em> and <em>how quickly to take them</em>.  After each picture is taken, there's a short pause.  I found that a delay (<code>--delay</code>) of less than three (3) seconds between pictures isn't quite enough time to comfortably switch an object out for another, or readjust, so this is the default.</p>
<p>Since you tell <code>puddlenuts</code> to take snaps for multiple classes, you can also tell it how long to pause between switching from the last picture of one class to the first picture of the next.  I was taking a bit longer to get set up when the class changed (e.g., swapping my pile of wall warts for a pile of random, non-wall-wart objects)--this defaults to ten (10) seconds.</p>
<p>Finally, <code>--limit</code> will cap each class at the number of images you specify (minimum 10).</p>
<blockquote>
<p>The <code>--trigger</code> option allows you to wire a switch to one of the RPi's GPIOs.  If the GPIO is &quot;high&quot;, snaps will be taken (with specified delays).  But if it's &quot;low&quot;, <code>puddlenuts</code> will pause until you flip the switch back &quot;high&quot; again.  Neat!</p>
</blockquote>
<p>I realize this first example might get me some unintended search engine traffic, but here we go:</p>
<pre><code class="language-bash">$ puddlenuts shoot dogs poodles --negative --retrain
</code></pre>
<p>What the above command will do, in gory detail, is:</p>
<ol>
<li>Take 50 pictures of &quot;poodles&quot;, with a 3s delay between each</li>
<li>Pause 10s</li>
<li>Take 50 pictures of &quot;not dogs&quot;, with a 3s delay between each</li>
<li>Create <code>.zip</code> archives for each set of 50</li>
<li>If the &quot;dogs&quot; classifier doesn't exist, it gets created</li>
<li>If the &quot;poodles&quot; class doesn't exist, it gets created/trained</li>
<li>If the &quot;poodles&quot; class <em>does</em> exist, the 50 images are used for more training</li>
<li>If the &quot;negative examples&quot; (&quot;not dogs&quot;) class doesn't exist, it gets created/trained</li>
<li>If the &quot;negative examples&quot; class <em>does</em> exist, the 50 images are used for more training</li>
</ol>
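<p>The create-vs-retrain decision in that list boils down to something like this.  To be clear, this is a simplified sketch, not <code>puddlenuts</code>' actual source; <code>existingClassifiers</code> stands in for the list Watson would report:</p>

```javascript
// Simplified sketch (not puddlenuts' actual source) of the
// create-vs-retrain decision; existingClassifiers stands in for
// the list of classifier names Watson would report.
function planTraining(name, existingClassifiers, { retrain = false } = {}) {
  if (!existingClassifiers.includes(name)) {
    return 'create'; // creating a classifier implies training it
  }
  return retrain ? 'retrain' : 'skip';
}

console.log(planTraining('dogs', [])); // create
console.log(planTraining('dogs', ['dogs'], { retrain: true })); // retrain
console.log(planTraining('dogs', ['dogs'])); // skip
```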
<p>You'll also see plenty of beautiful console output while this is happening.</p>
<p>There's certainly room for improvement here; try it out and <a href="https://github.com/boneskull/puddlenuts/issues/new">let me know what could be easier</a>.</p>
<h3 id="train">Train</h3>
<p>Execute <code>puddlenuts train --help</code> for more information, as I realize it's silly to copy and paste the output here.</p>
<p>The <code>train</code> command allows you to create (or retrain) classes using existing <code>.zip</code> archives.  <strong>It doesn't take pictures</strong>.</p>
<p>For example, if you have to cobble together several &quot;shoot&quot; runs (use <code>puddlenuts shoot --dry-run</code> to create <code>.zip</code> files w/o uploading; see log output for their location), or need to collect some images via other means, you should use <code>puddlenuts train</code>.</p>
<h3 id="classify">Classify</h3>
<p>This is the &quot;fun&quot; command—it will take a picture and attempt to classify it against the classifier(s) you provide.</p>
<p>If you <em>don't</em> provide a classifier, the image will be compared against <em>all</em> classifiers.  Watson provides a &quot;default&quot; classifier, which may be of use—give it a shot and see.</p>
<p>Two more options of note:</p>
<ul>
<li>You can also tell <code>puddlenuts classify</code> to just upload a file (via the <code>--input &lt;path/to/file&gt;</code> option) instead of taking a picture.</li>
<li>You can specify the confidence threshold with <code>--threshold &lt;number between 0 and 1 inclusive&gt;</code>.  You <em>probably</em> don't want to set this to <code>0</code> or <code>1</code>, as the former will give you way too much information, and the latter will give you <em>diddly squat</em>.</li>
</ul>
<p>What this command provides is a pretty-printed data structure with the classification information.  This is an unwieldy tree, and I wasn't sure how to better distill and/or represent it.  So you just get a dump.  You must admit, it's really all you deserve.  Regardless, please <a href="https://github.com/boneskull/puddlenuts/issues/new">let me know</a> if you have a better idea.</p>
<p>For the conclusion, let's stop.</p>
<h2 id="conclusion">Conclusion</h2>
<p>A novice consumer of ML APIs may trip up or become frustrated when a system doesn't do what they expect.  You must remember that bringing this kind of power down to &quot;our&quot; level will come with caveats.  There are limitations in what these shrinkwrapped solutions can offer, but with some persistence, I believe these technologies are widely applicable.</p>
<p>It's my hope you learn from my mistakes (and I hope I learn from them as well).  All things considered, it's <em>way easier than I would have expected</em> to get started with this stuff.  And cheaper.  It's trivial (JavaScript) to do <em>more</em> (computer vision) with <em>less</em> ($10 computers).</p>
<p>My prediction is this trend will continue.  In a future post, I'll explain how to do nearly everything using almost nothing.</p>
<h2 id="addendum">Addendum</h2>
<p>Below are links to the images I used for my &quot;wall warts&quot; classifier.  There are only two classes:</p>
<ul>
<li><a href="https://www.dropbox.com/s/3wltzy6cgd5013l/wall-warts.zip?dl=1">Positive examples (direct download)</a> (wall warts)</li>
<li><a href="https://www.dropbox.com/s/4luwxw13ph4jh16/not-wall-warts.zip?dl=1">Negative examples (direct download)</a> (not wall warts)</li>
</ul>
<p>And here's my <a href="https://www.slideshare.net/boneskull/diy-object-recognition">slide deck</a> associated with a <a href="https://www.meetup.com/JavaScript-and-the-Internet-of-Things/events/242563727/">talk</a> I gave on this subject at the <a href="https://www.meetup.com/JavaScript-and-the-Internet-of-Things/">JavaScript &amp; the Internet of Things</a> meetup in Portland, Oregon, on August 22 2017.</p>
]]></content:encoded></item><item><title><![CDATA[How to Abuse TypeScript Definitions for Better Code Assistance in JavaScript Projects]]></title><description><![CDATA[This hack provides some value from the TypeScript ecosystem without having to commit to TypeScript.]]></description><link>https://boneskull.com/typescript-defs-in-javascript/</link><guid isPermaLink="false">59aeffbf3e34ee0b40f35a84</guid><category><![CDATA[jetbrains]]></category><category><![CDATA[typescript]]></category><category><![CDATA[node.js]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Thu, 07 Sep 2017 15:00:00 GMT</pubDate><media:content url="https://boneskull.com/content/images/2017/09/265900118_62aa02e262_b.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2017/09/265900118_62aa02e262_b.jpg" alt="How to Abuse TypeScript Definitions for Better Code Assistance in JavaScript Projects"><p><em>So there I was</em>, writing a CLI tool in <em>plain old JavaScript</em>.  I had pulled in <a href="https://npm.im/yargs">yargs</a>, as I often do.</p>
<p>Here's the source of my executable:</p>
<p><img src="https://boneskull.com/content/images/2017/09/2595494D-9BC1-4EE8-B765-FF8FCAA959F2.png" alt="How to Abuse TypeScript Definitions for Better Code Assistance in JavaScript Projects"></p>
<p>I'm looking at the above and wondering why <code>demandCommand</code> is highlighted as an instance member function, but <code>usage</code>, <code>command</code>, <code>help</code> etc. are not.</p>
<p>I note that when typing the call to <code>usage()</code>, I am not offered any information about the parameters or the types thereof.</p>
<h2 id="webstormtypescript">WebStorm &amp; TypeScript</h2>
<p>Some time ago, I had noticed that <a href="https://www.jetbrains.com/webstorm/">WebStorm</a> will pull in the <a href="http://typescriptlang.org">TypeScript</a> definitions that some 3rd-party libraries ship with. WebStorm uses those definitions to improve highlighting, provide type hints and inline docs, enable better refactoring, and so on, <em>even if</em> I'm not actually using TypeScript in my project.</p>
<p>One day, I decided to see what would happen if I simply added <a href="https://npm.im/@types/yargs">@types/yargs</a> as a dev dependency:</p>
<pre><code class="language-sh">$ npm i @types/yargs -D
</code></pre>
<p>I ran the above, then reloaded my source.  Imagine my surprise when I saw:</p>
<p><img src="https://boneskull.com/content/images/2017/09/236ECEF6-31DE-497A-82FA-FDD6A9460A8B.png" alt="How to Abuse TypeScript Definitions for Better Code Assistance in JavaScript Projects"></p>
<p>Huh.  What if...</p>
<p><img src="https://boneskull.com/content/images/2017/09/46B5AFB8-F713-4BCE-A9D8-6526ACFE485B.png" alt="How to Abuse TypeScript Definitions for Better Code Assistance in JavaScript Projects"></p>
<p>Wow.</p>
<h2 id="thatscoolbut">That's Cool, But...</h2>
<p>... but I don't really want to install a bunch of <code>@types/*</code> packages as development dependencies.  It's not that the modules are particularly heavy; it's that stuffing this into the <code>package.json</code> of a <em>pure JavaScript project</em>:</p>
<pre><code class="language-json">{
  &quot;devDependencies&quot;: {
    &quot;@types/yargs&quot;: &quot;^8.0.2&quot;
  }
}
</code></pre>
<p>...provides <em>no functional value</em> whatsoever.</p>
<p>Furthermore, even if this works in WebStorm, I dunno if it works in VS Code, Atom, Sublime, etc.  I don't expect contributors to use the same IDE or editor I do.  A dotfile is one thing, but we're talkin' 'bout a <em>dependency</em>.</p>
<p><img src="https://images.unsplash.com/photo-1498955472675-532cdee9d6b4?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;s=35831157e5fe704d98d1b43f5511b5c0" alt="How to Abuse TypeScript Definitions for Better Code Assistance in JavaScript Projects"><br>
<small>External library—get it? <a href="https://unsplash.com/@leti389?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Laëtitia Buscaylet</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<h2 id="thehackexternallibraries">The Hack: External Libraries</h2>
<p>WebStorm allows you to add arbitrary &quot;external libraries&quot; to your project.<br>
This is how WebStorm provides code assistance for Node.js core modules, ES5, ES2015+, the DOM API, WebGL, etc.  These are enabled by default, as is the local <code>node_modules/</code> dir.</p>
<p>If you reference a lib in a <code>&lt;script&gt;</code> tag which points to a CDN, WebStorm will complain that it doesn't know anything about that script; it prompts you to download the script and add it as an external library.</p>
<p>But an external library can also point to a directory or file on disk.  So I removed <code>@types/yargs</code> from my project, and installed it <em>globally</em>:</p>
<pre><code class="language-sh">$ npm i -g @types/yargs
</code></pre>
<p>I then went into the &quot;external libraries&quot; settings of my project in WebStorm, and added the destination directory (for me, this was <code>/usr/local/lib/node_modules/@types/yargs</code>; yours may be different).  I marked this library as &quot;global&quot;, so that I could use it in any project I open in WebStorm.</p>
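<p>If you're not sure where global packages land on your system, <code>npm</code> can tell you:</p>

```shell
# print the directory where npm installs global packages;
# a globally-installed @types/yargs lands under this path
npm root -g
```
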
<p>I then configured my project to actually <em>use</em> the library (via something called &quot;Manage Scopes&quot;, which <em>isn't the same thing</em> as the other &quot;Scopes&quot; in WebStorm).</p>
<p>Sure enough, this enabled the extra code assistance—even if <code>@types/yargs</code> was no longer a dev dependency.</p>
<h2 id="makingitsuckless">Making It Suck Less</h2>
<p>This was dope and sick, but it also meant I would need to install the definitions I wanted manually, and then keep them up-to-date, which was neither dope nor sick.</p>
<p>I knew that all of the type definitions live in a <a href="https://github.com/DefinitelyTyped/DefinitelyTyped">single repo</a>.</p>
<p>I tried to install <em>that</em> globally, but there is no <code>definitely-typed</code> package (<a href="https://npm.im/definitely-typed">see for yourself</a>).  Installing straight from the Git repo, however, worked:</p>
<pre><code class="language-sh">$ npm install --global \
    https://github.com/DefinitelyTyped/DefinitelyTyped.git
</code></pre>
<p>But upon closer inspection of <code>/usr/local/lib/node_modules/definitely-typed</code>, it was not actually a Git working copy.  This means <code>git pull</code> wouldn't just work.</p>
<p>Still, if I <em>had</em> a working copy, I could just symlink its <a href="https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types"><code>types</code></a> dir to <code>/usr/local/lib/node_modules/@types</code>...</p>
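<p>In other words, the manual version of what I wanted looks something like this.  (This is just a sketch; the paths are assumptions for a typical macOS/Linux setup, and the clone is <em>large</em>.)</p>

```shell
# clone a real Git working copy of DefinitelyTyped
git clone https://github.com/DefinitelyTyped/DefinitelyTyped.git ~/DefinitelyTyped

# symlink its types/ dir to where global @types packages live
# (remove or back up any existing @types dir there first)
ln -s ~/DefinitelyTyped/types /usr/local/lib/node_modules/@types

# from then on, updating the definitions is just a pull
cd ~/DefinitelyTyped && git pull
```
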
<h2 id="thealltypespackage">The <code>all-types</code> Package</h2>
<p>...which is exactly what I automated with <a href="https://npm.im/all-types">all-types</a>.</p>
<p>The <a href="https://github.com/boneskull/all-types/blob/master/README.md"><code>README</code></a> has detailed instructions about how to install, set things up in WebStorm, and keep the install up-to-date.</p>
<blockquote>
<p>If you pay attention to nothing else in the <code>README</code>, I suggest that you <strong>don't add all of DefinitelyTyped at once</strong>.  Not because of performance, but because it adds a lot of noise to the code assistance environment.</p>
</blockquote>
<p>I'm interested in making this easily consumable in other editors &amp; IDEs, if possible.  If you find your editor of choice can do this, please contribute with instructions!</p>
<h2 id="somethingfornothing">Something for Nothing?</h2>
<p>Let's agree that a great thing about TypeScript is the code assistance it provides to editors and IDEs.  Even if you abhor TypeScript, that's tough to argue with.  But what if you got that for free?</p>
<p><strong>This hack provides <em>at least some</em> of the value from the TypeScript ecosystem <em>without having to commit</em> to TypeScript.</strong></p>
<p>Yet the most obvious deficiency remains: unless your <em>own</em> code is written in TypeScript, the extra assistance only covers your dependencies.  But it's much, <em>much</em> better than nothing.</p>
<p>It's worth mentioning that many editors &amp; IDEs are free (as in beer).  Do they work with <a href="https://npm.im/all-types">all-types</a>?  I have no idea.  Clue me in!</p>
<blockquote>
<p>Title image by <a href="https://www.flickr.com/photos/oskay/265900118/">Windell Oskay</a>; part of a series of LEGO Abominations</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[From Jekyll + GitHub Pages to Ghost]]></title><description><![CDATA[Why I moved my blog from Jekyll and GitHub Pages to a self-hosted Ghost instance.]]></description><link>https://boneskull.com/from-jekyll-to-ghost/</link><guid isPermaLink="false">59ab33fd262b121c68947894</guid><category><![CDATA[ghost]]></category><category><![CDATA[meta]]></category><dc:creator><![CDATA[Christopher Hiller]]></dc:creator><pubDate>Sun, 03 Sep 2017 00:55:44 GMT</pubDate><media:content url="https://boneskull.com/content/images/2017/09/Screenshot-2017-09-02-18.00.16-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://boneskull.com/content/images/2017/09/Screenshot-2017-09-02-18.00.16-1.png" alt="From Jekyll + GitHub Pages to Ghost"><p>While trying to get going on a new blog post, I found myself frustrated with GitHub Pages' <a href="https://help.github.com/articles/using-jekyll-as-a-static-site-generator-with-github-pages/">Jekyll sandbox</a>.</p>
<p>My previous site theme was hand-rolled (I bet you'd never guess!) from a <a href="https://getmdl.io">Material Design Lite</a> template.  Admittedly, this was hubris.  I am ignorant about &quot;responsive&quot; design (does that actually <em>mean</em> anything?), PWAs, accessibility, et cetera.</p>
<p>Leveraging Jekyll plugins would ultimately mean <em>more overhead</em> to publish my site, because I could not rely on GitHub to generate it for me.  So I threw in the proverbial towel, and decided to look at other blog software.</p>
<h2 id="blogsoftwarerequirements">Blog Software Requirements</h2>
<ol>
<li>Has some sort of GUI.</li>
<li>Easy on the wallet.</li>
<li>Not WordPress.</li>
</ol>
<p>I hate the word &quot;blog&quot;.</p>
<h2 id="ipickedghost">I Picked Ghost</h2>
<p>I decided to self-host using <a href="https://ghost.org/">Ghost</a> for now.  What I like about Ghost:</p>
<ol>
<li>It's easy to get up &amp; running with a self-hosted instance.</li>
<li>Admin interface(s).  While I <em>generally</em> enjoy puttering around in my terminal and editor, this is not one of those times.  I just want to write.</li>
<li>Ghost is implemented in Node.js, so I can hack on it if I need to.</li>
<li>I could import my post (sic) from Jekyll pretty easily using <a href="https://www.npmjs.com/package/nodejs-jekyll-to-ghost">nodejs-jekyll-to-ghost</a>.</li>
<li>Not WordPress.</li>
</ol>
<p>Finally, since I have no design sense to speak of, I threw down for some <a href="http://www.malvouz.com/">themes by Malvouz</a>, and tweaked one of them to be more boneskullish.</p>
<p>That's my story.</p>
]]></content:encoded></item></channel></rss>