Charles Engelke's Blog

July 13, 2014

Symmetric Cryptography in the Browser – Part 3

Filed under: Uncategorized — Charles Engelke @ 6:47 pm

This post is part of a series on cryptography in the browser. Previous posts have used the new Web Cryptography API to create and manage AES keys, and encrypt and decrypt strings. Now we will read a file, encrypt or decrypt it, and allow the result to be saved back in a new file. The next post will finish this first part of the series (that deals with symmetric cryptography) by putting all the pieces so far together into a working web page.

We start with an AES key object called aesKey already created, and a File object sourceFile, perhaps from an HTML input element. We will end up with a URL resultUrl that can be used to fetch and download the encrypted or decrypted file.
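
(In case it helps to see where sourceFile might come from, here is a minimal sketch. It assumes a hypothetical file input element with the id fileToProcess; the complete page in the next post may wire this up a bit differently.)

var fileInput = document.getElementById("fileToProcess"); // e.g. <input type="file" id="fileToProcess">
var sourceFile = fileInput.files[0];                      // the File object the user selected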

Step 1 – Declare a variable to hold the URL when created, and set up a FileReader object:

var resultUrl;
var reader = new FileReader();

Step 2 – Specify what should happen when the file has been read in:

reader.onload = encryptReaderResult;

or

reader.onload = decryptReaderResult;

depending on which operation you want to perform.

Step 3 – Trigger the file to be read as an ArrayBuffer (which can be easily converted to a Uint8Array for processing):

reader.readAsArrayBuffer(sourceFile);

All the real work happens in encryptReaderResult or decryptReaderResult. They’re similar, but with some important differences. We need to create a random initialization vector for encryption and save it with the encrypted file, then extract it and use it later for decryption. A common convention is to write the 16 byte initialization vector at the start of the encrypted file, so it can be read first and used later for decryption. That’s what we’ll do.

function encryptReaderResult() {
    var iv = window.crypto.getRandomValues(new Uint8Array(16));
    window.crypto.subtle.encrypt(
        {name: "AES-CBC", iv: iv},
        aesKey,
        new Uint8Array(reader.result)
    ).
    then(function(result) {
        var blob = new Blob([iv, new Uint8Array(result)], {type: "application/octet-stream"});
        resultUrl = URL.createObjectURL(blob);
    }).
    catch(function(err) {
        alert("Encryption failed: " + err.message);
    });
}

There are a couple of new things in this code. First, note the coercion of reader.result (an ArrayBuffer, because that’s what we asked the FileReader to provide) to an array of bytes that we can encrypt. Second, we are creating a Blob out of two byte arrays (iv and the result of the encryption) and specifying its content type as application/octet-stream. The browser gives us a URL for that blob, which we can put into a link in the page in order to download its contents.
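
For example, here’s a sketch of wiring that URL up to a link. It assumes a hypothetical anchor element with the id downloadLink; the download attribute just suggests a file name for the saved result:

var link = document.getElementById("downloadLink"); // hypothetical <a id="downloadLink"> element
link.href = resultUrl;          // clicking the link now fetches the blob
link.download = "result.bin";   // suggested name for the downloaded file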

The decryption is very similar, except instead of putting the iv together with the rest of the file, we start by separating it from the encrypted file:

function decryptReaderResult() {
    var iv = new Uint8Array(reader.result.slice(0, 16));
    window.crypto.subtle.decrypt(
        {name: "AES-CBC", iv: iv},
        aesKey,
        new Uint8Array(reader.result.slice(16))
    ).
    then(function(result) {
        var blob = new Blob([new Uint8Array(result)], {type: "application/octet-stream"});
        resultUrl = URL.createObjectURL(blob);
    }).
    catch(function(err) {
        alert("Decryption failed: " + err.message);
    });
}

The new thing here is the use of the ArrayBuffer slice method to address different parts of reader.result, so we can pull the iv out of the beginning of the file.

That’s all the pieces we need. Next time (sooner than a week from now, I hope) I’ll show a complete web page to perform these operations.

July 5, 2014

Symmetric Cryptography in the Browser – Part 2

Filed under: Uncategorized — Charles Engelke @ 12:31 pm

This post is part of a series on cryptography in the browser. My last post covered the basics of encrypting and decrypting with the Web Cryptography API, but had no practical use. That’s because you couldn’t save and later load the key you used, and you couldn’t get meaningful amounts of data into and out of the software. We’ll address the first of those needs now.

When we created our encryption key we set the exportable parameter to true. If we hadn’t, the actual key would forever be hidden from us, which would be a good idea if there were an outside-the-browser way to manage it. As of now, there isn’t such a way, so we’ll manage keys in the browser. That requires exporting them to a format that can be saved and transported and importing them from those formats. The format we’ll use is a hexadecimal string, so our 128 bit (16 byte) key will be a 32 character string.

We can export a key to a byte array using the window.crypto.subtle.exportKey method. This is a pretty easy method to use. It takes two parameters: the format you want to export to, and the key to export. It returns a promise that passes the exported key (as an ArrayBuffer) to its then method’s parameter. Assuming our AES key is in the variable aesKey, here’s how to get it into a viewable form:

var aesKeyBytes;

window.crypto.subtle.exportKey('raw', aesKey).
then(function(result) {aesKeyBytes = new Uint8Array(result);}).
catch(function(err) {alert("Something went wrong: " + err.message);});

When I try that in my browser with a defined aesKey, I get the following bytes: [51, 155, 145, 34, 55, 159, 162, 158, 253, 202, 19, 78, 139, 186, 51, 118]. So I can see the actual key, but I’d rather look at it in hex. For example, 51 is 33 in hex, 155 is 9b, 145 is 91, and so on. I can convert a Uint8Array to a hexadecimal string by converting each byte and concatenating them:

function byteArrayToHexString(byteArray) {
    var hexString = '';
    var nextHexByte;
    for (var i=0; i<byteArray.byteLength; i++) {
        nextHexByte = byteArray[i].toString(16);  // Integer to base 16
        if (nextHexByte.length < 2) {
            nextHexByte = "0" + nextHexByte;      // Otherwise 10 becomes just "a" instead of "0a"
        }
        hexString += nextHexByte;
    }
    return hexString;
}

Given the aesKey and aesKeyBytes shown above, byteArrayToHexString(aesKeyBytes) returns "339b9122379fa29efdca134e8bba3376", which I can easily display and save. Going the other way is pretty easy now. We will use the window.crypto.subtle.importKey method, after first converting a hex string to a byte array with this function:

function hexStringToByteArray(hexString) {
    if (hexString.length % 2 !== 0) {
        throw "Must have an even number of hex digits to convert to bytes";
    }
    var numBytes = hexString.length / 2;
    var byteArray = new Uint8Array(numBytes);
    for (var i=0; i<numBytes; i++) {
        byteArray[i] = parseInt(hexString.substr(i*2, 2), 16);
    }
    return byteArray;
}

Trying it out, hexStringToByteArray("339b9122379fa29efdca134e8bba3376") returns [51, 155, 145, 34, 55, 159, 162, 158, 253, 202, 19, 78, 139, 186, 51, 118], which is what I started with. Now that we have the array of bytes, we’re ready to import it to create a key. The importKey method takes all the same parameters as generateKey, plus the key’s bytes and the format of those bytes, so the steps to use them are almost the same:

var importedAesKey;

    "raw",                          // Exported key format
    aesKeyBytes,                    // The exported key
    {name: "AES-CBC", length: 128}, // Algorithm the key will be used with
    true,                           // Can extract key value to binary string
    ["encrypt", "decrypt"]          // Use for these operations
then(function(key) {importedAesKey = key;}).
catch(function(err) {alert("Something went wrong: " + err.message);});

Now that we can get keys in and out of our code in a human readable form, we’re ready to actually encrypt and decrypt files. The next post will encrypt or decrypt a File into a Blob that can be downloaded. Then we’ll put it all together into a complete web page that performs all these functions.
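
(As an aside, once a key is a hex string you can stash it anywhere a string can go. Here’s a sketch of the round trip using localStorage, just to show the idea; browser storage is not a good place for a truly sensitive key.)

// Save the key as hex (sketch only - localStorage is not a secure key store)
localStorage.setItem("aesKeyHex", byteArrayToHexString(aesKeyBytes));

// Later, read the hex back, convert it to bytes, and import it again
var savedBytes = hexStringToByteArray(localStorage.getItem("aesKeyHex"));
window.crypto.subtle.importKey(
    "raw", savedBytes, {name: "AES-CBC", length: 128}, true, ["encrypt", "decrypt"]).
then(function(key) {aesKey = key;}).
catch(function(err) {alert("Something went wrong: " + err.message);});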

June 22, 2014

Symmetric Cryptography in the Browser – Part 1

Filed under: Uncategorized — Charles Engelke @ 2:07 pm

I’m going to start exploring the Web Cryptography API with just about the simplest use case I can think of: symmetric encryption with AES. The user will select a file, have the browser encrypt it, and download the encrypted file. Or select an encrypted file, have the browser decrypt it, and download the plaintext.

The cryptography API is provided in the browser by the window.crypto object. Almost all of the functionality currently available is provided by methods of the window.crypto.subtle object, so named “to reflect the fact that many of these algorithms have subtle usage requirements in order to provide the required algorithmic security guarantees.” That means that this API will do the crypto right, but you’re on your own to use it properly in order to have a secure solution.

Encryption is performed with the encrypt method (guess what method does decryption), but we can’t just jump in and use it. The encrypt method takes a specification of the algorithm to use, a key, and the original plaintext as parameters, and returns a promise. The ciphertext is provided to the function given to that promise’s then method. So before we can do anything else, we need a key. For this API, keys are opaque objects that must be manipulated through API methods. You can’t just use a binary string. We can either import an existing key or generate a new one. We’ll do the latter first.

Keys are created using the generateKey method, which returns another promise. The first parameter describes the kind of key to generate, the second says whether or not you can extract the actual binary key from it, and the third is an array of the purposes of the key. So, to get a 128 bit key to use with the Advanced Encryption Standard in Cipher Block Chaining mode, we’d use:

var keyPromise = window.crypto.subtle.generateKey(
    {name: "AES-CBC", length: 128}, // Algorithm the key will be used with
    true,                           // Can extract key value to binary string
    ["encrypt", "decrypt"]          // Use for these operations

This will try to create a new, random key, and pass it to the promise’s then method. We will just save it for now:

var aesKey;   // Global variable for saving
keyPromise.then(function(key) {aesKey = key;});
keyPromise.catch(function(err) {alert("Something went wrong: " + err.message);});

Assuming nothing went wrong, we now have a key stored in the aesKey variable that we can use to encrypt and decrypt data. So let’s try it with some dummy data. The encrypt method takes three parameters. The key we just generated is one of them. The plaintext to encrypt is another. And the remaining parameter is an object that specifies the algorithm to use and provides any needed options for encryption. For AES-CBC that object has the name “AES-CBC” and a property called iv, which is the initialization vector.

I’m not going to get into how AES-CBC works, just that it operates on blocks of 128 bits (16 bytes) each, and you need to provide a random 16 byte (128 bit) chunk called the initialization vector for it to be secure. So let’s start by getting a random block of data using window.crypto.getRandomValues:

var iv = new Uint8Array(16);
window.crypto.getRandomValues(iv);

Because getRandomValues returns the array it fills in, too, this could be written more concisely as:

var iv = window.crypto.getRandomValues(new Uint8Array(16));

Unlike most of this API’s operations, getRandomValues is not asynchronous so it doesn’t require a promise. It just gets some cryptographically strong data and puts it in the array provided. Now we could perform encryption, if we just had something to encrypt. Let’s just put some text in a string for that:

var plainTextString = "This is very sensitive stuff.";

Unfortunately, encrypt doesn’t take JavaScript strings, just blocks of memory, so we’ve got to copy the contents of the string to an array of bytes:

var plainTextBytes = new Uint8Array(plainTextString.length);
for (var i=0; i<plainTextString.length; i++) {
    plainTextBytes[i] = plainTextString.charCodeAt(i);
}

And now we can encrypt it, saving the ciphertext to a global variable:

var cipherTextBytes;
var encryptPromise = window.crypto.subtle.encrypt(
    {name: "AES-CBC", iv: iv}, // Random data for security
    aesKey,                    // The key to use 
    plainTextBytes             // Data to encrypt
);
encryptPromise.then(function(result) {cipherTextBytes = new Uint8Array(result);});
encryptPromise.catch(function(err) {alert("Problem encrypting: " + err.message);});

Note that I have to convert the result into a Uint8Array. That’s because the encrypt operation returns an ArrayBuffer, which is just a chunk of memory. If I want to see what’s in it, I have to have a real typed array.

When I run this in the JavaScript console on my Chrome browser I get the following values for cipherTextBytes:

[93, 197, 31, 64, 100, 122, 144, 131, 57, 185, 92, 198, 185, 152, 106, 27,
151, 244, 48, 204, 12, 195, 49, 97, 148, 26, 165, 173, 127, 178, 56, 38]

A couple of things to note here. First, if you run the same code, you should get a totally different answer. That’s because we seeded the operation with a random initialization vector in order to avoid some cryptanalysis techniques that could crack our encryption if we encrypted multiple plaintexts with the same key. Second, the string we started with was 29 characters long, but the cipherTextBytes is 32 bytes long. That’s because AES always encrypts 16 byte blocks, so the encrypt operation padded our original text before encrypting it.
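
(Just to make that arithmetic concrete, here’s a little helper of my own, not part of the API, that predicts the ciphertext length. The padding scheme used here always adds at least one byte and rounds the total up to a whole number of 16 byte blocks.)

function cipherTextLength(plainTextLength) {
    // At least one padding byte is always added, then round up to a full 16 byte block
    return (Math.floor(plainTextLength / 16) + 1) * 16;
}

cipherTextLength(29);  // 32 - matches the result above
cipherTextLength(32);  // 48 - an exact multiple of 16 still gains a whole block of padding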

Decrypting is nearly identical:

var decryptPromise = window.crypto.subtle.decrypt(
    {name: "AES-CBC", iv: iv}, // Same IV as for encryption
    aesKey,                    // The key to use
    cipherTextBytes            // Data to decrypt
);
var decryptedBytes;
decryptPromise.then(function(result) {decryptedBytes = new Uint8Array(result);});
decryptPromise.catch(function(err) {alert("Problem decrypting: " + err.message); });

After running this, I see the contents of decryptedBytes is:

[84, 104, 105, 115, 32, 105, 115, 32, 118, 101, 114, 121, 32, 115, 101, 110,
115, 105, 116, 105, 118, 101, 32, 115, 116, 117, 102, 102, 46]

We have to convert it back to a string to read it:

var decryptedString = "";
for (var i=0; i<decryptedBytes.byteLength; i++) {
    decryptedString += String.fromCharCode(decryptedBytes[i]);
}

And now when I look at that string, I see:

"This is very sensitive stuff."

Which is what we started with. The decrypt operation removed the padding that the encrypt operation had added, so we’re back to 29 characters.

This code works, but it’s not very useful. You’ve got to put the plainText into the program as a literal and the cipherText isn’t something you can easily display and save. Much worse is that the key and initialization vector aren’t saved, so you can only decrypt things if you haven’t closed the browser window. If you close it and reopen it you’ll get a new key and new initialization vector, which are useless for decrypting old ciphertexts.

My next blog entry will deal with these problems, creating a web page and actually useful JavaScript code for encrypting and decrypting files.

June 19, 2014

Exploring the new Web Cryptography API

Filed under: Uncategorized — Charles Engelke @ 12:17 pm

I’m very interested in doing cryptography in the browser for things like end-to-end sealing of data and digital signatures. It’s been possible to do some of these things, but not practical. The W3C’s new Web Cryptography API should change that. I’ve been following its progress with interest, and just discovered that it has been partially implemented in Internet Explorer 11 and Google Chrome 35 (behind an experimental flag). So I’ve started fiddling with it, and I’m going to put notes here in my blog using the webcrypto tag.

Before I could really do anything with the API, I needed to get familiar with a couple of new JavaScript features: Typed Arrays and Promises. You can follow those links for more background, but here’s what I needed to understand:

  • Typed arrays are similar to arrays in more traditional programming languages like C, where every element of the array must be of the same type. It seems they’re always implemented as contiguous blocks of memory, making them much faster for low-level operations than regular JavaScript arrays. The only kind of typed array we seem to need for cryptography is Uint8Array, arrays of unsigned 8-bit integers. That is, arrays of bytes.
  • Promises are JavaScript objects to help work with asynchronous operations. Instead of giving a callback function when you request an asynchronous operation, you’d just have the operation return a Promise. You invoke the Promise’s then method to provide a function to call when the operation is done. If the operation is finished before you call then, that’s fine. The result will be held until then is eventually invoked. You can pass two functions to then if you’d like: the first is called with the result of the asynchronous operation if it succeeds, and the second is called with the error if it fails. Or you can handle errors by passing the error handler to the Promise’s catch method instead. The result returned by calling then is another Promise, making it easy to chain Promises to perform asynchronous operations serially.


The statement var b = new Uint8Array(1024); creates a data structure that can hold one kilobyte. You can work with each byte individually, as in b[25] = 72; b[26] = b[25] + 1; and so on.
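
A couple of other typed array details turn out to be handy with this API (a quick sketch): every typed array is a view over an underlying ArrayBuffer, and you can make a new view over an existing buffer.

var b = new Uint8Array(1024);      // 1024 bytes, all initialized to 0
var buffer = b.buffer;             // the underlying ArrayBuffer
var view = new Uint8Array(buffer); // another byte view over the same memory
view.byteLength;                   // 1024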

Suppose the asynchronous operation doSomething returns a promise. Then you can do the following:

var p = doSomething(param1, param2);
var p1 = p.then(handleResult, handleError);
p1.then(doSomethingNext, handleNextError);

A popular way to write this seems to be:

doSomething(param1, param2
).then(handleResult, handleError
).then(doSomethingNext, handleNextError);

Although I’m considering writing it like this (at least until I know a reason not to):

doSomething(param1, param2).
then(handleResult, handleError).
then(doSomethingNext, handleNextError);
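
For a more concrete taste of chaining, here’s a sketch using two of the crypto calls I’ll cover in the next posts: generate a key, then export it. Returning a promise from the first then is what makes the second then wait for the export to finish.

window.crypto.subtle.generateKey(
    {name: "AES-CBC", length: 128}, true, ["encrypt", "decrypt"]).
then(function(key) {
    return window.crypto.subtle.exportKey("raw", key);  // returns another promise
}).
then(function(rawKey) {
    console.log(new Uint8Array(rawKey));  // the key's 16 bytes
}).
catch(function(err) {
    alert("Something went wrong: " + err.message);
});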

I think that’s all the special background needed to use the API. In my next post I’ll work up to encrypting and decrypting files using AES in CBC mode symmetric encryption.

May 13, 2014

Amazon Workspaces Thoughts

Filed under: Uncategorized — Charles Engelke @ 12:51 pm

I’ve been using Amazon Workspaces since January 2014 and I’m both impressed and disappointed in it. Impressed, because it provides a Windows desktop as a service that is almost indistinguishable to the user from a real local desktop. Disappointed because the only clients Amazon has so far don’t address the most compelling use case I see for the service.

Setting up Workspaces is very easy. Once you have an Amazon Web Services account, log in to the console and select the Workspaces tab. Click the Launch Workspaces button and a wizard guides you through the steps. Basically, you just have to enter information about each user who needs a desktop and click Create Users. The next step is to select one of the pre-configured desktop configurations offered by Amazon. There are four of them right now: Standard (1 CPU, 3.75GB RAM, 50GB disk) and Performance (double what Standard has), and Plus versions of each (add Microsoft Office and Trend Micro Security Services). Then click Launch Workspaces. That’s it.

AWS creates the desktops and sends email to the new users with instructions on installing the client and connecting to their new desktop. The clients are extremely easy to install and use. And the user experience when connected is excellent, much better than the standard Windows remote desktop. Mouse and keyboard responses are instantaneous, and the display automatically adjusts to your client display, and supports dual monitors flawlessly.

I think that the prices are pretty good: $35/month for the Standard and $60/month for the Performance configurations. The Plus versions cost $15/month more. I know you can buy cheap Windows machines, even with Office, for very little money, but if you’re a business you’ll have to manage and support them. Amazon does that for you. Amazon Workspaces is a turnkey, no-worry solution that I find attractive for business use.

Unfortunately, the only available clients are for Windows and Mac desktops and Android and Apple phones and tablets. I find the phone and tablet versions to be nearly useless. Yes, they work, and it’s an impressive achievement, but using desktop apps with just a small, touch interface on a mobile device works poorly. The only actual use I see for this is emergency access, where the pain of using it is offset by the need to get something done on a Windows desktop right away, with no physical desktops directly available.

The Windows client seems pointless. If I have a business that provides my employees with a Windows desktop already, how is a Workspaces desktop useful? I’ll mention some use cases I see for this in a bit, but I find this to be mostly useless.

How about the Mac client? Again, if your business provides Mac desktops, you already have the cost and complexity that the Workspaces desktop is supposed to alleviate. There are a few more use cases for this than if you have a Windows desktop, but not many.

What would be a useful client? One for Chromebooks and Chromeboxes, more than any other. I’ve used them for several years now, and they are ideal in a corporate environment. Any single device can be used by any employee at any time; just log in and all your own data and configuration is just there, and it’s there securely. There is literally zero administration required beyond providing Internet access, and they are inexpensive. Best of all, if one gets broken just give the user a new one. No setup is required at all. And if one gets lost or stolen, don’t worry. The cached local data is stored strongly encrypted with no decryption keys on the device itself. The only downside to Chrome devices is the occasional need for users to run Windows programs, and Amazon Workspaces would be the perfect way to remedy that.

Why isn’t Amazon supporting ChromeOS devices? Chromebooks make up three of the top five best-selling laptops on Amazon, and a Chromebox is the number one selling desktop computer there. It might be difficult to make a ChromeOS client, but (thanks to technologies like WebRTC) I’m sure it’s possible. And Amazon Workspaces has been live for half a year now, which seems like plenty of time. Last November, at the AWS  re:Invent conference, Amazon even stated that they would support Chrome (and Linux, and web browsers in general).

Maybe Amazon has something against Google? They don’t support Amazon Prime Video on Android devices or via Chromecast (their number one selling item in Electronics), even though it’s on their own Android-derived devices. Whatever the reason, Amazon needs to stop dragging its feet and support ChromeOS, or they’re going to leave out the most promising potential market for Workspaces.

Oh, well, with no ChromeOS support, what are the use cases for Amazon Workspaces? I can see a few:

  • BYOD. Companies can have their employees bring their own devices (pretty much all of which are Windows or Mac machines) and have them do all their work on an Amazon Workspaces virtual desktop. No company data is stored on the employee’s personal machine, and any malware doesn’t propagate to the company’s (virtual) PCs.
  • Demonstration machines. Instead of installing, configuring, and maintaining demo systems with demo data on each customer-facing employee’s PC, just set it up in Amazon’s environment and let users share it as needed.
  • Travelers. The risk of data loss and theft is greatest for traveling users. Having that data in the cloud instead of on the device solves that problem.
  • High Internet bandwidth needs. If you want to run an application requiring lots of Internet bandwidth, you can either make sure every user has a big pipe available, or put the application in the cloud. The Amazon Workspaces desktops have fantastic Internet bandwidth, and your client doesn’t need much on its own to connect to them.
  • Rare Windows use. Users with Mac or Linux machines might need to run Windows once in a while. You can always set up virtual machines, but Workspaces is easier.

Windows desktop as a service is a great idea. Lots of companies need to give their users Windows desktops, and they are a pain to manage and support. When you’ve only got a handful or two of them that hassle isn’t very visible, and just seems like normal corporate overhead. But when you’ve got dozens, hundreds, or thousands of them it is a major undertaking and cost. At the first AWS re:Invent conference in 2012 I asked Amazon and many of its partners about such a service, and generally got the brush-off. Citrix in particular said that we should work with its partners to create and run a Citrix-based infrastructure on the Amazon cloud if we needed that. One year later, Amazon announced Workspaces, and I think Citrix lost a great opportunity to lead in this new area.

But Workspaces is not yet a compelling product because it doesn’t remove that hassle from you at all. You still need to manage Windows or Mac PCs in order to get to that cloud-based desktop. There are other, smaller issues with Workspaces desktops, including performance that is less than I’d expect given the claimed specs, but this is the big one. Chromebooks and Chromeboxes are the first thin client, zero management PCs I know of that provide a useful platform by themselves. Adding access to a cloud-based Windows desktop would be a killer product. Come on, Amazon, we’re waiting.

November 9, 2013

AWS Linux AMI Administrative Accounts

Filed under: Uncategorized — Charles Engelke @ 11:36 am

It’s been nearly a year since I posted here. I should get back in the habit. Here’s a useful bit of information.

I want to create new EC2 Linux instances with separate administrative accounts for one or more specific people, for example: john, mary, rob, and sue. I don’t want to use a single shared ec2-user account or any shared SSH key pairs. So, for each of the usernames, I do the following (using “john” in the examples below):

1. Put the public keys of each key pair in a private S3 bucket at a known place, for example my_bucket/publickeys/

2. In my CloudFormation template, I specify an IAM role and policy for the new instance that has permission to read objects in that bucket with the common prefix:

"Action": ["s3:GetObject"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::my_bucket/publickeys/*"]

3. Add the following lines to the UserData start script:

adduser john
echo john ALL = NOPASSWD:ALL > /etc/sudoers.d/john

4. And add a new entry to the “files” section of the CloudFormation template:

"/home/john/.ssh/authorized_keys": {
"source": "",
"mode": "000600",
"owner": "john",
"group": "john"

Step 1 puts the user’s public key in a bucket for later retrieval by the instance. Step 2 gives the new instance permission to fetch those public keys. Step 3 creates the account without a password for that user and gives the account the ability to use sudo without a password, just like the ec2-user account has. And Step 4 fetches the public key and puts it in the right place for ssh to find it and allow the user to log in with it.

I launch the instance with no specified key pair name, and now any of the desired users can ssh in to it with their own separate account and key pair, and there are no shared credentials. The ec2-user account still exists just in case there’s any need for it to own things, but you can’t log in to it.

December 28, 2012

Provisioning a Server with CloudFormation

Filed under: Uncategorized — Charles Engelke @ 8:47 pm

In my first post on AWS CloudFormation I talked about how to create a machine instance with specific properties. That’s very useful. But what I really like about CloudFormation is how it lets me declaratively provision my new server with necessary software, content of my own, and even running services. I’m going to cover that in this post. But fair warning: there’s a trick needed to make it work. I don’t feel that it’s clearly documented by Amazon, and it took me a while to figure it out. I’ll cover that near the end of this post.

I said that CloudFormation Resources contain Type and Properties keys. But they can optionally have another key: Metadata. Any metadata object defined here can be retrieved using the CloudFormation API. A new instance can retrieve the metadata, and, if it’s the right kind of object, provision the instance according to that specification. The “right kind of metadata” object for this is an AWS::CloudFormation::Init resource. I think you can declare that as another resource and reference it by name in the metadata, but for now we will just put it directly in the metadata. We defined our new server resource last time as:

"NewServer": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
        "ImageId": "ami-1624987f",
        "InstanceType": "t1.micro",
        "KeyName": "cloudformation"

Now we can add the needed Metadata property:

"NewServer": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
        "ImageId": "ami-1624987f",
        "InstanceType": "t1.micro",
        "KeyName": "cloudformation"
    "Metadata": {
        "AWS::CloudFormation::Init": {
            provisioning stuff goes here

What kind of things can you specify in the configuration? The full documentation is here, but let’s complete an example. We will provision a web server with static content that’s stored in an S3 object. That’s pretty simple provisioning: install the Apache web server using the yum package manager, fetch my zipped up content from S3 and expand it in the right place, and start the httpd service. Here’s an AWS::CloudFormation::Init object that will do all that:

"AWS::CloudFormation::Init": {
    "config": {
        "packages": {
            "yum": {
                "httpd": []
        "sources": {
            "/var/www/html": ""
        "services": {
            "sysvinit": {
                "httpd": {
                    "enabled": "true",
                    "ensureRunning": "true"

It’s pretty obvious what most of this does. The packages key lets you specify a variety of package managers, and which packages each one should install. We’re using the yum manager to install httpd, the Apache web server. The empty list as the value of the httpd key is how you specify that you want the latest available version to be installed. You can also use the apt package manager, or Python’s easy_install or Ruby’s rubygems package managers here. The sources key gives a URL (or local file name) for a zip or tgz file containing content to fetch and install. The key (/var/www/html here) is the directory to expand the fetched file to. Finally, the services key lists the services to run on boot. The ensureRunning key value of true specifies that the service should start on every boot. The only tricky part of the services key is that it has one value, always called sysvinit, and that key has the actual services as its children. Putting this all together gives the following template:

    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create and provision a web server",
    "Resources": {
        "NewServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-1624987f",
                "InstanceType": "t1.micro",
                "KeyName": "cloudformation"
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "config": {
                        "packages": {
                            "yum": {
                                "httpd": []
                        "sources": {
                            "/var/www/html": ""
                        "services": {
                            "sysvinit": {
                                "httpd": {
                                    "enabled": "true",
                                    "ensureRunning": "true"

I’ve made the zip file at that URL public, so you can copy this template and try to launch it yourself. Remember, you need to have created a key pair named cloudformation first. Did you try it? Did you notice that all this new stuff had no effect at all? The httpd package wasn’t installed, there is nothing at /var/www/html, and there’s no httpd service running. I had the hardest time figuring out what was wrong, but it turned out to be simple. The Amazon Linux AMI doesn’t do anything with this metadata automatically. You have to run a command as root to have it provision the instance according to the metadata:

/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer

The cfn-init utility is the program that understands the metadata and performs the steps it specifies, and the Linux AMI doesn’t run it automatically. If you log on to your new instance and run this command, though, it will do it all for you. You will have to replace WebTest in the command with whatever name you give the stack when you create it. If you’re running in a different region than us-east-1, change that part of the command, too. The -r NewServer option gives the name of the resource containing the metadata you want to use; we called that NewServer in the template above.

That’s nice, but not yet what we wanted. We want CloudFormation to handle the provisioning itself. To do that we have to get the new instance to run the cfn-init command for us when it first boots. And that’s what the UserData property of an instance lets us do. We can just put a simple shell script as the value of the UserData key to make that happen:

/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer

Well, as you might guess, it’s not quite that simple. The value of the UserData key has to be a base 64 encoded string of this shell script. There’s a built-in CloudFormation function to base 64 encode a string, and we will use that:

"UserData": {
    "Fn::Base64": "#!/bin/sh\n/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer\n"
}

Note the \n characters to terminate each line. Put this in as a property of the server, giving the complete template:

    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create and provision a web server",
    "Resources": {
        "NewServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-1624987f",
                "InstanceType": "t1.micro",
                "KeyName": "cloudformation",
                "UserData": {
                    "Fn::Base64": "#!/bin/sh\n/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer\n"
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "config": {
                        "packages": {
                            "yum": {
                                "httpd": []
                        "sources": {
                            "/var/www/html": ""
                        "services": {
                            "sysvinit": {
                                "httpd": {
                                    "enabled": "true",
                                    "ensureRunning": "true"

If you create a stack called WebTest with this template, you should get a new instance already running the Apache web server, with a couple of pages of content installed and already available. Give it a try. For me, at least, this was a success!

There are still a lot of rough edges. What if you don’t want to call your new stack WebTest? What if you want to run it in other regions? How about dealing with protected resources? Creating resources that interact with each other? Getting better reports of how to access resources that are created? Letting the user specify parameters to control the stack? I’ll cover some of that in future posts.

December 27, 2012

Windows Printing to an Airport Extreme Connected Printer

Filed under: Uncategorized — Charles Engelke @ 11:34 am

[Update: Got Windows 8 or RT? Mark Allibone has published an update on how to do this at his blog!]

[Update: @colinc on Twitter says “Hi, a comment on your post re printing on airport. Some windows vs use port 9100,Apple now uses 9101, it may need updating”]

Want to print from your Windows 7 PC to a USB printer connected to Apple’s Airport Extreme? Well, you can do what Apple says:

  1. Install Apple’s Bonjour for Windows
  2. Run the Bonjour Printing Wizard, answering its questions one by one
  3. Print!

And that works. At least it did for me. For some definition of “works”:

  • It showed the correct printer, but selected a driver for a different printer (that didn’t work at all)
  • It was easy to switch to the right driver, which worked
  • But it would only print black-and-white to my color laser printer
  • And would only print one job. Subsequent print jobs from the same or any other PC or Mac did nothing until you turned the printer off and back on.

Or, you could do what ended up working for me. The key points of my solution are:

  • Do not install any Apple software on your Windows PC
  • Do not pay any attention to anything Apple says regarding printing from your Windows PC

Instead, just use the regular Windows 7 Install Printer wizard. There are a lot of steps, but they’re easy.

  1. Select Devices and Printers from the Windows Start menu
  2. Click Add a Printer
  3. Select the Add a local printer option (yes, it’s not local, but that’s Microsoft for you)
  4. Click Create a new port, and select Standard TCP/IP Port from the drop-down list, and click Next
  5. Fill in the Hostname or IP address with the address of your Airport Extreme router. That’s probably, but you can check it by running the ipconfig command from a command prompt and looking for the Wireless LAN’s Default Gateway address. Leave Port name at whatever it fills in, uncheck Query the printer and automatically select the driver to use, and click Next.
  6. The wizard will say it’s Detecting the TCP/IP port. It should find the device. If not, you probably entered the wrong IP address. Check it and try again. If it still fails to detect it, don’t worry about it and continue anyway.
  7. Select Network Print Server (1 Port – USB) from the Standard Device Type list. The default Generic Network Card would probably work okay, but I didn’t try it. Click Next.
  8. Select your printer’s Manufacturer from the list, then select your specific printer from the Printers list, then click Next. If your printer isn’t there, you’ll have to download a driver and use the Have Disk… option.
  9. Fill in a Printer name, or leave the name it fills in for you alone. Click Next.
  10. Decide whether to share the printer or not. Since other devices on your network can print directly to the Airport Extreme, why bother to share it? I selected Do not share this printer and clicked Next.
  11. Decide whether to Set as the default printer, and try to Print a test page, then click Finish.

This worked for me on two different Windows 7 PCs. They now print in color, and jobs submitted after they print also print.

December 19, 2012

Learning about AWS CloudFormation

Filed under: Uncategorized — Charles Engelke @ 5:05 pm

I’ve been using Amazon Web Services as the infrastructure for some products for a while now. A big advantage of running in the cloud is being able to automate creating, updating, and destroying servers. So far, we’ve been doing this by writing scripts. Now it’s time to move up to the next level of sophistication and use their CloudFormation service instead. That not only supports automated launching and provisioning of new servers, it supports automatically creating a whole bunch of interconnected services all at once. And it’s declarative, specifying where you want to end up, instead of procedural, specifying how to get there.

CloudFormation looks pretty simple at first, but I’ve found out that it really isn’t. You need to handle a lot of details, and the documentation isn’t always clear (to me), nor even always complete (as far as I can tell). And there aren’t enough complete examples. So, as I learn about it I’m going to blog about what I discover.

I’ll start with the simplest case I need handled: launching and provisioning a single server. I have to write a template specifying what I want CloudFormation to do, and then use CloudFormation to create the stack defined by that template.

CloudFormation templates are documented in the Working With Templates section of the AWS CloudFormation User Guide. Each template is a JSON document representing an object (basically, key/value pairs). The general format of that JSON object is:

   "key1": "value1",
   "key2": "value2",
   "keyN": "valueN"

The order of the key/value pairs is irrelevant. Another JSON document with the same pairs in a different order would be considered to represent the same object. Note that the keys are quoted strings, separated from the values with a colon. Key/value pairs are separated (not terminated) with commas. And values can be quoted strings, as shown here, or numbers, or JSON objects themselves. They can also be arrays, which are comma-separated lists of values enclosed in square brackets. Don’t worry too much about the details; we’ll see all of this in the examples.

CloudFormation templates are JSON objects with some of the following keys:

  • AWSTemplateFormatVersion
  • Description
  • Parameters
  • Mappings
  • Resources
  • Outputs

All of these keys are optional except for Resources. Resources are the things that CloudFormation is going to create for you, so if there are none, there’s not much point to having a template anyway.

Although the AWSTemplateFormatVersion is optional, and there’s only ever been one version declared so far, I’m always going to include it. The only legal value for it is “2010-09-09”. The Description is also optional, but again, I’ll always include it to help me keep track of what I’m trying to do. So my template is going to start taking shape:

    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create a basic Linux machine",
    "Resources": {
        something needs to go here!

I need to fill in the Resources section with at least one key/value pair. The key is going to be my logical name for the resource. It can be just about anything (I haven’t pushed the limits though), because CloudFormation doesn’t care. I’ll just call this NewServer. The value is always an object with Type and Properties keys. The possible types are listed in the Template Reference section of the User Guide. To create an EC2 instance, use a Type of AWS::EC2::Instance.

The Properties object contains different possible keys for different resource types. The possible keys for AWS::EC2::Instance are listed in that section of the Template Reference in the User Guide. Only two keys are required: ImageId, which is the ID of the AMI to use for the new instance, and InstanceType, which tells what kind of instance to launch. Actually, in my experience I’ve found I can omit the InstanceType and it defaults to m1.small, but that may be a bug, not a real feature. The documentation says InstanceType is required, so I always include it.

I want to launch a standard 64-bit, EBS-Backed Amazon Linux instance in the US-East-1 region. According to the Amazon Linux AMI web page, the ImageId is ami-1624987f. I’ll save money by using a t1.micro instance. Putting all this together, I get the following template:

    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create a basic Linux machine",
    "Resources": {
"NewServer": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-1624987f",
"InstanceType": "t1.micro"
} }

Now to create this stack. Log in to the AWS Management Console and select CloudFormation. (If you’ve never used it before, you’ll be walked through a few sign-up steps to verify your identity. A few minutes later, you’ll be able to use the console.) It currently looks like this:


I made sure I was in the right region (N.Virginia showing in the upper right corner), clicked Create New Stack, then filled in the blanks. I put my template in a file called cf.json, and selected it for upload:


Then I clicked Continue. I had the option to enter some tags, which would be applied to the stack and to every resource it created. I just clicked Continue. Finally, I had a confirmation box:


I clicked Continue, and my stack started building. I closed the acknowledgment window and looked at the console. The upper part showed all my stacks. There was only the one I just created. When a stack is selected, the bottom part shows its properties. I selected the Events tab for the screen capture below:


Eventually, CloudFormation finishes, either successfully or with an error. In that latter case, it will usually roll back all the steps it took automatically. Otherwise you can click Delete Stack to get rid of everything it created.

In this case, everything worked. The Resources tab lists everything that was created. That’s just the NewServer resource, which is an AWS::EC2::Instance. It also shows me the ID of that instance. If I want to log in to that server I’ll have to look up its address in the EC2 section of the console. However, I’m not going to have much luck with that because I did not specify a key pair when creating the machine, so it’s impossible for anyone to connect via ssh.


KeyName was an optional property I could have specified, but didn’t. The reason it’s optional is that you very well might want to create an instance nobody could log in to. That’s not true in our case, so I fixed it. First, I cleaned up the stack I created that I can’t use. I selected it in the console and clicked Delete Stack. The stack and every resource it created was destroyed. Next, I went back to the template and specified a KeyName value. It had to already exist as a Key Pair in the US-East-1 region. I happened to have one there named cloudformation, so I used it. The updated template:

    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create a basic Linux machine",
    "Resources": {
"NewServer": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId": "ami-1624987f",
"InstanceType": "t1.micro",
"KeyName": "cloudformation"
} }

Repeating the steps above I got a running Linux machine. This time, that machine was associated with the cloudformation key pair, so I could log in via ssh. Success!

Instead of the console, I could have used the cfn-create-stack command line tool. Or I could have written a program that invoked the REST API for CloudFormation. Each method looks about the same to AWS, and gets the same result.

But what’s the point? I could have created this instance directly with the EC2 console, or command line tools, or REST API. And it would have been at least as easy. Easier, in fact, in my opinion. That’s because I haven’t tapped into the real powers of CloudFormation yet:

  • Provision created servers with specified packages, files, software, etc.
  • Create (and manage) multiple resources that work together

I’ll get started on those more useful, and more interesting, things in my next post. But before I go, I’d better remember to go back to the CloudFormation console and delete the stack I created, so I don’t keep paying for that server.

September 23, 2012

Peter Bell’s talk on Next Generation Web Apps using Backbone.js at #StrangeLoop

Filed under: Uncategorized — Charles Engelke @ 5:47 pm

I don’t expect to have as many notes here as at my last session, because I’ll be trying to code the examples as we go. Also, the conference network is completely worthless; I’m using a Verizon MiFi, but my PC keeps dropping the connection (probably because the Apple device doesn’t like talking to a Samsung one).

We’re starting with an overview of all the well-known JavaScript MVC-ish frameworks. There are a lot of them. But at this point, I want to learn about Backbone, not frameworks in general. And we eventually get there.

We start with routers, which tell which JavaScript function should be invoked for various URLs. For example, the view for “about” would be a specific function that would be invoked when the URL ended in “#about”. We move on to views and models.

After some general overview, we start working on an example, the ToDoMVC app from Addy Osmani. At which point we start looking at tiny text in the presenter’s editor, as he tries to find his way around the example.

And, we’re at the break, and I’m leaving. This talk has been disjointed and confusing; I can browse through the example code by myself. Maybe I can sneak into the second half of another session.

Neal Ford’s Presentation Patterns talk at #StrangeLoop

Filed under: Uncategorized — Charles Engelke @ 4:00 pm

Today’s the workshops day at Strange Loop 2012, and I’m starting out with Neal Ford’s talk. I give a lot of presentations, and can always use help making them better. We’re getting a late start because the other first day activity – the Emerging Languages Workshop – ran a bit late in the morning, so the optional lunch for us workshoppers ran a bit late, too.

While we’re waiting, he showed us a PowerPoint file he uses as a “projector sanity check”, showing how it handles each color, clipping edges, different contrast ratios and a check for dead pixels. That’s going to be useful.

He focuses on some antipatterns that we should avoid.

Antipattern: Bullet-riddled corpse

Put up a bullet list, and everybody will read it right away. And then you’ll have to cover the material again and get them to pay attention to all things you’re saying that weren’t in the bullet points.

Antipattern: Floodmarks

These are like watermarks, but there are just so many of them. Trademarks, icons, and so on drowning out your presentation. This often happens when a conference requires you to use their templates. Ford says to fight this. Which he did by submitting a slide deck that complied with their template, but then ran his real deck off his own laptop.  And then won an award for the best presentation at the conference.

Floodmarks are okay on the first and last slides, but all the others should be blank canvases. And don’t put your company name on every slide, nor the copyright notice except for the first slide. They’re just “noise”.

Infodeck versus Presentation

Infodecks and presentations look alike, but they’re totally different. An infodeck is static, while a presentation uses time through transition (moving between slides) and animation (movement within a slide). You standing in front of an infodeck isn’t adding value. An infodeck is like an essay, but presented in slides instead of paragraphs.

Pattern: Know your audience

Anticipate the questions they’ll have and put the answers into your presentation.

Pattern: Have a narrative arc

Just like telling a story: introduction and exposition, complication, climax, resolution. There may be several “subplots”, each with its own narrative arc, in your overall story.

He showed a tiny “slide sorter” view where you couldn’t see the slides, but he marked which were showing the problems and which were showing solutions, illustrating the narrative arc.

Pattern: Brain breaks

Every ten minutes or so people’s attention tends to lag, so you need a break to bring them back. Humor, violence, or sex all do that. Don’t use sex or violence in a technical talk! So put a bit of humor in every ten minutes or so.

Pattern: Unifying visual theme

Tie everything together implicitly this way.

Antipattern: Alienating artifact

Don’t try to get attention in ways that will alienate part of your audience. Sex and bigotry are good ways to do that.

Pattern: Fourthought

A pun on forethought. There are four parts: ideate (he uses mind maps), capture (in some concrete form), organize (get them into an outline, either in your presentation tool or – he uses – externally), and design (render into your presentation tool).

Pattern: Lightning talk

A short talk (usually timeboxed to five minutes or less), sometimes with a fixed format of slides. He’s going to have us do that as an exercise. Make sure it has a narrative arc, feel free to use other patterns.

Pattern: Intermezzi

A bridge between two pieces.

Antipattern: Cookie cutter

Some (most) ideas fit on more than one slide, but because slides are the “atoms” of your tool, you tend to try to fit an idea onto a single slide. But more slides cost nothing, so get over that.

Note that the “infodeck” concept wants you to use fewer slides, but for a “presentation” more slides have no downside.

Note: auto-size text is evil. Don’t let the tool encourage you to cram more things on one slide.

Antipattern: Hard transitions

One wall of text gets immediately replaced with another wall of text. The alternative?

Pattern: Soft transitions

Have a fixed element, with varying other elements that come and go. Dissolves are one way to do that. But using no transitions forces a choppy narrative, while soft transitions allow you to control the flow. He also calls this a “charred trail”. Title comes up alone in the middle of the screen, then moves to the top with points coming below it, dissolving as each new one comes in. He calls this exuberant title top plus charred trail. They can print well, too.

Aside: every few minutes he brings in slides from his Halloween parties and contests. Brain breaks in practice, and an example of…

Pattern: Vacation photos

Use full-screen, high-quality images and few or no words, so long as they are relevant to your theme.

Antipattern: Slideument

That is, a slide plus a document: one deck that tries to be both an infodeck and presentation slides. There are patterns that can make this less bad, but it’s still a bad idea.

Pattern: Context keeper

Example: a visual element for that context, that’s included in each slide that talks about that context. His example was “litmus tests” that he showed with actual test strips, which he moved around the corners of different parts of each slide talking about his metaphorical litmus tests.

Another example is backtracking. Have a slide introducing something, then a different slide illustrating it, then back to the first, but expanded to have more of the idea.

Antipattern: Demonstrations versus Presentations, and Live Demo versus Dead Demo

Most of the time live coding is primarily ego gratification for the presenter. Not always, of course. Tutorials and product demos use live coding well. But in a technical talk, doing all that typing is just noise.

So, tutorials good, technical deep dive bad. Product demos good, exuberant tool interaction bad. Hands-on classes good, time consuming tasks bad.

There are ways to get the benefits of live coding without doing it.

Pattern: Traveling highlights

Show code as a screen shot (with syntax highlighting) not as text. Then highlight the part you want to show, one after another. You get the kind of motion live coding gives you, without the distracting mechanics of it. Can use a colored background, or reduce the contrast of the rest of the code.

Use the screen shot even if you can get syntax highlighting and coloring in your own tool, because you don’t want the temptation to edit anything inside the presentation tool. You’ll make a mistake and not catch it.

Another option is to capture a movie of the dynamic stuff you would otherwise do live, and show that as a video in your presentation. He calls this lipsync. But don’t use this to fake anything; let the audience know it’s recorded.

Pattern: Invisibility

If you want everybody to look at you for a minute, so you can make a point, use a black slide.

Antipattern: Stale content

Leave a slide up after you start talking about something else. If you don’t want another slide for the next point, use the invisibility pattern.

There was a lot more content, too much for me to note here. And we did an exercise where we created lightning talks that was great. I really recommend his ideas. I haven’t yet read his new book, Presentation Patterns: Techniques for Crafting Better Presentations, but I feel comfortable recommending it. I got a copy as part of the talk, and will be cracking it open tonight.

September 17, 2012

AES Encryption with OpenSSL command line

Filed under: Uncategorized — Charles Engelke @ 5:03 pm

I know I’m going to forget this command line, so I’m documenting it here.

To use AES with a 128 bit key in CBC (cipher block chaining) mode to encrypt the file plaintext with key key and initialization vector iv, saving the result in the file ciphertext:

openssl aes-128-cbc -K key -iv iv -e -in plaintext -out ciphertext

To decrypt, change that -e to -d.

Warning: the values of the key and the iv must be typed in hex.

June 24, 2012

Switching to Mac?

Filed under: Uncategorized — Charles Engelke @ 1:58 pm

I’ve been a PC user for nearly 30 years, first with MS-DOS and later with Windows. I’ve been very happy with them, though ever since Apple switched the Mac operating system to be Unix-y I’ve thought I might prefer that. A stint with the iPhone soured me on Apple, though, and I thought little more about it.

Now my Incubator group has started developing mobile apps. We did Android prototypes first and are getting good responses. But we clearly need to support iOS, too, for any products we actually release. So we got a Mac Mini at work to develop with, and I decided to buy a cheap MacBook Air to get familiar with the environment. (Well, not that cheap… I ended up with the 13″ box with 8GB of memory and a 256GB drive instead of the entry level one.)

I’ve been using the Mac for almost a week now, and am really liking it. So, am I switching?

Maybe. There’s a lot to like about it, and little on the negative side.

The good:

  • Boy, this thing is fast. Though, to be fair, a similar Windows laptop with plenty of memory and SSD probably would be just about the same. I think my experience this week is the death knell for spinning drives on any laptop I own from now on.
  • Very portable. The 13″ box is a lot bigger than I expected. Still, it’s plenty small enough to take everywhere I travel.
  • Great for software development in my preferred target environments (Linux, web, and mobile). Ruby and even Perl don’t support Windows nearly as well. It’s even better (so far) for Android development! I was able to connect my Samsung Galaxy Nexus phone and run it in debugging mode on the first try; that has yet to work on Windows due to the Samsung USB drivers.
  • Awesome trackpad. Everybody says so, and they’re right.

The bad:

  • Lousy keyboard. Yes, most Windows laptop keyboards are worse than this, but I always use ThinkPads, and their keyboards are much, much better than this.
  • Missing keys. I want Home, End, and Delete keys! And I’d like Page Up and Page Down, too. I’m slowly learning the various keyboard shortcuts, but those dedicated keys are very useful, especially for coding.
  • No TrackPoint pointing device. It’s not the most popular option, but it’s by far the best. Yes, even better than the trackpad. (Though having both would be awesome!)
  • No documentation. I knew there were virtual desktops available, but I had no idea they were called Spaces. And even when I figured that out, how was I supposed to know that you get them by hitting F3 and pointing to the upper right corner of the screen? And that the Apache web server configuration is in /private/etc? Thank God for Google, or I’d be lost.
  • Dongles. I’m going to need to buy some if I ever want to connect a monitor or a wired network. I don’t like that, even though I kind of understand it in an ultra-portable like this.
  • Text editor. I have tried a couple on the Mac, and haven’t found one I like much yet. They all seem to de-emphasize the keyboard, which I prefer to use, especially for selecting and moving text. And the one I like best so far (Sublime Text 2) has inadequate documentation.

The same:

  • The hardware is equally good on both sides (I’m comparing the MacBook Air to the ThinkPad X series here).
  • They both have good, though not great, battery life (I can go a whole business day on battery with either, so long as I let it sleep when not in direct use).
  • I think that the matte display on the ThinkPad is actually better, but the glossy one on the Air has more initial impact.
  • Almost all the software I use on Windows I’m now using on the Mac. It’s all no-cost in both places, too.

So, am I switching? It seems likely. I’m going to take the Air with me on a two-week trip and see how I get along without having Windows ready if I need it.

June 6, 2012

Grayed out

Filed under: Uncategorized — Charles Engelke @ 9:48 am

Why is light gray text becoming so popular on the web? More and more web pages look like copies from a machine that has run out of toner. If you want your text to be read, make it readable. The best web designers put readability ahead of appearance.

Just consider this snippet from


Isn’t it better with black text?


And don’t get me started on gray text on gray backgrounds, or trendy fonts that display on monitors with single pixel-wide strokes, or tiny font sizes in wide viewports.

Please, don’t make me open the browser’s debugger and edit your CSS to make your text readable. 

May 29, 2012

Fluentconf workshop: Backbone.js Basics and Beyond

Filed under: Uncategorized — Charles Engelke @ 7:29 pm

Unlike my first workshop today, my second workshop at FluentConf covers a subject completely new to me:  Backbone.js. I’ve heard a lot about it, but never even downloaded it. Looking forward to learning a lot.

“Backbone thinks of itself as being lightweight.” It isn’t opinionated like Ruby on Rails, so Backbone projects can do the same things in very different ways. She’s going to show her ideas of the best way, but our ideas may vary.

Backbone is not MVC, even though parts of it have the same names as in server-side MVC frameworks (Models and Views). Backbone adds Templates to those two, not controllers.

The speaker came to JavaScript through Rails. At the time that meant that Rails wrote her JavaScript; she didn’t have to. Now she feels that is kind of like using scaffolding – a shortcut that won’t carry you far enough. Next, she used jQuery extensively. That’s powerful, but can be messy and hard to test other than with something like Selenium. Phase 3 was page objects. Create a unit testable object that has the JavaScript for the page. That seems to describe how she uses Backbone.

Backbone gives you model mirroring and views that handle events (and can render the DOM). Models in Backbone are like MVC models and may mirror server-side ones (though not necessarily one-to-one). Server-side views correspond to Backbone Templates. Server-side controllers correspond to Backbone Views.

The talk covers various tasks you need to perform, and how to do them with Backbone, ending with how it all fits together. I wish that had come first. Maybe it’s me, but I need the overall context to be comfortable with the pieces. Basically, set it all up by creating an app object with an initialize method that you call when your document is ready. That can set up the model, fetch the data, and use a view to render it.
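
Here’s a minimal sketch of that wiring as I understood it; the model, view, and endpoint names are my own invented examples, not from the talk:

var Book = Backbone.Model.extend({
    urlRoot: '/api/books'    // hypothetical REST endpoint the model mirrors
});

var BookView = Backbone.View.extend({
    el: '#book',
    render: function () {
        // A real app would render a Template here instead of setting text directly
        this.$el.text(this.model.get('title'));
        return this;
    }
});

var app = {
    initialize: function () {
        var book = new Book({id: 1});
        var view = new BookView({model: book});
        book.fetch({success: function () { view.render(); }});
    }
};

$(function () { app.initialize(); });    // call initialize when the document is ready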

Testing? Pivotal uses Jasmine, and there’s a talk about it tomorrow at 1:45.
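
For what it’s worth, a Jasmine spec for something like the Book model sketched above would look roughly like this (again my own example, not from the talk):

describe('Book', function () {
    it('stores a title once one is set', function () {
        var book = new Book();
        expect(book.get('title')).toBeUndefined();
        book.set({title: 'Backbone Basics'});
        expect(book.get('title')).toEqual('Backbone Basics');
    });
});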

Backbone is really good at interacting with a RESTful API, living in harmony with other frameworks and styles of JavaScript, and handling unique applications (due to its flexibility). On the other hand, it doesn’t have UI widgets, and it’s not good for developers who aren’t already strong in JavaScript (because it doesn’t give enough direction to them).

The talk is over very early. And all in all, I’m disappointed. I go to a half day workshop expecting to come away ready to actually create something with my new knowledge, not just get a survey of the topic. I could have learned as much about Backbone in a 30 minute talk as in this workshop.

Fluentconf workshop: Breaking HTML5 Limits on Mobile JavaScript

Filed under: Uncategorized — Charles Engelke @ 3:28 pm
Tags: , , ,

O’Reilly’s Fluent Conference starts today with optional workshops. My morning selection is on JavaScript on mobile platforms, given by Maximiliano Firtman of ITMaster Professional Training. This post is just a stream-of-consciousness list of points I want to remember, rather than real notes for the talk.

In his introduction, he points to a resource on available APIs:

Mobile web development is different:

  • Slower networks
  • Different browsing (touch versus mouse, pinch to zoom, pop-up keyboard, etc.)
  • Different behavior (only current tab is running, file uploads and downloads)
  • Some browsers are proxy based (Kindle Fire, Opera Mini)
  • Too many browsers (more than 40), some too limited, some too innovative, mostly without documentation, mostly unnamed, most without debugging tools
  • Four big rendering engines, five big execution engines

Check for browser market shares. Share is much more evenly distributed among the top seven or so browsers.

Web views embed an HTML window in a native app. On iOS, web views have a different execution engine than the browser (2.5 times slower!). They often have differences in how they support HTML5 APIs.

Pseudo-browsers (his term) are native apps with a web view inside. With them you don’t get a new rendering engine or execution engine; you just get new behaviors added by the native shell the web view is wrapped in (Yahoo Axis, for example).

(Note to me: he’s using an IPEVO Point 2 View (P2V) camera to show a mobile phone on the screen.)

PhoneGap and similar tools create native apps using web technologies; the result is built with web tools but is a native app.

Remote debugging is available for some browsers with Remote Web Inspector. Adobe Shadow is a new debugging tool that’s free (at least, for now). Weinre can work with Chrome, making iPhone remotely debuggable. Looks pretty interesting.

The paper “Who Killed My Battery” from WWW2012 shows how different web sites consume power from your device’s battery. For example, 17% of the energy used to look at Amazon’s web site goes to parsing JavaScript that is never used.

The speaker has a neat development tool he calls Chevron for working inside the browser. It has an in-browser code editor, and can save on-line to a unique URL. It will display a QR code for that URL, so you can see what you’re developing on your mobile device as well as in the built-in browser window. Very nice.

There’s also a service that will run your public web page on a real device of your choice, and give you performance metrics on it.

You can build a real app (even offline) in the browser with HTML5, but it doesn’t look native on a mobile device. But (for Apple and maybe others) you can get it a lot closer with some meta tags:

<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<link rel="apple-touch-startup-image" href="launch.png">

A lot of the second half of this talk is more on HTML5 in general (when it works in mobile browsers, too) than specific mobile issues. Most of the audience is finding this very useful, but it’s not new to me. Unfortunately, it doesn’t seem that he’s going to get to the Device Interaction part of his demonstrations, which I would really like to see. I can always fiddle with them myself later, I guess. But he’s a good speaker and I’d like to hear him talk about them.

You can use the orientationchange event (onorientationchange property) to run code when the device moves between portrait and landscape views. You can also check for going on- and off-line with the online and offline events (though this is not generally reliable).
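
A quick sketch of both, just so I remember the shape of the API (the handler bodies are placeholders):

window.addEventListener('orientationchange', function () {
    // window.orientation is 0, 90, -90, or 180 degrees
    console.log('orientation changed to ' + window.orientation);
}, false);

window.addEventListener('online',  function () { console.log('back online'); }, false);
window.addEventListener('offline', function () { console.log('went offline'); }, false);
console.log('online right now? ' + navigator.onLine);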

Ah, he’s getting to Device Interaction! Geolocation first, which is neat but has been available for a while. But then a lot of really new capabilities, some of which only run on one or two browsers now. I need to start using Firefox on my Android phone.

A very useful talk and good kickoff to the conference for me.

March 10, 2012

Dehydration and popping ears

Filed under: Uncategorized — Charles Engelke @ 10:52 pm

A few days ago I was taking a tour of Corcovado National Park in Costa Rica and I noticed that my hearing was muffled. I don’t hear that well normally, but this felt like I was wearing earplugs. That sometimes happens when I fly, and I just yawn to make my ears pop and the problem goes away.

That didn’t work this time. My ears stayed muffled through the whole day out. But after the tour and the long boat ride back, they finally popped and my hearing came back to normal during the drive back to my hotel.

What happened? I was severely dehydrated during the tour (the tour operator did not bring nearly enough water for such a hot day) and I picked up several liters of bottled water on the drive home. Just as I was finishing the first liter, my ears popped and my hearing came back. I searched for information on this condition and found a lot of pages saying that heavy exercise can cause it and cooling down will make your ears pop again. But I had several hours sitting on the boat after the exercise and my ears did not pop until I got a lot of water in me. I’m certain that this was caused by dehydration.

I had something similar happen at work a few months ago, and I now think it was also due to dehydration. My doctor had me nearly eliminate caffeinated and carbonated drinks and I hadn’t yet got used to making up for it with a lot more water. Looking back, I think the dehydration affected my Eustachian tubes. Clearly, I have to pay more attention to getting enough to drink.

February 28, 2012

mod_perl Problems

Filed under: Uncategorized — Charles Engelke @ 2:04 pm

I’ve just spent days trying to get mod_perl to work with Perl 5.12 or later, and it’s finally there on both Windows and Linux. I may post more detailed notes, but before I forget, here’s an important note to me.

The file I needed for ActiveState Perl 5.12 and Apache 2.2 can be downloaded from Specifically, .

I found a bunch of other downloadable binary versions of this file, but none of them worked with my 32-bit Windows Apache and ActiveState Perl 5.12. This one did.

I haven’t found any that work with Perl 5.14 for Windows.

January 8, 2012

Chrome Web App Bookshelf – Part 7 of 7

Filed under: Uncategorized — Charles Engelke @ 1:36 pm

Note: this is part 7 (the final part) of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent, part 2 added basic functionality, part 3 finally called an Amazon web service, part 4 parsed the web service result, part 5 was useful enough to publish, and part 6 covered publishing it at my own website. This post finishes the series by publishing in the Chrome Web Store instead.

I’m almost done with getting my app out into the world now. I just have to put it in the Chrome Web Store. Once that’s done I intend to update it with new features from time to time, but probably won’t post about that in any detail. Instead, I’ve put this project on Github. If you’re interested, you can follow its development there.

Following my practice to date I haven’t bothered to read any of the documentation about the store. Instead I just looked for information on how to develop for it, starting with the Settings icon in the upper right corner of the page:

Settings icon in Chrome Web Store

When I clicked on the gear icon the drop-down menu showed a choice for Developer Dashboard, so I chose that. The resulting page looks like it’s going to guide me through the process pretty easily. There’s a link to “Start uploading your apps now!”. Seems promising…

Developer dashboard section to Upload your app

It sure looks easy. I’m not supposed to upload the CRX file, just a ZIP of the directory. I just posted such a file for my previous entry. I’m a bit worried because I put in an auto-update URL that doesn’t make sense for the app store, so I’m going to remove both the homepage and update URLs from the manifest before uploading to see what happens. I’m also going to increment the version number to reduce my own confusion.

When I uploaded the resulting ZIP file I got a page showing how the web store sees it, starting with the icon:

App summary with placeholder icon

Where’s my icon? A bit further down the page I’m offered the chance to upload another icon, which should be 96 pixels square. But instead I just uploaded my 128 by 128 icon again, and it was accepted and looks good:

Chrome store summary showing my icon

Going down the page, I’m asked for a detailed description, so I filled it in as follows:

Want to keep track of books that haven’t been published yet, so you can decide whether to buy them when they’re ready? This app allows you to add books by ISBN or Amazon’s ASIN (including Kindle books) and keep a list showing their scheduled release date and shipping status. When you’re ready to buy, just follow the link from the app.

This app uses Amazon’s Product Advertising API, so you will need an Amazon Web Services account to use it. Accounts are free to register, and this particular API incurs no charges.

Next, I’m asked for a screen shot and at least one “promotional image”. Huh. Okay, I’ll make them up. My promotional image is just a large version of the icon on a dark background. After I filled in the rest of the form as best I could, I saved the draft, returning to the dashboard:

Dashboard showing current status with publish link

Okay, let’s try it. I pressed Publish. And got a confirmation:

Publish confirmation image

Okay, let’s try it. And… well, I was kind of expecting this:

Pay $5 now

I need to pay this once to register. That’s not much, so I went ahead and paid it through Google. I then had to click Publish again, and the listing now shows an option for Unpublish. I guess I’m done. When I click the link showing for the item, it looks like I’ve got it in the store!

My app in the Chrome Web Store

And when I click to install it, it first shows me the permissions it requires:

Install confirmation

And it installed just fine. I’ve got two nearly identical apps showing now, the previously packaged one and the new one from the store. Chrome thinks they are different because they have different unique IDs. They don’t share storage either. I’ll leave the old app there for a while, but don’t intend to update it.

Will this make the app sync to each of my browsers? It hasn’t as of this writing, but I’ll give it some time. Enhancements to this app will be posted on my Github repository, and published in the Chrome Web Store for anyone that wants to keep following it.

[Update a few hours later: the extension did finally sync to my other computers! The data did not, which I expected. I can either do that with some other facility (perhaps AWS SimpleDB) or wait until Chrome adds an API for that, which I think is in the works.]

January 7, 2012

Chrome Web App Bookshelf – Part 6

Filed under: Uncategorized — Charles Engelke @ 3:52 pm

Note: this is part 6 of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent, part 2 added basic functionality, part 3 finally called an Amazon web service, part 4 parsed the web service result, and part 5 actually was somewhat useful. The series is almost done now. This post will cover packaging and privately publishing the app, and the next and final post will cover putting it in the Chrome Web Store.

There’s more functionality I want to add to the app eventually (updating the data on saved books, deleting books from the list, and even synchronizing the list of books between PCs) but the purpose of this series is to show how to create and publish Chrome web apps. My app is just barely functional enough now to publish, so I’m going to go ahead and do that.

Since my app contains software and images from others I’m going to have to add some acknowledgment of that fact. I want to be sure I’m complying with the license conditions when I distribute those pieces, and I should also specify whatever the license conditions are of my app. So I added a file called LICENSE to my project directory, spelling out my license terms. You can see the current version of that file here. As you can see, I chose the MIT license for my app because I feel that’s one that least encumbers the users.

One licensing issue I encountered was that the Stanford JavaScript Crypto Library includes patented code, and the conditions of its use apparently require either purchasing a license to the patent or using only the GPL, at least in the United States. I’m not a lawyer so I might not be understanding this clearly, but I don’t want to violate the terms or intent of that patent holder. Other than that issue the library can be licensed under the BSD license, which seems to be compatible with the MIT license I chose. That library includes tools to build subsets of the whole thing, so that’s what I did. The patented code is used in cipher algorithms, so I built a library without ciphers (in fact, it has the minimum functionality that I need) and am including only that. I believe that means I’m fine distributing it as part of my MIT licensed app.

I also added copyright notices to the files I created: main.html, main.js, aws.js, and main.css. I don’t think that the manifest file is actually a creative work, so didn’t put any copyright notice in it. I don’t even know where it could have gone had I wanted to add one.

I know a lot of people think that explicitly specifying copyright and license conditions isn’t really necessary unless you want to restrict use of your work, but as someone who builds commercial software for a living I can tell you that it’s very important even if you don’t want to restrict that use. Without a clear indication of the conditions, nobody would risk reusing what you created in any professional or commercial endeavor.

Okay, the app has been slightly polished up, is (just barely) functional enough to actually use, and has clear claims and acknowledgment of ownership and licensing. I’m ready to publish it. But how?

I started by recognizing that once I publish it I will want to update it at times. Every update should have a higher version number. So far I’ve left that number as “1” in the manifest, but for publishing I’ll take advantage of the fact that Chrome allows that version number to be up to four integers, separated by periods, and start over at version “”. With that done, I can package the app using any running copy of the Chrome web browser.

First, open the Extensions panel by choosing Tools/Extensions from the drop-down menu you get when you click the wrench icon in the upper right hand corner of the browser. When I do that, I see the unpacked version of the app I have been working on:

Extensions panel showing the unpacked app

When I click the Pack extension… button I get the following dialog box:

Pack Extension... dialog box

I enter the directory I have been working in, leave the Private key file field empty, and click Pack Extension. Chrome tells me what it has done now:

Message shown after packing

It created a Chrome extension file (ending in .crx) and a new key file for me to use (ending in .pem), and told me where they are. Next I opened the .crx file in Chrome by dragging and dropping it onto the browser, and was asked whether to install it. After I said yes, the Extensions panel showed the app twice:

Extensions panel showing two versions

The top one is the unpacked app I’ve been working on, and the lower one is the actual packaged application. I then removed the unpacked version and tried out the packed one. It worked!

But it’s still not really ready. If I create a new version there’s no way that Chrome will know about it. If I host the app on a web site I can configure the manifest so that Chrome will regularly check for updates and install them if they exist. But to make that happen I have to also create an XML file describing the current version of the app. I guess Chrome doesn’t want to have to download a whole app just to see if it’s updated, and prefers a small XML file for that. I have to put the URL for that XML file in the manifest. While I was at it I also added the URL of a home page for more information about the app and incremented the version number. The manifest file now looked like:

{
   "name": "Books to Buy",
   "description": "Keep a list of books to buy on Amazon, with their price and availability",
   "version": "",
   "app": {
      "launch": {
         "local_path": "main.html"
      }
   },
   "icons": {
      "16":    "icon_16.png",
      "128":   "icon_128.png"
   },
   "homepage_url": "",
   "permissions": [
   ],
   "update_url": ""
}

The homepage_url and update_url entries are new. Of course, since I’m referring to an XML file at a particular URL I’d better create that file and host it at the URL. The format of the XML file is pretty simple; I just copied an example and replaced values with the right ones for my app:

<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='' protocol='2.0'>
  <app appid='mpcejinifahkdfhnfimbcckdllahpbmg'>
    <updatecheck codebase='' version='' />
  </app>
</gupdate>
I had to fill in the correct codebase value with the URL I am hosting the app file itself at, the version value with the current version number, and appid with the correct value.

appid? What’s that?

Chrome assigns a unique ID to every application it creates. That’s the value needed here. To see it I just looked at the Extensions panel, clicked the gray arrow to the right of my application’s icon, and cut and pasted the ID here.

After I rebuilt the extension with the new manifest (I had to fill in the Private key file name this time, matching the one created the first time I packaged the app) I uploaded the XML and CRX files to the URLs shown in the manifest and XML file. I also set the Content-type of the CRX file to application/x-chrome-extension, though that’s probably not needed given that the file name ends in .crx. I uninstalled my app to get a clean environment and visited the CRX file’s URL with Chrome. I was asked whether to install the app. When I agreed, it installed and worked fine.
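
By the way, if the files are served by Apache (an assumption on my part; use whatever mechanism your host provides), the Content-type can be set with a one-line directive:

AddType application/x-chrome-extension .crx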

So, does auto-updating work? I changed the version numbers in the manifest and XML file, packaged the new version, and uploaded the new XML and CRX files. And, a little later, the new version showed up in my browser. Success!

If you want to try this yourself, the entire application directory I used for this is available here. You’ll have to create your own XML file by copying the one above and changing the values as needed.

I could stop here but there is one Chrome app feature I want and don’t yet have: synchronizing extensions across different browsers. From my reading of the documentation that should be working now, but it isn’t. Either applications like mine are treated specially and not synchronized, or else you need to publish in the Chrome Web Store to get this functionality. I’d like to explore how that store works anyway, so I’ll try it out next time and see if synchronization starts working.
