Charles Engelke’s Blog

November 9, 2013

AWS Linux AMI Administrative Accounts

Filed under: Uncategorized — Charles Engelke @ 11:36 am

It’s been nearly a year since I posted here. I should get back in the habit. Here’s a useful bit of information.

I want to create new EC2 Linux instances with separate administrative accounts for one or more specific people, for example: john, mary, rob, and sue. I don’t want to use a single shared ec2-user account or any shared SSH key pairs. So, for each of the usernames, I do the following (using “john” in the examples below):

1. Put the public keys of each key pair in a private S3 bucket at a known place, for example my_bucket/publickeys/john.pub

2. In my CloudFormation template, I specify an IAM role and policy for the new instance that has permission to read objects in that bucket with the common prefix:

{
"Action": ["s3:GetObject"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::my_bucket/publickeys/*"]
},

3. Add the following lines to the UserData start script:

adduser john
echo john ALL = NOPASSWD:ALL > /etc/sudoers.d/john

4. And add a new entry to the “files” section of the CloudFormation template:

"/home/john/.ssh/authorized_keys": {
"source": "https://s3.amazonaws.com/my_bucket/publickeys/john.pub",
"mode": "000600",
"owner": "john",
"group": "john"
},

Step 1 puts each user’s public key in a bucket where the instance can retrieve it. Step 2 gives the new instance permission to fetch those public keys. Step 3 creates the account without a password and gives it the ability to use sudo without a password, just like the ec2-user account has. And Step 4 fetches the public key and puts it in the right place for ssh to find it, allowing the user to log in with it.

I launch the instance with no specified key pair name, and now any of the desired users can ssh in to it with their own separate account and key pair, and there are no shared credentials. The ec2-user account still exists just in case there’s any need for it to own things, but you can’t log in to it.
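Steps 1 and 3 can be dry-run locally with a sketch like this (the aws command-line tool shown for the upload is my assumption, not a requirement of the approach; the sudoers entries go into a scratch directory instead of /etc so no root access is needed):

```shell
# Local dry run of steps 1 and 3 for several users.
# The S3 upload assumes the bucket layout from the post
# (my_bucket/publickeys/NAME.pub) and is shown commented out.
root=$(mktemp -d)
mkdir -p "$root/etc/sudoers.d"

for user in john mary rob sue; do
    # Step 1 (on your workstation):
    # aws s3 cp "$user.pub" "s3://my_bucket/publickeys/$user.pub"

    # Step 3 (in the instance's UserData, running as root):
    # adduser "$user"
    echo "$user ALL = NOPASSWD:ALL" > "$root/etc/sudoers.d/$user"
done

cat "$root/etc/sudoers.d/john"
```

On the real instance the adduser call runs and the files land under /etc/sudoers.d directly, exactly as in step 3 above.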

December 28, 2012

Provisioning a Server with CloudFormation

Filed under: Uncategorized — Charles Engelke @ 8:47 pm

In my first post on AWS CloudFormation I talked about how to create a machine instance with specific properties. That’s very useful. But what I really like about CloudFormation is how it lets me declaratively provision my new server with necessary software, content of my own, and even running services. I’m going to cover that in this post. But fair warning: there’s a trick needed to make it work. I don’t feel that it’s clearly documented by Amazon, and it took me a while to figure it out. I’ll cover that near the end of this post.

I said that CloudFormation Resources contain Type and Properties keys. But they can optionally have another key: Metadata. Any metadata object defined here can be retrieved using the CloudFormation API. A new instance can retrieve the metadata, and, if it’s the right kind of object, provision the instance according to that specification. The “right kind of metadata” object for this is an AWS::CloudFormation::Init resource. I think you can declare that as another resource and reference it by name in the metadata, but for now we will just put it directly in the metadata. We defined our new server resource last time as:

"NewServer": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
        "ImageId": "ami-1624987f",
        "InstanceType": "t1.micro",
        "KeyName": "cloudformation"
    }
}

Now we can add the needed Metadata property:

"NewServer": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
        "ImageId": "ami-1624987f",
        "InstanceType": "t1.micro",
        "KeyName": "cloudformation"
    },
    "Metadata": {
        "AWS::CloudFormation::Init": {
            provisioning stuff goes here
        }
    }
}

What kind of things can you specify in the configuration? The full documentation is here, but let’s complete an example. We will provision a web server with static content that’s stored in an S3 object. That’s pretty simple provisioning: install the Apache web server using the yum package manager, fetch my zipped up content from S3 and expand it in the right place, and start the httpd service. Here’s an AWS::CloudFormation::Init object that will do all that:

"AWS::CloudFormation::Init": {
    "config": {
        "packages": {
            "yum": {
                "httpd": []
            }
        },
        "sources": {
            "/var/www/html": "https://s3.amazonaws.com/engelke/public/webcontent.zip"
        },
        "services": {
            "sysvinit": {
                "httpd": {
                    "enabled": "true",
                    "ensureRunning": "true"
                }
            }
        }
    }
}

It’s pretty obvious what most of this does. The packages key lets you specify a variety of package managers, and which packages each one should install. We’re using the yum manager to install httpd, the Apache web server. The empty list as the value of the httpd key is how you specify that you want the latest available version to be installed. You can also use the apt package manager, or Python’s easy_install or Ruby’s rubygems package managers here.

The sources key gives a URL (or local file name) for a zip or tgz file containing content to fetch and install. The key (/var/www/html here) is the directory to expand the fetched file into.

Finally, the services key lists the services to run on boot. The ensureRunning key value of true specifies that the service should start on every boot. The only tricky part of the services key is that it has a single child key, always called sysvinit, which has the actual services as its children. Putting this all together gives the following template:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create and provision a web server",
    "Resources": {
        "NewServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-1624987f",
                "InstanceType": "t1.micro",
                "KeyName": "cloudformation"
            },
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "config": {
                        "packages": {
                            "yum": {
                                "httpd": []
                            }
                        },
                        "sources": {
                            "/var/www/html": "https://s3.amazonaws.com/engelke/public/webcontent.zip"
                        },
                        "services": {
                            "sysvinit": {
                                "httpd": {
                                    "enabled": "true",
                                    "ensureRunning": "true"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

I’ve made the zip file at that URL public, so you can copy this template and try to launch it yourself. Remember, you need to have created a key pair named cloudformation first. Did you try it? Did you notice that all this new stuff had no effect at all? The httpd package wasn’t installed, there is nothing at /var/www/html, and there’s no httpd service running. I had the hardest time figuring out what was wrong, but it turned out to be simple. The Amazon Linux AMI doesn’t do anything with this metadata automatically. You have to run a command as root to have it provision the instance according to the metadata:

/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer

The cfn-init utility is the program that understands the metadata and performs the steps it specifies, and the Linux AMI doesn’t run it automatically. If you log on to your new instance and run this command, though, it will do it all for you. You will have to replace WebTest in the command with whatever name you give the stack when you create it. If you’re running in a different region than us-east-1, change that part of the command, too. The -r NewServer option gives the name of the resource containing the metadata you want to use; we called that NewServer in the template above.

That’s nice, but not yet what we wanted. We want CloudFormation to handle the provisioning itself. To do that we have to get the new instance to run the cfn-init command for us when it first boots. And that’s what the UserData property of an instance lets us do. We can just put a simple shell script as the value of the UserData key to make that happen:

#!/bin/sh
/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer

Well, as you might guess, it’s not quite that simple. The value of the UserData key has to be a base 64 encoded string of this shell script. There’s a built-in CloudFormation function to base 64 encode a string, and we will use that:

"UserData": {
    "Fn::Base64": "#!/bin/sh\n/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer\n"
}

Note the \n characters to terminate each line. Put this in as a property of the server, giving the complete template:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create and provision a web server",
    "Resources": {
        "NewServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-1624987f",
                "InstanceType": "t1.micro",
                "KeyName": "cloudformation",
                "UserData": {
                    "Fn::Base64": "#!/bin/sh\n/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer\n"
                }
            },
            "Metadata": {
                "AWS::CloudFormation::Init": {
                    "config": {
                        "packages": {
                            "yum": {
                                "httpd": []
                            }
                        },
                        "sources": {
                            "/var/www/html": "https://s3.amazonaws.com/engelke/public/webcontent.zip"
                        },
                        "services": {
                            "sysvinit": {
                                "httpd": {
                                    "enabled": "true",
                                    "ensureRunning": "true"
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

If you create a stack called WebTest with this template, you should get a new instance already running the Apache web server, with a couple of pages of content installed and already available. Give it a try. For me, at least, this was a success!
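If you want to double-check the encoding itself, the same base 64 round trip can be reproduced locally with the base64 utility (just a sanity check; CloudFormation performs the encoding for you via Fn::Base64):

```shell
# Reproduce the Fn::Base64 encoding of the UserData script locally,
# then decode it again to check the quoting and the \n line endings.
script='#!/bin/sh
/opt/aws/bin/cfn-init -s WebTest --region us-east-1 -r NewServer'

encoded=$(printf '%s\n' "$script" | base64)
printf '%s' "$encoded" | base64 -d   # prints the original script
```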

There are still a lot of rough edges. What if you don’t want to call your new stack WebTest? What if you want to run it in other regions? How about dealing with protected resources? Creating resources that interact with each other? Getting better reports of how to access resources that are created? Letting the user specify parameters to control the stack? I’ll cover some of that in future posts.

December 27, 2012

Windows Printing to an Airport Extreme Connected Printer

Filed under: Uncategorized — Charles Engelke @ 11:34 am

Want to print from your Windows 7 PC to a USB printer connected to Apple’s Airport Express? Well, you can do what Apple says:

  1. Install Apple’s Bonjour for Windows
  2. Run the Bonjour Printing Wizard, answering its questions one by one
  3. Print!

And that works. At least it did for me. For some definition of “works”:

  • It showed the correct printer, but selected a driver for a different printer (that didn’t work at all)
  • It was easy to switch to the right driver, which worked
  • But it would only print black-and-white to my color laser printer
  • And would only print one job. Subsequent print jobs from the same or any other PC or Mac did nothing until you turned the printer off and back on.

Or, you could do what ended up working for me. The key points of my solution are:

  • Do not install any Apple software on your Windows PC
  • Do not pay any attention to anything Apple says regarding printing from your Windows PC

Instead, just use the regular Windows 7 Install Printer wizard. There are a lot of steps, but they’re easy.

  1. Select Devices and Printers from the Windows Start menu
  2. Click Add a Printer
  3. Select the Add a local printer option (yes, it’s not local, but that’s Microsoft for you)
  4. Click Create a new port, and select Standard TCP/IP Port from the drop-down list, and click Next
  5. Fill in the Hostname or IP address with the address of your Airport Extreme router. That’s probably 10.0.1.1, but you can check it by running the ipconfig command from a command prompt and looking for the Wireless LAN’s Default Gateway address. Leave Port name at whatever it fills in, uncheck Query the printer and automatically select the driver to use, and click Next.
  6. The wizard will say it’s Detecting the TCP/IP port. It should find the device. If not, you probably entered the wrong IP address. Check it and try again. If it still fails to detect it, don’t worry about it and continue anyway.
  7. Select Network Print Server (1 Port – USB) from the Standard Device Type list. The default Generic Network Card would probably work okay, but I didn’t try it. Click Next.
  8. Select your printer’s Manufacturer from the list, then select your specific printer from the Printers list, then click Next. If your printer isn’t there, you’ll have to download a driver and use the Have Disk… option.
  9. Fill in a Printer name, or leave the name it fills in for you alone. Click Next.
  10. Decide whether to share the printer or not. Since other devices on your network can print directly to the Airport Extreme, why bother to share it? I selected Do not share this printer and clicked Next.
  11. Decide whether to Set as the default printer, and try to Print a test page, then click Finish.

This worked for me on two different Windows 7 PCs. They now print in color, and jobs submitted after they print also print.

December 19, 2012

Learning about AWS CloudFormation

Filed under: Uncategorized — Charles Engelke @ 5:05 pm

I’ve been using Amazon Web Services as the infrastructure for some products for a while now. A big advantage of running in the cloud is being able to automate creating, updating, and destroying servers. So far, we’ve been doing this by writing scripts. Now it’s time to move up to the next level of sophistication and use their CloudFormation service instead. That not only supports automated launching and provisioning of new servers, it supports automatically creating a whole bunch of interconnected services all at once. And it’s declarative, specifying where you want to end up, instead of procedural, specifying how to get there.

CloudFormation looks pretty simple at first, but I’ve found out that it really isn’t. You need to handle a lot of details, and the documentation isn’t always clear (to me), nor even always complete (as far as I can tell). And there aren’t enough complete examples. So, as I learn about it I’m going to blog about what I discover.

I’ll start with the simplest case I need handled: launching and provisioning a single server. I have to write a template specifying what I want CloudFormation to do, and then use CloudFormation to create the stack defined by that template.

CloudFormation templates are documented in the Working With Templates section of the AWS CloudFormation User Guide. Each template is a JSON document representing an object (basically, key/value pairs). The general format of that JSON object is:

{
   "key1": "value1",
   "key2": "value2",
   ...
   "keyN": "valueN"
}

The order of the key/value pairs is irrelevant. Another JSON document with the same pairs in a different order would be considered to represent the same object. Note that the keys are quoted strings, separated from the values with a colon. Key/value pairs are separated (not terminated) with commas. And values can be quoted strings, as shown here, or numbers, or JSON objects themselves. They can also be arrays, which are comma-separated lists of values enclosed in square brackets. Don’t worry too much about the details; we’ll see all of this in the examples.

CloudFormation templates are JSON objects with some of the following keys:

  • AWSTemplateFormatVersion
  • Description
  • Parameters
  • Mappings
  • Resources
  • Outputs

All of these keys are optional except for Resources. Resources are the things that CloudFormation is going to create for you, so if there are none, there’s not much point to having a template anyway.

Although the AWSTemplateFormatVersion is optional, and there’s only ever been one version declared so far, I’m always going to include it. The only legal value for it is “2010-09-09”. The Description is also optional, but again, I’ll always include it to help me keep track of what I’m trying to do. So my template is going to start taking shape:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create a basic Linux machine",
    "Resources": {
        something needs to go here!
    }
}

I need to fill in the Resources section with at least one key/value pair. The key is going to be my logical name for the resource. It can be just about anything (I haven’t pushed the limits though), because CloudFormation doesn’t care. I’ll just call this NewServer. The value is always an object with Type and Properties keys. The possible types are listed in the Template Reference section of the User Guide. To create an EC2 instance, use a Type of AWS::EC2::Instance.

The Properties object contains different possible keys for different resource types. The possible keys for AWS::EC2::Instance are listed in that section of the Template Reference in the User Guide. Only two keys are required: ImageId, which is the ID of the AMI to use for the new instance, and InstanceType, which tells what kind of instance to launch. Actually, in my experience I’ve found I can omit the InstanceType and it defaults to m1.small, but that may be a bug, not a real feature. The documentation says InstanceType is required, so I always include it.

I want to launch a standard 64-bit, EBS-Backed Amazon Linux instance in the US-East-1 region. According to the Amazon Linux AMI web page, the ImageId is ami-1624987f. I’ll save money by using a t1.micro instance. Putting all this together, I get the following template:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create a basic Linux machine",
    "Resources": {
        "NewServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-1624987f",
                "InstanceType": "t1.micro"
            }
        }
    }
}
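A stray comma or an unquoted key will make CloudFormation reject the template, so it’s worth checking that the file is valid JSON before uploading it. One way to do that (a sketch using Python’s json.tool, which is not part of the AWS tooling):

```shell
# Write the template to cf.json, then parse it with Python's
# json.tool module; a non-zero exit means a JSON syntax error.
cat > cf.json <<'EOF'
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create a basic Linux machine",
    "Resources": {
        "NewServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-1624987f",
                "InstanceType": "t1.micro"
            }
        }
    }
}
EOF
python3 -m json.tool cf.json > /dev/null && echo "valid JSON"
```

This only catches JSON syntax errors, not CloudFormation-specific mistakes like a wrong resource type, but it’s a cheap first check.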

Now to create this stack. Log in to the AWS Management Console and select CloudFormation. (If you’ve never used it before, you’ll be walked through a few sign-up steps to verify your identity. A few minutes later, you’ll be able to use the console.) It currently looks like this:

[Screenshot: the CloudFormation console]

I made sure I was in the right region (N.Virginia showing in the upper right corner), clicked Create New Stack, then filled in the blanks. I put my template in a file called cf.json, and selected it for upload:

[Screenshot: the Create New Stack dialog, with cf.json selected for upload]

Then I clicked Continue. I had the option to enter some tags, which would be applied to the stack and to every resource it created. I just clicked Continue. Finally, I had a confirmation box:

[Screenshot: the stack creation confirmation box]

I clicked Continue, and my stack started building. I closed the acknowledgment window and looked at the console. The upper part showed all my stacks. There was only the one I just created. When a stack is selected, the bottom part shows its properties. I selected the Events tab for the screen capture below:

[Screenshot: the Events tab for the new stack]

Eventually, CloudFormation finishes, either successfully or with an error. In that latter case, it will usually roll back all the steps it took automatically. Otherwise you can click Delete Stack to get rid of everything it created.

In this case, everything worked. The Resources tab lists everything that was created. That’s just the NewServer resource, which is an AWS::EC2::Instance. It also shows me the ID of that instance. If I want to log in to that server I’ll have to look up its address in the EC2 section of the console. However, I’m not going to have much luck with that because I did not specify a key pair when creating the machine, so it’s impossible for anyone to connect via ssh.

Oops.

KeyName was an optional property I could have specified, but didn’t. The reason it’s optional is that you very well might want to create an instance nobody could log in to. That’s not true in our case, so I fixed it. First, I cleaned up the stack I created that I can’t use. I selected it in the console and clicked Delete Stack. The stack and every resource it created was destroyed. Next, I went back to the template and specified a KeyName value. It had to already exist as a Key Pair in the US-East-1 region. I happened to have one there named cloudformation, so I used it. The updated template:

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Create a basic Linux machine",
    "Resources": {
        "NewServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-1624987f",
                "InstanceType": "t1.micro",
                "KeyName": "cloudformation"
            }
        }
    }
}

Repeating the steps above I got a running Linux machine. This time, that machine was associated with the cloudformation key pair, so I could log in via ssh. Success!

Instead of the console, I could have used the cfn-create-stack command line tool. Or I could have written a program that invoked the REST API for CloudFormation. Each method looks about the same to AWS, and gets the same result.

But what’s the point? I could have created this instance directly with the EC2 console, or command line tools, or REST API. And it would have been at least as easy. Easier, in fact, in my opinion. That’s because I haven’t tapped into the real powers of CloudFormation yet:

  • Provision created servers with specified packages, files, software, etc.
  • Create (and manage) multiple resources that work together

I’ll get started on those more useful, and more interesting, things in my next post. But before I go, I’d better remember to go back to the CloudFormation console and delete the stack I created, so I don’t keep paying for that server.

September 23, 2012

Peter Bell’s talk on Next Generation Web Apps using Backbone.js at #StrangeLoop

Filed under: Uncategorized — Charles Engelke @ 5:47 pm

I don’t expect to have as many notes here as at my last session, because I’ll be trying to code the examples as we go. Also, the conference network is completely worthless; I’m using a Verizon MiFi, but my PC keeps dropping the connection (probably because the Apple device doesn’t like talking to a Samsung one).

We’re starting with an overview of all the well-known JavaScript MVC-ish frameworks. There are a lot of them. But at this point, I want to learn about Backbone, not frameworks in general. And we eventually get there.

We start with routers, which tell which JavaScript function should be invoked for various URLs. For example, the view for “about” would be a specific function that would be invoked when the URL ended in “#about”. We move on to views and models.

After some general overview, we start working on an example, the ToDoMVC app from Addy Osmani. At which point we start looking at tiny text in the presenter’s editor, as he tries to find his way around the example.

And, we’re at the break, and I’m leaving. This talk has been disjointed and confusing; I can browse through the example code by myself. Maybe I can sneak into the second half of another session.

Neal Ford’s Presentation Patterns talk at #StrangeLoop

Filed under: Uncategorized — Charles Engelke @ 4:00 pm

Today’s the workshops day at Strange Loop 2012, and I’m starting out with Neal Ford’s talk. I give a lot of presentations, and can always use help making them better. We’re getting a late start because the other first day activity – the Emerging Languages Workshop – ran a bit late in the morning, so the optional lunch for us workshoppers ran a bit late, too.

While we’re waiting, he showed us a PowerPoint file he uses as a “projector sanity check”, showing how it handles each color, clipping edges, different contrast ratios and a check for dead pixels. That’s going to be useful.

He focuses on some antipatterns that we should avoid.

Antipattern: Bullet-riddled corpse

Put up a bullet list, and everybody will read it right away. And then you’ll have to cover the material again and get them to pay attention to all things you’re saying that weren’t in the bullet points.

Antipattern: Floodmarks

These are like watermarks, but there are just so many of them. Trademarks, icons, and so on drowning out your presentation. This often happens when a conference requires you to use their templates. Ford says to fight this. Which he did by submitting a slide deck that complied with their template, but then ran his real deck off his own laptop.  And then won an award for the best presentation at the conference.

Floodmarks are okay on the first and last slides, but all the others should be blank canvases. And don’t put your company name on every slide, nor the copyright notice except on the first slide. They’re just “noise”.

Infodeck versus Presentation

Infodecks and presentations look alike, but they’re totally different. An infodeck is static, while a presentation uses time through transition (moving between slides) and animation (movement within a slide). You standing in front of an infodeck isn’t adding value. An infodeck is like an essay, but presented in slides instead of paragraphs.

Pattern: Know your audience

Anticipate the questions they’ll have and put the answers into your presentation.

Pattern: Have a narrative arc

Just like telling a story: introduction and exposition, complication, climax, resolution. There may be several “subplots”, each with its own narrative arc, in your overall story.

He showed a tiny “slide sorter” view where you couldn’t see the slides, but he marked which were showing problems and which were showing solutions, illustrating the narrative arc.

Pattern: Brain breaks

Every ten minutes or so people’s attention tends to lag, so you need a break to bring them back. Humor, violence, or sex all do that. Don’t use sex or violence in a technical talk! So put a bit of humor in every ten minutes or so.

Pattern: Unifying visual theme

Tie everything together implicitly this way.

Antipattern: Alienating artifact

Don’t try to get attention in ways that will alienate part of your audience. Sex and bigotry are good ways to do that.

Pattern: Fourthought

A pun on forethought. There are four parts: ideate (he uses mind maps), capture (in some concrete form), organize (get the ideas into an outline, either in your presentation tool or, as he does, externally), and design (render into your presentation tool).

Pattern: Lightning talk

A short talk (usually timeboxed to five minutes or less), sometimes with a fixed format of slides. He’s going to have us do that as an exercise. Make sure it has a narrative arc, feel free to use other patterns.

Pattern: Intermezzi

A bridge between two pieces.

Antipattern: Cookie cutter

Some (most) ideas fit on more than one slide, but because slides are the “atoms” of your tool, you tend to try to fit an idea onto a single slide. But more slides cost nothing, so get over that.

Note that the “infodeck” concept wants you to use fewer slides, but for a “presentation” more slides have no downside.

Note: auto-size text is evil. Don’t let the tool encourage you to cram more things on one slide.

Antipattern: Hard transitions

One wall of text gets immediately replaced with another wall of text. The alternative?

Pattern: Soft transitions

Have a fixed element, with varying other elements that come and go. Dissolves are one way to do that. Using no transitions forces a choppy narrative, while soft transitions allow you to control the flow. He also calls this a “charred trail”. The title comes up alone in the middle of the screen, then moves to the top with points appearing below it, each dissolving as the next comes in. He calls this exuberant title top plus charred trail. They can print well, too.

Aside: every few minutes he brings in slides from his Halloween parties and contests. Brain breaks in practice, and an example of…

Pattern: Vacation photos

Use full-screen, high-quality images and few or no words, so long as they are relevant to your theme.

Antipattern: Slideument

That is, a slide plus a document. Try to have one deck be an infodeck and presentation slides. There are patterns that can make this less bad, but it’s still a bad idea.

Pattern: Context keeper

Example: a visual element for that context, included in each slide that talks about that context. His example was “litmus tests”, which he showed with actual test strips, moving them around the corners of the slides as he talked about his metaphorical litmus tests.

Another example is backtracking. Have a slide introducing something, then a different slide illustrating it, then back to the first, but expanded to have more of the idea.

Antipattern: Demonstrations versus Presentations, and Live Demo versus Dead Demo

Most of the time live coding is primarily ego gratification for the presenter. Not always, of course. Tutorials and product demos use live coding well. But in a technical talk, doing all that typing is just noise.

So, tutorials good, technical deep dive bad. Product demos good, exuberant tool interaction bad. Hands-on classes good, time consuming tasks bad.

There are ways to get the benefits of live coding without doing it.

Pattern: Traveling highlights

Show code as a screen shot (with syntax highlighting) not as text. Then highlight the part you want to show, one after another. You get the kind of motion live coding gives you, without the distracting mechanics of it. Can use a colored background, or reduce the contrast of the rest of the code.

Use the screen shot even if you can get syntax highlighting and coloring in your own tool, because you don’t want the temptation to edit anything inside the presentation tool. You’ll make a mistake and not catch it.

Another option is to capture a movie of the dynamic stuff you would otherwise do live, and show that as a video in your presentation. He calls this lipsync. But don’t use this to fake anything; let the audience know it’s recorded.

Pattern: Invisibility

If you want everybody to look at you for a minute, so you can make a point, use a black slide.

Antipattern: Stale content

Leave a slide up after you start talking about something else. If you don’t want another slide for the next point, use the invisibility pattern.

There was a lot more content, too much for me to note here. And we did an exercise where we created lightning talks that was great. I really recommend his ideas. I haven’t yet read his new book, Presentation Patterns: Techniques for Crafting Better Presentations, but I feel comfortable recommending it. I got a copy as part of the talk, and will be cracking it open tonight.

September 17, 2012

AES Encryption with OpenSSL command line

Filed under: Uncategorized — Charles Engelke @ 5:03 pm

I know I’m going to forget this command line, so I’m documenting it here.

To use AES with a 128 bit key in CBC (cipher block chaining) mode to encrypt the file plaintext with key key and initialization vector iv, saving the result in the file ciphertext:

openssl aes-128-cbc -K key -iv iv -e -in plaintext -out ciphertext

To decrypt, change that -e to -d.

Warning: the values of the key and the iv must be typed in hex.
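For example, here’s a complete round trip with a randomly generated key and IV (openssl rand -hex produces suitable hex strings; the filenames are just for illustration):

```shell
# Generate a random 128-bit key and a random IV, each as 32 hex digits.
key=$(openssl rand -hex 16)
iv=$(openssl rand -hex 16)

echo 'attack at dawn' > plaintext

# Encrypt, then decrypt with the same key and IV.
openssl aes-128-cbc -K "$key" -iv "$iv" -e -in plaintext -out ciphertext
openssl aes-128-cbc -K "$key" -iv "$iv" -d -in ciphertext -out recovered

cmp plaintext recovered && echo 'round trip OK'
```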

June 24, 2012

Switching to Mac?

Filed under: Uncategorized — Charles Engelke @ 1:58 pm
Tags: , , ,

I’ve been a PC user for nearly 30 years, first with MS-DOS and later with Windows. I’ve been very happy with them, though ever since Apple switched the Mac operating system to be Unix-y I’ve thought I might prefer that. A stint with the iPhone soured me on Apple, though, and I thought little more about it.

Now my Incubator group has started developing mobile apps. We did Android prototypes first and are getting good responses. But we clearly need to support iOS, too, for any products we actually release. So we got a Mac Mini at work to develop with, and I decided to buy a cheap MacBook Air to get familiar with the environment. (Well, not that cheap… I ended up with the 13″ box with 8GB of memory and a 256GB drive instead of the entry level one.)

I’ve been using the Mac for almost a week now, and am really liking it. So, am I switching?

Maybe. There’s a lot to like about it, and little on the negative side.

The good:

  • Boy, this thing is fast. Though, to be fair, a similar Windows laptop with plenty of memory and SSD probably would be just about the same. I think my experience this week is the death knell for spinning drives on any laptop I own from now on.
  • Very portable. The 13″ box is a lot bigger than I expected. Still, it’s plenty small enough to take everywhere I travel.
  • Great for software development in my preferred target environments (Linux, web, and mobile). Ruby and even Perl don’t support Windows nearly as well. It’s even better (so far) for Android development! I was able to connect my Samsung Galaxy Nexus phone and run it in debugging mode on the first try; that has yet to work on Windows due to the Samsung USB drivers.
  • Awesome trackpad. Everybody says so, and they’re right.

The bad:

  • Lousy keyboard. Yes, most Windows laptop keyboards are worse than this, but I always use ThinkPads, and their keyboards are much, much better than this.
  • Missing keys. I want Home, End, and Delete keys! And I’d like Page Up and Page Down, too. I’m slowly learning the various keyboard shortcuts, but those dedicated keys are very useful, especially for coding.
  • No TrackPoint pointing device. It’s not the most popular option, but it’s by far the best. Yes, even better than the trackpad. (Though having both would be awesome!)
  • No documentation. I knew there were virtual desktops available, but I had no idea they were called Spaces. And even when I figured that out, how was I supposed to know that you get them by hitting F3 and pointing to the upper right corner of the screen? And that the Apache web server configuration is in /private/etc? Thank God for Google, or I’d be lost.
  • Dongles. I’m going to need to buy some if I ever want to connect a monitor or a wired network. I don’t like that, even though I kind of understand it in an ultra-portable like this.
  • Text editor. I have tried a couple on the Mac, and haven’t found one I like much yet. They all seem to de-emphasize the keyboard, which I prefer to use, especially for selecting and moving text. And the one I like best so far (Sublime Text 2) has inadequate documentation.

The same:

  • The hardware is equally good on both sides (I’m comparing the MacBook Air to the ThinkPad X series here).
  • They both have good, though not great, battery life (I can go a whole business day on battery with either, so long as I let it sleep when not in direct use).
  • I think that the matte display on the ThinkPad is actually better, but the glossy one on the Air has more initial impact.
  • Almost all the software I use on Windows I’m now using on the Mac. It’s all no-cost in both places, too.

So, am I switching? It seems likely. I’m going to take the Air with me on a two-week trip and see how I get along without having Windows ready if I need it.

June 6, 2012

Grayed out

Filed under: Uncategorized — Charles Engelke @ 9:48 am

Why is light gray text becoming so popular on the web? More and more web pages look like copies from a machine that has run out of toner. If you want your text to be read, make it readable. The best web designers put readability ahead of appearance.

Just consider this snippet from enyojs.com:

Image

Isn’t it better with black text?

Image

And don’t get me started on gray text on gray backgrounds, or trendy fonts that display on monitors with single pixel-wide strokes, or tiny font sizes in wide viewports.

Please, don’t make me open the browser’s debugger and edit your CSS to make your text readable. 

May 29, 2012

Fluentconf workshop: Backbone.js Basics and Beyond

Filed under: Uncategorized — Charles Engelke @ 7:29 pm

Unlike my first workshop today, my second workshop at FluentConf covers a subject completely new to me:  Backbone.js. I’ve heard a lot about it, but never even downloaded it. Looking forward to learning a lot.

“Backbone thinks of itself as being lightweight.” It isn’t opinionated like Ruby on Rails, so Backbone projects can do the same things in very different ways. She’s going to show her ideas of the best way, but our ideas may vary.

Backbone is not MVC, even though parts of it have the same names as in server-side MVC frameworks (Models and Views). Backbone adds Templates to those two, not controllers.

The speaker came to JavaScript through Rails. At the time that meant that Rails wrote her JavaScript; she didn’t have to. Now she feels that is kind of like using scaffolding – a shortcut that won’t carry you far enough. Next, she used jQuery extensively. That’s powerful, but can be messy and hard to test other than with something like Selenium. Phase 3 was page objects. Create a unit testable object that has the JavaScript for the page. That seems to describe how she uses Backbone.

Backbone gives you model mirroring and views that handle events (and can render the DOM). Models in Backbone are like MVC models and may mirror server-side ones (or something like them rather than one-to-one). Server-side views correspond to Backbone Templates. Server-side controllers correspond to Backbone Views.

The talk covers various tasks you need to perform, and how to do them with Backbone, ending with how it all fits together. I wish that had come first. Maybe it’s me, but I need the overall context to be comfortable with the pieces. Basically, set it all up by creating an app object with an initialize method that you call when your document is ready. That can set up the model, fetch the data, and use a view to render it.
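The setup she describes can be sketched in plain JavaScript. This is just the shape of the pattern, with illustrative names of my own; real code would use Backbone.Model, Backbone.View, and a template function in place of these stand-ins.

```javascript
// Sketch of the app-object setup pattern (names are illustrative).
var App = {
  initialize: function (books) {
    this.model = { books: books };              // stand-in for a Backbone model
    this.view = {
      render: function (model) {                 // stand-in for a Backbone view
        return "You have " + model.books.length + " books";
      }
    };
    return this.view.render(this.model);         // set up model, render via view
  }
};

// On document-ready you would fetch the data and call initialize,
// e.g. $(function () { App.initialize(fetchedBooks); });
```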

Testing? Pivotal uses Jasmine, and there’s a talk about it tomorrow at 1:45.

Backbone is really good at interacting with a RESTful API, living in harmony with other frameworks and styles of JavaScript, and handling unique applications (due to its flexibility). On the other hand, it doesn’t have UI widgets, and it’s not good for developers who aren’t already strong in JavaScript (because it doesn’t give enough direction to them).

The talk is over very early. And all in all, I’m disappointed. I go to a half day workshop expecting to come away ready to actually create something with my new knowledge, not just get a survey of the topic. I could have learned as much about Backbone in a 30 minute talk as in this workshop.

Fluentconf workshop: Breaking HTML5 Limits on Mobile JavaScript

Filed under: Uncategorized — Charles Engelke @ 3:28 pm
Tags: , , ,

O’Reilly’s Fluent Conference starts today with optional workshops. My morning selection is on JavaScript on mobile platforms, given by Maximiliano Firtman of ITMaster Professional Training. This post is just a stream-of-consciousness list of points I want to remember, rather than real notes for the talk.

In his introduction, he points to a resource on available APIs: www.mobilehtml5.org.

Mobile web development is different:

  • Slower networks
  • Different browsing (touch versus mouse, pinch to zoom, pop-up keyboard, etc.)
  • Different behavior (only current tab is running, file uploads and downloads)
  • Some browsers are proxy based (Kindle Fire, Opera Mini)
  • Too many browsers (more than 40): some too limited, some too innovative, mostly undocumented, mostly unnamed, most without debugging tools
  • Four big rendering engines, five big execution engines

Check gs.statcounter.com for browser market shares. On mobile, share is distributed much more evenly among the top browsers than on the desktop.

Web views embed an HTML window in a native app. On iOS, web views have a different execution engine than the browser (2.5 times slower!). They often have differences in how they support HTML5 APIs.

Pseudo-browsers (his term) are native apps with a web view inside. With these you don’t get a new rendering engine or execution engine; you just get new behaviors added by the native shell wrapped around the web view (Yahoo! Axis, for example).

(Note to me: he’s using an IPEVO Point 2 View (P2V) camera to show a mobile phone on screen.)

PhoneGap and similar tools create native apps using web technologies.

Remote debugging is available for some browsers with Remote Web Inspector. Adobe Shadow is a new debugging tool that’s free (at least, for now). Weinre can work with Chrome, making iPhone remotely debuggable. Looks pretty interesting.

The paper “Who Killed My Battery?” from WWW2012 shows how different web sites consume power from your device’s battery. For example, 17% of the energy used to view Amazon’s web site goes to parsing JavaScript that is never used.

The speaker has a neat development tool he calls Chevron for working inside the browser, available at firt.mobi/material/FluentConf.zip. It has an in-browser code editor, and can save on-line to a unique URL. It will display a QR-code for that URL, so you can see what you’re developing on your mobile device as well as the built-in browser window. Very nice.

A service at www.blaze.io/mobile will run your public web page on a real device of your choice, and give you performance metrics on it.

You can build a real app (even offline) in the browser with HTML5, but it doesn’t look native on a mobile device. But (for Apple and maybe others) you can get it a lot closer with some meta tags:

<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<link rel="apple-touch-startup-image" href="launch.png">

A lot of the second half of this talk is more on HTML5 in general (when it works in mobile browsers, too) than specific mobile issues. Most of the audience is finding this very useful, but it’s not new to me. Unfortunately, it doesn’t seem that he’s going to get to the Device Interaction part of his demonstrations, which I would really like to see. I can always fiddle with them myself later, I guess. But he’s a good speaker and I’d like to hear him talk about them.

You can use the orientationchange event (onorientationchange property) to run code when the device moves between portrait and landscape views. You can also check for going on and off-line with the online and offline events (though this is not generally reliable).
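A minimal sketch of reacting to an orientation change. The event name is the standard one; the classifyOrientation helper is my own illustration, not something from the talk.

```javascript
// Hypothetical helper: classify a viewport by its dimensions.
function classifyOrientation(width, height) {
  return width > height ? "landscape" : "portrait";
}

// In a browser you would wire it up like this:
// window.addEventListener("orientationchange", function () {
//   var mode = classifyOrientation(window.innerWidth, window.innerHeight);
//   document.body.className = mode;   // e.g. restyle per orientation
// });
```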

Ah, he’s getting to Device Interaction! Geolocation first, which is neat but has been available for a while. But then a lot of really new capabilities, some of which only run on one or two browsers now. I need to start using Firefox on my Android phone.

A very useful talk and good kickoff to the conference for me.

March 10, 2012

Dehydration and popping ears

Filed under: Uncategorized — Charles Engelke @ 10:52 pm

A few days ago I was taking a tour of Corcovado National Park in Costa Rica and I noticed that my hearing was muffled. I don’t hear that well normally, but this felt like I was wearing earplugs. That sometimes happens when I fly, and I just yawn to make my ears pop and the problem goes away.

That didn’t work this time. My ears stayed muffled through the whole day out. But after the tour and the long boat ride back, they finally popped and my hearing came back to normal during the drive back to my hotel.

What happened? I was severely dehydrated during the tour (the tour operator did not bring nearly enough water for such a hot day) and I picked up several liters of bottled water on the drive home. Just as I was finishing the first liter, my ears popped and my hearing came back. I searched for information on this condition and found a lot of pages saying that heavy exercise can cause it and cooling down will make your ears pop again. But I had several hours sitting on the boat after the exercise and my ears did not pop until I got a lot of water in me. I’m certain that this was caused by dehydration.

I had something similar happen at work a few months ago, and I now think it was also due to dehydration. My doctor had me nearly eliminate caffeinated and carbonated drinks and I hadn’t yet got used to making up for it with a lot more water. Looking back, I think the dehydration affected my Eustachian tubes. Clearly, I have to pay more attention to getting enough to drink.

February 28, 2012

mod_perl Problems

Filed under: Uncategorized — Charles Engelke @ 2:04 pm

I’ve just spent days trying to get mod_perl to work with Perl 5.12 or later, and it’s finally there on both Windows and Linux. I may post more detailed notes, but before I forget, here’s an important note to me.

The mod_perl.so file I needed for ActiveState Perl 5.12 and Apache 2.2 can be downloaded from cpan.uwinnipeg.ca/PPMPackages/12xx/x86/. Specifically, cpan.uwinnipeg.ca/PPMPackages/12xx/x86/mod_perl.so .

I found a bunch of other downloadable binary versions of this file, but none of them worked with my 32-bit Windows Apache and ActiveState Perl 5.12. This one did.

I haven’t found any that work with Perl 5.14 for Windows.

January 8, 2012

Chrome Web App Bookshelf – Part 7 of 7

Filed under: Uncategorized — Charles Engelke @ 1:36 pm

Note: this is part 7 (the final part) of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent, part 2 added basic functionality, part 3 finally called an Amazon web service, part 4 parsed the web service result, part 5 was useful enough to publish, and part 6 covered publishing it at my own website. This post finishes the series by publishing in the Chrome Web Store instead.

I’m almost done with getting my app out into the world now. I just have to put in the Chrome Web Store. Once that’s done I intend to update it with new features from time to time, but probably won’t post about that in any detail. Instead, I’ve put this project on Github. If you’re interested, you can follow its development there.

Following my practice to date I haven’t bothered to read any of the documentation about the store. Instead I just looked for information on how to develop for it, starting with the Settings icon in the upper right corner of the page:

Settings icon in Chrome Web Store

When I clicked on the gear icon the drop-down menu showed a choice for Developer Dashboard, so I chose that. The resulting page looks like it’s going to guide me through the process pretty easily. There’s a link to “Start uploading your apps now!”. Seems promising…

Developer dashboard section to Upload your app

It sure looks easy. I’m not supposed to upload the CRX file, just a ZIP of the directory. I just posted such a file for my previous entry. I’m a bit worried because I put in an auto-update URL that doesn’t make sense for the app store, so I’m going to remove both the homepage and update URLs from the manifest before uploading to see what happens. I’m also going to increment the version number to reduce my own confusion.

When I uploaded the resulting ZIP file I got a page showing how the web store sees it, starting with the icon:

App summary with placeholder icon

Where’s my icon? A bit further down the page I’m offered the chance to upload another icon, saying it should be 96 pixels square. But instead I just uploaded my 128 by 128 icon again, and it took it and it looks good:

Chrome store summary showing my icon

Going down the page, I’m asked for a detailed description, so I filled it in as follows:

Want to keep track of books that haven’t been published yet, so you can decide whether to buy them when they’re ready? This app allows you to add books by ISBN or Amazon’s ASIN (including Kindle books) and keep a list showing their scheduled release date and shipping status. When you’re ready to buy, just follow the link from the app.

This app uses Amazon’s Product Advertising API, so you will need an Amazon Web Services account to use it. Accounts are free to register, and this particular API incurs no charges.

Next, I’m asked for a screen shot and at least one “promotional image”. Huh. Okay, I’ll make them up. My promotional image is just a large version of the icon on a dark background. After I filled in the rest of the form as best I could, I saved the draft, returning to the dashboard:

Dashboard showing current status with publish link

Okay, let’s try it. I pressed Publish. And got a confirmation:

Publish confirmation image

Okay, let’s try it. And… well, I was kind of expecting this:

Pay $5 now

I need to pay this once to register. That’s not much, so I went ahead and paid it through Google. I then had to click Publish again, and the listing now shows an option for Unpublish. I guess I’m done. When I click the link showing for the item, it looks like I’ve got it in the store!

My app in the Chrome Web Store

And when I click to install it, it first shows me the permissions it requires:

Install confirmation

And it installed it just fine. I’ve got two nearly identical apps showing now, the previously packaged one and the new one from the store. Chrome thinks they are different because they have different unique IDs. They don’t share storage either. I’ll leave the old app there for a while, but don’t intend to update it.

Will this make the app sync to each of my browsers? It hasn’t as of this writing, but I’ll give it some time. Enhancements to this app will be posted on my Github repository, and published in the Chrome Web Store for anyone that wants to keep following it.

[Update a few hours later: the extension did finally sync to my other computers! The data did not, which I expected. I can either do that with some other facility (perhaps AWS SimpleDB) or wait until Chrome adds an API for that, which I think is in the works.]

January 7, 2012

Chrome Web App Bookshelf – Part 6

Filed under: Uncategorized — Charles Engelke @ 3:52 pm

Note: this is part 6 of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent, part 2 added basic functionality, part 3 finally called an Amazon web service, part 4 parsed the web service result, and part 5 actually was somewhat useful. The series is almost done now. This post will cover packaging and privately publishing the app, and the next and final post will cover putting it in the Chrome Web Store.

There’s more functionality I want to add to the app eventually (updating the data on saved books, deleting books from the list, and even synchronizing the list of books between PCs) but the purpose of this series is to show how to create and publish Chrome web apps. My app is just barely functional enough now to publish, so I’m going to go ahead and do that.

Since my app contains software and images from others I’m going to have to add some acknowledgment of that fact. I want to be sure I’m complying with the license conditions when I distribute those pieces, and I should also specify whatever the license conditions are of my app. So I added a file called LICENSE to my project directory, spelling out my license terms. You can see the current version of that file here. As you can see, I chose the MIT license for my app because I feel that’s one that least encumbers the users.

One licensing issue I encountered was that the Stanford JavaScript Crypto Library includes patented code, and the conditions of its use apparently require either purchasing a license to the patent or using only the GPL, at least in the United States. I’m not a lawyer so I might not be understanding this clearly, but I don’t want to violate the terms or intent of that patent holder. Other than that issue the library can be licensed under the BSD license, which seems to be compatible with the MIT license I chose. That library includes tools to build subsets of the whole thing, so that’s what I did. The patented code is used in cipher algorithms, so I built a library without ciphers (in fact, it has the minimum functionality that I need) and am including only that. I believe that means I’m fine distributing it as part of my MIT licensed app.

I also added copyright notices to the files I created: main.html, main.js, aws.js, and main.css. I don’t think that the manifest file is actually a creative work, so didn’t put any copyright notice in it. I don’t even know where it could have gone had I wanted to add one.

I know a lot of people think that explicitly specifying copyright and license conditions aren’t really necessary unless you want to restrict use of your work, but as someone who builds commercial software for a living I can tell you that they’re very important even if you don’t want to restrict that use. Without clear indication of the conditions, nobody would risk reusing what you created in any professional or commercial endeavor.

Okay, the app has been slightly polished up, is (just barely) functional enough to actually use, and has clear claims and acknowledgment of ownership and licensing. I’m ready to publish it. But how?

I started by recognizing that once I publish it I will want to update it at times. Every update should have a higher version number. So far I’ve left that number as “1” in the manifest, but for publishing I’ll take advantage of the fact that Chrome allows that version number to be up to four integers, separated by periods, and start over at version “0.0.0.1”. With that done, I can package the app using any running copy of the Chrome web browser.

First, open the Extensions panel by choosing Tools/Extensions from the drop-down menu you get when you click the wrench icon in the upper right hand corner of the browser. When I do that, I see the unpacked version of the app I have been working on:

Extensions panel showing the unpacked app

When I click the Pack extension… button I get the following dialog box:

Pack Extension... dialog box

I enter the directory I have been working in, leave the Private key file field empty, and click Pack Extension. Chrome tells me what it has done now:

Message shown after packing

It created a Chrome extension file (ending in .crx) and a new key file for me to use (ending in .pem), and told me where they are. Next I opened the file in Chrome by dragging and dropping on to the browser, and was asked whether to install it. After I said yes, the Extensions panel showed the app twice:

Extensions panel showing two versions

The top one is the unpacked app I’ve been working on, and the lower one is the actual packaged application. I then removed the unpacked version and tried out the packed one. It worked!

But it’s still not really ready. If I create a new version there’s no way that Chrome will know about it. If I host the app on a web site I can configure the manifest so that Chrome will regularly check for updates and install them if they exist. But to make that happen I have to also create an XML file describing the current version of the app. I guess Chrome doesn’t want to have to download a whole app just to see if it’s updated, and prefers a small XML file for that. I have to put the URL for that XML file in the manifest. While I was at it I also added the URL of a home page for more information about the app and incremented the version number. The manifest file now looked like:

{
   "name": "Books to Buy",
   "description": "Keep a list of books to buy on Amazon, with their price and availability",
   "version": "0.0.0.2",
   "app": {
      "launch": {
         "local_path": "main.html"
      }
   },
   "icons": {
      "16":    "icon_16.png",
      "128":   "icon_128.png"
   },
   "homepage_url": "http://www.bibliote.ch/",
   "permissions": [
      "https://webservices.amazon.com/*"
   ],
   "update_url": "http://www.bibliote.ch/files/bookshelf.xml"
}

The homepage_url and update_url entries are new. Of course, since I’m referring to an XML file at a particular URL I’d better create that file and host it at the URL. The format of the XML file is pretty simple; I just copied an example and replaced values with the right ones for my app:

<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
  <app appid='mpcejinifahkdfhnfimbcckdllahpbmg'>
    <updatecheck codebase='http://www.bibliote.ch/files/bookshelf.crx' version='0.0.0.2' />
  </app>
</gupdate>

I had to fill in the correct codebase value with the URL I am hosting the app file itself at, the version value with the current version number, and appid with the correct value.

appid? What’s that?

Chrome assigns a unique ID to every application it creates. That’s the value needed here. To see it I just looked at the Extensions panel, clicked the gray arrow to the right of my application’s icon, and copied and pasted it here.

After I rebuilt the extension with the new manifest (I had to fill in the Private key file name this time, matching the one created the first time I packaged the app) I uploaded the XML and CRX files to the URLs shown in the manifest and XML file. I also set the Content-type of the CRX file to application/x-chrome-extension, though that’s probably not needed given that the file name ends in .crx. I uninstalled my app to get a clean environment and visited www.bibliote.ch/files/bookshelf.crx with Chrome. I was asked whether to install the app. When I agreed, it installed and worked fine.

So, does auto-updating work? I changed the version numbers in the manifest and XML file, packaged the new version, and uploaded the new XML and CRX files. And, a little later, the new version showed up in my browser. Success!

If you want to try this yourself, the entire application directory I used for this is available here. You’ll have to create your own XML file by copying the one above and changing the values as needed.

I could stop here but there is one Chrome app feature I want and don’t yet have: synchronizing extensions across different browsers. From my reading of the documentation that should be working now, but it isn’t. Either applications like mine are treated special and not synchronized, or else you need to publish in the Chrome Web Store to get this functionality. I’d like to explore how that store works anyway, so I’ll try it out next time and see if synchronization starts working.

January 1, 2012

Chrome Web App Bookshelf – Part 5

Filed under: Uncategorized — Charles Engelke @ 6:21 pm

Note: this is part 5 of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent, part 2 added basic functionality, part 3 finally called an Amazon web service, and part 4 parsed the web service result. This part will actually reach the point of usefulness!

The app so far isn’t really useful because it doesn’t remember books to buy later. And that was the whole point. So instead of just displaying the response I’m going to save it persistently using the localStorage capability of HTML5. That’s simply an object (a property of the global window object) that retains its value even when a web page (or the browser as a whole) is closed. Each web site has its own localStorage area. There’s an API for it and it also can work pretty much like a regular JavaScript object. So I’m going to keep all the response data in it.

localStorage does have some pretty severe restrictions. The biggest one is that it can only store strings. At one time it was supposed to be able to store any object that could be serialized into a string, but as of now Chrome seems to only want strings there. So I’m going to have to serialize and deserialize all my objects to store them.
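The pattern, then, is to JSON-encode on the way in and decode on the way out. Here’s a minimal sketch, with a tiny in-memory stand-in for localStorage so it runs outside a browser; in a browser you’d drop the shim and use window.localStorage directly. The book object and its fields are just illustrations.

```javascript
// In-memory stand-in for window.localStorage (only needed outside a browser).
var store = {};
var localStorage = {
  setItem: function (k, v) { store[k] = String(v); },
  getItem: function (k) { return store.hasOwnProperty(k) ? store[k] : null; }
};

// Only strings can be stored, so objects round-trip through JSON.
var book = { asin: "B000EXAMPLE", title: "An Example Book" };
localStorage.setItem("asin_" + book.asin, JSON.stringify(book));

var restored = JSON.parse(localStorage.getItem("asin_" + book.asin));
// restored is a new object with the same fields as book
```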

localStorage gives me a solution for my other problem, too. I can’t distribute the app with my Amazon Web Services credentials in it, but I can save the proper credentials persistently and let each user save his or her own values. So the app will have to have two faces now. One is the main app I’ve been working on, and the other is a simple one to use to enter the credentials. If there are no credentials in localStorage I’ll show the settings screen, otherwise I’ll show the main app.

So the main.html page now needs a body with two divs, one for each situation:

   <div id="settings">
   </div>
   <div id="application">
   </div>

It also needs to add a link to a stylesheet in the head of the document, as follows:

   <link rel="stylesheet" type="text/css" href="main.css" />

And, of course, it needs a stylesheet. It will start out very simple, just a rule to hide both divs. The JavaScript will show the proper div once it examines localStorage:

#settings, #application {
   display: none;
}

So, what about the main.js program? It starts out by again waiting for the document to be ready, declaring “global” variables, and setting up event handlers for buttons, but then it immediately checks localStorage to see whether to show the settings page or not:

   if (localStorage.accessKeyId && localStorage.secretAccessKey) {
      showApplication();
   } else {
      showSettings();
   }

The showSettings function is very simple:

   function showSettings(){
      $("#settings").show();
   }

All that is in that div are a couple of labeled data entry fields and a button to save the values entered into them. When that button is clicked, the handler just saves them in localStorage and starts the application:

   function saveSettings(){
      localStorage.accessKeyId = $("#accesskeyid").attr("value");
      localStorage.secretAccessKey = $("#secretaccesskey").attr("value");
      $("#settings").hide();
      showApplication();
   }

And showApplication just shows the proper div and creates an AWS object. The rest of its functionality happens when the button is clicked to add a new book to the list.

   function showApplication(){
      aws = new AWS(localStorage.accessKeyId, localStorage.secretAccessKey, "engelkecom-20");
      $("#application").show();
   }

That “engelkecom-20” is my AWS associate ID. I’ll hard code that so that any URLs created by the application include it. That way, should anyone ever use this app I have a chance to make some commissions from Amazon sales.

The remaining big difference is that the result from looking up an item at Amazon will be saved in localStorage, and the results div will be replaced by a results unordered list. When the app first starts up and each time a book is looked up that list will be redrawn. This is accomplished by first replacing the reference to insertResponse in the aws.itemLookup function call with a reference to a new function called saveResponse:

      function saveResponse(response){
         localStorage.setItem("asin_"+response.asin, JSON.stringify(response));
         displayBookList();
      }

I’m naming each item with the prefix asin_ followed by Amazon’s ASIN for the item. That serves two purposes. It lets me look up a book directly by Amazon’s unique ID when I need to, and it lets me recognize which fields in localStorage hold book data and which don’t. Since I can only reliably store strings, I use the built-in JSON.stringify function to convert the object to a string without losing any information.

The displayBookList function will also be called at the end of the new showApplication function. In each case, it clears out all the items in the results unordered list and adds all the saved items back to it:

   function displayBookList(){
      var books = [];
      var i, key;

      $("#results li").remove();

      for(i=0; i<localStorage.length; i++) {
         key = localStorage.key(i);
         if (key.substr(0,5)=="asin_") {
            books.push(JSON.parse(localStorage.getItem(key)));
         }
      }

      books.sort(byReleaseDate).forEach(function(book){
         insertBook(book);
      });
   }

This first removes all the list items in the results list, then builds an array of all the books found in localStorage, and finally calls insertBook to insert each book in the (by now sorted) array into the results list. insertBook is pretty much the same as the old insertResponse function shown before, with slightly different HTML markup, so I’m not going to include it here. Note the use of JSON.parse to convert each stored string back to a JavaScript object.
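
The byReleaseDate comparator used by the sort isn’t shown in the post; a minimal sketch, assuming the field names produced by saveResponse, could be:

```javascript
// Hypothetical comparator for books.sort(byReleaseDate) above.
// Amazon returns release dates as "YYYY-MM-DD" strings, so plain
// string comparison orders them chronologically.
function byReleaseDate(a, b){
   if (a.releaseDate < b.releaseDate) { return -1; }
   if (a.releaseDate > b.releaseDate) { return 1; }
   return 0;
}
```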

Does it all work? Let’s see. When first launched, it should show the settings panel, and it does:

Settings panel

After credentials have been saved, it should show pretty much the old application:

App screen with no books

And, once a few books have been added, it should show a list of them:

App showing a list of books

And even if the browser is closed and later re-opened, it still shows the list of books.

This was my core goal for this app. It’s ugly in a lot of ways (not just appearance; the code could use a lot of cleaning up, too). But I’ll see to that later. Now it’s time to move on and look at how to distribute the app. First, I’ll package it and try that out. Then I’ll try hosting it at a known address. Finally, I’ll put it in the Chrome Web Store. I’ll also put up a zip file of all the code created so far, so anybody who likes can try things out.

All of that will start in my next post.

December 28, 2011

Chrome Web App Bookshelf – Part 4

Filed under: Uncategorized — Charles Engelke @ 4:25 pm

Note: this is part 4 of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent, part 2 added basic functionality, and part 3 finally called an Amazon web service. This part will finally parse the web service result and refine the call.

When I got to the end of the last post, the app was just dumping a lot of XML, formatted as text, into the web page. I want to pull just the data fields I’m looking for, format them, and put that in the page instead. Those fields are: author, title, release date, availability, list price, Amazon’s price, and a link to the Amazon page. So I copied the text from the page to a file and viewed it in a program that showed the XML as an outline. It’s way too big to show here, but the structure of the response looked like this:

  • ItemLookup
    • OperationRequest
    • Items
      • Request
      • Item
        • ASIN
        • DetailPageURL
        • ItemAttributes

It’s that Item element that actually has the response in it, with fields inside ItemAttributes containing most of the information I want. I see Author, Title, PublicationDate, and ListPrice inside of ItemAttributes, and DetailPageURL has the link I need. But I’m missing the availability and Amazon’s price. So back to the ItemLookup function’s documentation, which I’m a bit more ready to understand now. My request can include a specification of one or more ResponseGroup values. The default is just ItemAttributes, which is what I got in this sample response. But other groups might have the two fields I’m missing. After browsing through the documentation, I see that Offers includes both the Amount and Availability, so I’ll add that to my request. Doing so is easy: just change the line that specifies the requested ResponseGroup to:

      params.push({name: "ResponseGroup", value: "ItemAttributes,Offers"});

The resulting XML has what I need, structured as follows:

  • ItemLookup
    • OperationRequest
    • Items
      • Request
      • Item
        • ASIN
        • DetailPageURL
        • ItemAttributes
        • OfferSummary
        • Offers
          • Offer
            • OfferListing

The extra information I’m looking for is in the OfferListing element, which contains Price and Availability fields. The Price field (like the ListPrice field mentioned earlier) is itself complex, containing an Amount (an integer, apparently equal to the whole number of cents), a CurrencyCode (USD in my example), and a FormattedPrice. I’m going to go with the FormattedPrice field for now, but I may want to change my mind later.
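
If I ever do switch from FormattedPrice to the raw Amount, turning a whole number of cents into a display string might look like this sketch (a hypothetical formatCents helper, assuming USD):

```javascript
// Hypothetical formatter, assuming Amount is an integer count of cents
// and the currency is USD.
function formatCents(amount){
   var dollars = Math.floor(amount / 100);
   var cents = amount % 100;
   return "$" + dollars + "." + (cents < 10 ? "0" : "") + cents;
}
```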

Okay, this XML has the data I want, how do I get to it? This is the job of extractAndReturnResult, which currently looks like:

      function extractAndReturnResult(data, status, xhr){
         onSuccess(xhr.responseText);
      }

I’m going to put a breakpoint in this function and examine the data, status, and xhr objects that are passed to it when I run the code. The status object is just a string with the word “success” in it, but data and xhr are much more complex. data is a Document that represents the DOM of the returned XML. xhr has many fields in it, one of which, responseXML, is also a Document. In fact, the debugger tells me that it is the exact same object as data. I can traverse this DOM to get the elements I want. There are native JavaScript ways to do this, but since I’ve already started using jQuery to traverse the web page’s DOM, I’m going to continue to use it to traverse this one.

For example, I can get the element ASIN by searching for element ASIN within Item within Items within the document, using:

         $(data).find("Items Item ASIN")

Actually, though, that will return an array-like jQuery object of matching elements, which might be empty. To keep it simple at this stage I’m just going to assume that it has at least one element, and will take the first one as my result. Then I’ll find the text inside that element. That ends up with:

         var asin = $(data).find("Items Item ASIN")[0].textContent;

I’ll change the overall behavior of extractAndReturnResult to pass an object to the success handler instead of just a string, ending up with:

      function extractAndReturnResult(data, status, xhr){
         var result = {
            asin:          $(data).find("Items Item ASIN")[0].textContent,
            author:        $(data).find("Items Item ItemAttributes Author")[0].textContent,
            title:         $(data).find("Items Item ItemAttributes Title")[0].textContent,
            releaseDate:   $(data).find("Items Item ItemAttributes PublicationDate")[0].textContent,
            listPrice:     $(data).find("Items Item ItemAttributes ListPrice FormattedPrice")[0].textContent,
            availability:  $(data).find("Items Item Offers Offer OfferListing Availability")[0].textContent,
            amazonPrice:   $(data).find("Items Item Offers Offer OfferListing Price FormattedPrice")[0].textContent,
            url:           $(data).find("Items Item DetailPageURL")[0].textContent
         };
         onSuccess(result);
      }
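
That “at least one match” assumption could later be made explicit with a small helper; a sketch (a hypothetical firstText function, not in the post’s code) that yields null instead of throwing when a selector matches nothing:

```javascript
// Hypothetical helper: given an array-like list of matched elements
// (for example, the result of $(data).find(...)), return the text of
// the first one, or null if nothing matched.
function firstText(matches){
   return matches.length > 0 ? matches[0].textContent : null;
}
```

Each field in extractAndReturnResult could then be written as firstText($(data).find("Items Item ASIN")), with the caller checking for nulls.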

Now I have to change the function that gets this result and puts it into the web page, since it’s no longer getting a string. That function used to be an inline function in the main.js file:

                     function(message){
                        message = message.replace(/&/g, "&amp;");
                        message = message.replace(/</g, "&lt;");
                        message = message.replace(/>/g, "&gt;");
                        $("#results").append(message);
                     },

I’m going to change this to refer to a named function called insertResponse, and define that function, shown below:

      function insertResponse(response){
         var html = '<a href="' + response.url + '">';
         html = html + response.title + '</a> by ' + response.author;
         html = html + ' lists for ' + response.listPrice;
         html = html + ' but sells for ' + response.amazonPrice;
         html = html + '. It was released on ' + response.releaseDate;
         html = html + ' with availability ' + response.availability;
         html = html + '.';
         $("#results").append(html);
      }

It’s verbose, but shows the information. When I look up the same book now, I get a more usable response than before:

JavaScript: The Definitive Guide: Activate Your Web Pages (Definitive Guides) by David Flanagan lists for $49.99 but sells for $31.49. It was released on 2011-05-10 with availability Usually ships in 24 hours.

That’s a good stopping point. There’s still a lot to do before this is a releasable web app. At the very least, I need to check for empty responses and escape any special characters in the data I display. I also want to maintain a list of books, not just look up a single book, and have that list persist between different invocations of this program. So there’s plenty more to come.

December 11, 2011

Chrome Web App Bookshelf – Part 3

Filed under: Uncategorized — Charles Engelke @ 11:21 am

Note: this is part 3 of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent, and part 2 added basic functionality. This part will finally call an Amazon Web Service via JavaScript.

I’ve been approaching this project from the top down so far, starting with creating a nearly empty shell as a Chrome app, then putting in the necessary logic to make it perform a minimal function. The next thing to add is actually calling the Amazon Web Service that looks up an ISBN and returns information about the product. For that, I’m going to switch to a more bottom-up point of view, focusing at first on just that web service call.

I’m going to put the JavaScript for talking to AWS into a separate file called aws.js. That file needs to be loaded into the web page before any file that references it, and after any file it references. I’ll be using jQuery, so the script tags in the main.html page need to look like this:

   <script src="jquery.js"></script>
   <script src="aws.js"></script>
   <script src="main.js"></script>

Within aws.js I’m going to declare a single function that will be used as a constructor for an Amazon Web Services accessing object. That object will have methods to perform the actual calls. Any access to AWS requires credentials. The REST API (which is what I’ll use) requires an access key ID and a secret access key, so I’ll pass those as parameters to the constructor. The overall code will look like this:

var AWS = function(accessKeyId, secretAccessKey){
   var self = this;

   self.itemLookup = function(itemId, onSuccess, onError){
      // code to call AWS Product Advertising API ItemLookup function
   }
}

I could have just used this throughout, instead of defining self as a copy of it, but the JavaScript this variable is kind of tricky in what it references at various times. During the initial call to the constructor it definitely refers to the new object being created, so I’ll save it and use that saved value from then on. This library can be invoked with code like the following (with real access credentials and a sample ISBN in place of the examples):

var aws = new AWS('my access id', 'my secret key');
aws.itemLookup('1234567890',
               function(){alert('it worked');},
               function(){alert('it failed');});
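
As for why self is saved: this tiny hypothetical Counter constructor (not part of the app) shows that a captured self keeps pointing at the instance even in places where this would not:

```javascript
function Counter(){
   var self = this;       // `this` here is the new instance being constructed
   self.count = 0;
   self.increment = function(){
      // Inside this plain inner function, `this` would be the global
      // object (or undefined in strict mode), but the captured `self`
      // still refers to the Counter instance.
      function bump(){ self.count++; }
      bump();
   };
}
```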

Now, what does that missing code look like? AWS REST API calls use various HTTP methods, but most of them (including this one) just use GET with no special HTTP headers. So if we can build the right URL it will be easy to invoke it. The form of that URL is endpoint?parameters, where endpoint is a web address specific to the API family, and parameters is a normal query string of the form name1=value1&name2=value2&…&nameN=valueN, where the names and values depend on the specific function.

The ItemLookup function I want to use is part of the AWS Product Advertising API. For that API, the endpoint is https://webservices.amazon.com/onca/xml (you can use the http version instead, but I always use the secure version if at all possible). Regardless of the function called, the parameters must always include:

  • Service – the value is always AWSECommerceService for this API
  • AWSAccessKeyId – the accessKeyId part of the credentials
  • AssociateTag – this is a new requirement since November 2011; I’m going to have to add this to either the code, the constructor call, or the method call
  • Operation – the name of the function, ItemLookup in this case
  • Timestamp – when the request was created; AWS will only honor it for 15 minutes to prevent future “replay” attacks
  • Signature – a cryptographic signature created from all the other parameters and the secret access key

The ItemLookup function requires additional parameters:

  • ItemId – identifies the item to find, or a comma-separated list of up to ten items
  • ResponseGroup – tells how much detail we want in the response; I’m going to have to experiment with the various possibilities to see which groups include the data I want

Instead of just creating the query string directly out of these parameters, I’ll use an array of name/value pairs in my code, create the signature from that, then build the query string to use. The code shapes up as follows:

      var params = [];
      params.push({name: "Service", value: "AWSECommerceService"});
      params.push({name: "AWSAccessKeyId", value: accessKeyId});
      params.push({name: "AssociateTag", value: associateTag});
      params.push({name: "Operation", value: "ItemLookup"});
      params.push({name: "Timestamp", value: formattedTimestamp()});
      params.push({name: "ItemId", value: itemId});
      params.push({name: "ResponseGroup", value: "ItemAttributes"});

      var signature = computeSignature(params, secretAccessKey);
      params.push({name: "Signature", value: signature});

      var queryString = createQueryString(params);
      var url = "https://webservices.amazon.com/onca/xml?"+queryString;

This code assumes that a variable named associateTag already exists. I’m going to add it as a parameter to the main constructor function to make that happen. This code also invokes several helper functions: formattedTimestamp, computeSignature, and createQueryString. I’m going to have to write them inside of this library. The code then needs to make an HTTP GET request to that URL and (if the call is successful) pull the desired data out of the response body, passing that to the onSuccess handler.

I’ll tackle the new functions first, from easiest to hardest. formattedTimestamp just needs to return the current time in a standard format: YYYY-MM-DDTHH:MM:SSZ (the T is a separator between date and time, and the Z indicates UTC time). Actually, I could cheat here if I wanted to. I’ve found that any date in the future is accepted by AWS, so I could hard code the result of this function as 9999-12-31T23:59:59Z. But that strikes me as a loophole in the service that may be closed in the future, so I’ll play fair here.

   function formattedTimestamp(){
      var now = new Date();

      var year = now.getUTCFullYear();

      var month = now.getUTCMonth()+1; // otherwise gives 0..11 instead of 1..12
      if (month < 10) { month = '0' + month; } // leading 0 if needed

      var day = now.getUTCDate();
      if (day < 10) { day = '0' + day; }

      var hour = now.getUTCHours();
      if (hour < 10) { hour = '0' + hour; }

      var minute = now.getUTCMinutes();
      if (minute < 10) { minute = '0' + minute; }

      var second = now.getUTCSeconds();
      if (second < 10) { second = '0' + second; }

      return year+'-'+month+'-'+day+'T'+hour+':'+minute+':'+second+'Z';
   }
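
For what it’s worth, browsers that support ES5 offer a shortcut: Date.prototype.toISOString produces almost exactly this format, leaving only the milliseconds to trim. A sketch of that alternative:

```javascript
// Alternative using ES5's toISOString, which returns strings like
// "2011-12-11T16:21:05.123Z"; stripping the milliseconds yields the
// YYYY-MM-DDTHH:MM:SSZ format AWS expects.
function formattedTimestamp(){
   return new Date().toISOString().replace(/\.\d{3}Z$/, "Z");
}
```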

createQueryString is a bit trickier, but not much. I just need to build a query string in the standard format. However, I have to remember to URI encode the names and values, in case they include any special characters. And I’m going to add the parameters in sorted order by name, because that will be useful later when computing a signature according to AWS’s rules.

   function createQueryString(params){
      var queryPart = [];
      var i;

      params.sort(byNameField);

      for(i=0; i<params.length; i++){
         queryPart.push(encodeURIComponent(params[i].name) +
                        '=' +
                        encodeURIComponent(params[i].value));
      }

      return queryPart.join("&");

      function byNameField(a, b){
         if (a.name < b.name) { return -1; }
         if (a.name > b.name) { return 1; }
         return 0;
      }
   }

This function actually changes the parameter it is passed: it sorts the array it is given. It would be better behaved to make a copy and sort the copy, but for now I’ll just note this fact and keep it simpler.
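
The copy-then-sort variant that paragraph alludes to is a one-line change; a sketch:

```javascript
// Non-mutating variant: sort a shallow copy via slice(), leaving the
// caller's array in its original order.
function createQueryString(params){
   var sorted = params.slice().sort(byNameField);
   var queryPart = [];

   for (var i = 0; i < sorted.length; i++){
      queryPart.push(encodeURIComponent(sorted[i].name) +
                     '=' +
                     encodeURIComponent(sorted[i].value));
   }

   return queryPart.join("&");

   function byNameField(a, b){
      if (a.name < b.name) { return -1; }
      if (a.name > b.name) { return 1; }
      return 0;
   }
}
```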

Now it’s time for the hard one, computeSignature. Actually, with the steps already taken it’s not that hard any more. The AWS signature is an HMAC-SHA256 of a special string that includes the HTTP method, the host name of the endpoint, the path of the request, and the unsigned query string (as created above), keyed with the secret access key. Of course, performing that cryptographic operation from scratch would be pretty hard, but I don’t have to. I can use the Stanford JavaScript Crypto Library. I downloaded the minified version of it and put it in my project folder in the file sjcl.js, and loaded it in the main page with a script tag before the aws.js reference there. With that in place, the computeSignature function is not too hard:

   function computeSignature(params, secretAccessKey){

      var stringToSign = 'GET\nwebservices.amazon.com\n/onca/xml\n' +
                         createQueryString(params);

      var key = sjcl.codec.utf8String.toBits(secretAccessKey);
      var hmac = new sjcl.misc.hmac(key, sjcl.hash.sha256);
      var signature = hmac.encrypt(stringToSign);
      signature = sjcl.codec.base64.fromBits(signature);

      return signature;
   }

The signing looks more complicated than it is because the hmac.encrypt function operates on bit strings, not normal JavaScript character strings, so there are extra steps to convert those back and forth.

With those preliminaries out of the way the code can create the URL to use to call the service. I’ll use jQuery to make the Ajax call:

      jQuery.ajax({
         type : "GET",
         url: url,
         data: null,
         success: extractAndReturnResult,
         error: returnErrorMessage
      });

This will call the URL and send a successful response to extractAndReturnResult or an unsuccessful one to returnErrorMessage. I’ve got to write those two functions, and then I should be done.

      function extractAndReturnResult(data, status, xhr){
         onSuccess(xhr.responseText);
      }

      function returnErrorMessage(xhr, status, error){
         onError('Ajax request failed with status message '+status);
      }

Both these functions need a lot of work! In particular, extractAndReturnResult doesn’t do what its name says at all. It just returns the raw response from Amazon. But that’s going to be useful for exploring the different options on the call, so I’m keeping it that way for now.

Putting all the above together (and adding the necessary associateTag parameter to the constructor), the aws.js file is:

var AWS = function(accessKeyId, secretAccessKey, associateTag){
   var self = this;

   self.itemLookup = function(itemId, onSuccess, onError){
      var params = [];
      params.push({name: "Service", value: "AWSECommerceService"});
      params.push({name: "AWSAccessKeyId", value: accessKeyId});
      params.push({name: "AssociateTag", value: associateTag});
      params.push({name: "Operation", value: "ItemLookup"});
      params.push({name: "Timestamp", value: formattedTimestamp()});
      params.push({name: "ItemId", value: itemId});
      params.push({name: "ResponseGroup", value: "ItemAttributes"});

      var signature = computeSignature(params, secretAccessKey);
      params.push({name: "Signature", value: signature});

      var queryString = createQueryString(params);
      var url = "https://webservices.amazon.com/onca/xml?"+queryString;

      jQuery.ajax({
         type : "GET",
         url: url,
         data: null,
         success: extractAndReturnResult,
         error: returnErrorMessage
      });

      function extractAndReturnResult(data, status, xhr){
         onSuccess(xhr.responseText);
      }

      function returnErrorMessage(xhr, status, error){
         onError('Ajax request failed with status message '+status);
      }
   }

   function formattedTimestamp(){
      var now = new Date();

      var year = now.getUTCFullYear();

      var month = now.getUTCMonth()+1; // otherwise gives 0..11 instead of 1..12
      if (month < 10) { month = '0' + month; } // leading 0 if needed

      var day = now.getUTCDate();
      if (day < 10) { day = '0' + day; }

      var hour = now.getUTCHours();
      if (hour < 10) { hour = '0' + hour; }

      var minute = now.getUTCMinutes();
      if (minute < 10) { minute = '0' + minute; }

      var second = now.getUTCSeconds();
      if (second < 10) { second = '0' + second; }

      return year+'-'+month+'-'+day+'T'+hour+':'+minute+':'+second+'Z';
   }

   function createQueryString(params){
      var queryPart = [];
      var i;

      params.sort(byNameField);

      for(i=0; i<params.length; i++){
         queryPart.push(encodeURIComponent(params[i].name) +
                        '=' +
                        encodeURIComponent(params[i].value));
      }

      return queryPart.join("&");

      function byNameField(a, b){
         if (a.name < b.name) { return -1; }
         if (a.name > b.name) { return 1; }
         return 0;
      }
   }

   function computeSignature(params, secretAccessKey){

      var stringToSign = 'GET\nwebservices.amazon.com\n/onca/xml\n' +
                         createQueryString(params);

      var key = sjcl.codec.utf8String.toBits(secretAccessKey);
      var hmac = new sjcl.misc.hmac(key, sjcl.hash.sha256);
      var signature = hmac.encrypt(stringToSign);
      signature = sjcl.codec.base64.fromBits(signature);

      return signature;
   }
}

The main.js file needs a little tweaking now to call this properly. The new version is:

$(document).ready(function(){
   var aws = new AWS('my access id', 'my secret key', 'my associate id');
   $("#lookup").click(lookupIsbn);

   function lookupIsbn(){
      var isbn = $("#isbn").attr("value");
      aws.itemLookup(isbn,
                     function(message){
                        message = message.replace(/&/g, "&amp;");
                        message = message.replace(/</g, "&lt;");
                        message = message.replace(/>/g, "&gt;");
                        $("#results").append(message);
                     },
                     function(message){
                        alert("Something went wrong: "+message);
                     }
                     );
   }
});

There are only two real changes here. First, the constructor is called first, to get an object for working with AWS before anything else happens. Second, instead of just dumping the response message in the web page the code first replaces all special HTML characters with their equivalent character entities. That way, the message will be shown as text instead of interpreted as HTML, possibly including code.
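
Those three replace calls make a natural helper; a sketch of a reusable version (a hypothetical escapeHtml function, which the post instead writes inline):

```javascript
// Escape the characters HTML treats specially, ampersand first so the
// entities just inserted aren't themselves re-escaped.
function escapeHtml(text){
   return text.replace(/&/g, "&amp;")
              .replace(/</g, "&lt;")
              .replace(/>/g, "&gt;");
}
```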

And now I’m ready to go. I put an ISBN in the text box and pressed the button… and got this:

Error message: Something went wrong: Ajax request failed with status message error

That doesn’t tell me much, though. But the JavaScript console (opened with Control-Shift-J) is more helpful:

Origin is not allowed by Access-Control-Allow-Origin

Web browsers normally do not allow pages to make requests to origins other than their own; that same-origin security restriction is why this request was disallowed. There is a new Cross-Origin Resource Sharing specification that allows such requests when the target web site decides it is safe, but AWS doesn’t support it. Not yet, anyway; I’m still hoping. However, Chrome apps can bypass this restriction if they ask. The manifest.json file needs to be changed to request this:

{
   "name": "Books to Buy",
   "description": "Keep a list of books to buy on Amazon, with their price and availability",
   "version": "1",
   "app": {
      "launch": {
         "local_path": "main.html"
      }
   },
   "icons": {
      "16":    "icon_16.png",
      "128":   "icon_128.png"
   },
   "permissions": [
      "https://webservices.amazon.com/*"
   ]
}

The permissions entry tells Chrome to allow this app to make requests to any URL matching the wild card given. I removed and reinstalled the app after making this change, and tried again. And got this:

Page showing XML response from AWS

Success! Sort of. A lot of XML came back from the request, and I need to pull the necessary data out of it. I also need to explore various response groups to get the data I need. And all that will be the subject of the next post in this series.

December 6, 2011

Chrome Web App Bookshelf – Part 2

Filed under: Uncategorized — Charles Engelke @ 11:54 pm

Note: this is part 2 of the Bookshelf Project I’m working on. Part 1 was a Chrome app “Hello, World” equivalent.

Now that we can build a web page and install it as an app in Chrome it’s time to make the page do something. Ideally, something to do with Amazon Web Services. This project is going to work by incrementally adding features, and I will start small. I want a page that has a field to enter an ISBN and a button to ask it to be looked up at Amazon. Information about the matching book (or an error message if there isn’t one) will be displayed below that in the page. While I’m at it, I’ll also change the page title and add a header explaining what the page is.

The new page is still quite simple:

<!DOCTYPE html>
<html lang="en">
<head>
   <meta charset="utf-8" />
   <title>Books to Buy</title>
</head>
<body>
   <h1>Books to Buy</h1>
   <div id="dataentry">
      <input type="text" id="isbn" />
      <button id="lookup">Look Up!</button>
   </div>
   <div id="results">
   </div>
</body>
</html>

If you have already created and installed the basic web app, you can edit the main.html file to match this. When you next run or refresh the app, you should see something like the following:

Books to Buy page, first try

Of course, if you enter an ISBN and press the button nothing happens. I have to write JavaScript to respond to the button press, call an AWS API to look up the information, parse the information, and place it into the empty results div.

I don’t like to put JavaScript in my web pages directly, so I’ll create a separate file for it and load it by putting a script tag right after the title. There are people who argue for placing script tags at the very end of a page for performance reasons but I don’t see it making much difference here, and I still like them near the top. I’ll put the JavaScript code in a file called main.js, and add a line right after the title tag:

   <script src="main.js"></script>

Since this page is HTML5 (thanks to the <!DOCTYPE html> declaration at the top), I don’t need to specify that this is a JavaScript file; HTML5 assumes all script files are. I don’t use a self-closing tag because script is not a void element in HTML, so the parser doesn’t treat a self-closing script tag as a complete element; it needs an explicit closing tag.

After saving the file and hitting refresh, nothing looks different, because the new JavaScript file doesn’t exist yet. I brought up the Chrome Developer Tools by hitting Ctrl-Shift-J, and saw this error message in the console:

chrome-extension://jpnlfejeoenacfaonfmmdiofnheemppo/main.js Failed to load resource

By the way, from this I see that the browser refers to my app with a URL starting with chrome-extension:// followed by an apparently randomly assigned string. I don’t know how that will be useful, but it’s interesting.

I need to create a main.js file in the same folder as the main.html file, and put code in it to:

  • Attach an event handler to the button, so that when a user clicks it my code will run.
  • Have that code read the ISBN from the input text box.
  • Call the AWS service to look up the information for that ISBN.
  • If the call works, pull the necessary data out of the response and display it in the results div.
  • If the call fails, either put an error in the div or pop up an alert box.

I’m going to use jQuery to help with this work. That’s a JavaScript library that adds a lot of useful features to JavaScript, and which handles subtle variations between how different browsers implement JavaScript. That second benefit is less important with HTML5, which causes browsers to behave much more consistently than ever before, but I’m used to jQuery and want to use it. I have to download it (either the compressed or uncompressed one will work) and put a copy of it in the same directory as the main web page. I’ll call that downloaded file jquery.js and I’ll add a script tag for it just before the main.js script tag:

   <script src="jquery.js"></script>

Now, what goes in the main.js file? The first thing to do is to attach an event handler to the button’s click event. That’s easy with jQuery:

   $("#lookup").click(lookupIsbn);

The $ is actually a jQuery JavaScript function name (it’s an alias for a function named jQuery). If you give it a string with a CSS selector (which #lookup is, referring to the element with the id lookup) it will return a jQuery object referring to that element, which has added to it a lot of useful methods. One of the methods it adds is click, which takes a function as a parameter. In this case the code is passing a function called lookupIsbn, which means that function should be invoked whenever anybody clicks that button.

There are two problems with this line. The first is pretty obvious: it says to run a function called lookupIsbn but there is no such function. Not yet. I’ll write it soon. The second is more subtle. The browser will execute the JavaScript as soon as possible, which may be before the web page has been fully read and processed. So there may not be an element with id lookup when this code runs and nothing will happen. Or maybe the timing will work out okay and this will do what I want. That would actually be worse because then the code would randomly succeed or fail. I’d rather have consistent behavior, even if that’s consistent failure.

The browser builds a data structure for each page as it reads it, starting with the document element that contains everything else. When it finishes building the page it triggers an event handler on that document element. So I can set up that event to run this code, making it run once the page is ready. jQuery makes that easy by adding a ready method to the document element when we wrap it. So the code should be:

$(document).ready(attachClickHandlerToButton);

function attachClickHandlerToButton(){
   $("#lookup").click(lookupIsbn);
}

In fact, though, people rarely define a named function (like attachClickHandlerToButton) to deal with an action that will happen only once. Instead, they define an anonymous function in place, as follows:

$(document).ready(function(){
   $("#lookup").click(lookupIsbn);
});

I could use the same trick in place of lookupIsbn, but I get uncomfortable when I nest anonymous functions too deeply. The browser’s fine with it, but I’m not. So I have to write lookupIsbn now. That can be defined after the JavaScript above, but I’d rather define it inside the anonymous function, like so:

$(document).ready(function(){
   $("#lookup").click(lookupIsbn);

   function lookupIsbn(){
   // put the code here
   }
});

This prevents any JavaScript code outside of the anonymous function from seeing or using the lookupIsbn function. I don’t much care if they could use it, but if some other code (perhaps in an included third-party library) used the same function name things would get troublesome. This keeps my function definition private and avoids interference with other code.

What goes in there seems pretty straightforward. I have to read the ISBN from the input box, ask AWS for information about that ISBN, parse the result into a readable form, and then display it. That would be something like:

      var isbn = $("#isbn").val();
      var message = askAwsAboutIsbn(isbn);
      $("#results").append(message);

The first line finds the element with id isbn, which is the input element, gets its current value (what the user entered) with jQuery’s val method, and saves it in a new variable named isbn. The second line magically asks AWS for information, presumably getting a nicely formatted chunk of text back. The last line finds the element with id results and puts an element built from the message text inside of it.

There are some things wrong here. If the message coming back from AWS has HTML inside of it, this code will insert it directly into the page, which might even end up running code. I’ve got to fix that. But a bigger problem is the magic askAwsAboutIsbn function call. My code calls the function, waits for a response, then uses the result. But that’s going to involve talking to a remote web site, which is relatively slow. My web page is going to be frozen while waiting for that answer.
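To make the HTML risk concrete, one way to neutralize it is to escape the HTML-significant characters before inserting the text. This is just a sketch, not the fix I’ll necessarily use; in the browser, jQuery’s text method does essentially this for you.

```javascript
// Minimal HTML-escaping helper (illustration only). Order matters:
// escape & first so we don't double-escape the entities we create.
function escapeHtml(s) {
   return s.replace(/&/g, "&amp;")
           .replace(/</g, "&lt;")
           .replace(/>/g, "&gt;")
           .replace(/"/g, "&quot;");
}

var unsafe = '<img src=x onerror="alert(1)">';
console.log(escapeHtml(unsafe));
// &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```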

The way to handle this freeze is to make the request asynchronous. That is, call askAwsAboutIsbn to get the answer, and give it a function to call when it’s done. Then immediately return instead of waiting for the answer. To do that, the magic askAwsAboutIsbn has to be told not only what ISBN to look up, but also given a function to execute when it’s done. So the code should look something like this:

      var isbn = $("#isbn").val();
      askAwsAboutIsbn(isbn, function(message){
         $("#results").append(message);
      });

This magic box will get the answer without the main program waiting for it. When the answer arrives, it will call the anonymous function, passing along the message it got, and that function will put the message in the right place on our page. So all that’s left is to write askAwsAboutIsbn. But that’s a pretty tall order, so I’m going to leave it for next time. For now, I’ll just write a stub that returns a canned response:

      function askAwsAboutIsbn(isbn, handleResult){
         handleResult("I don't know the info for "+isbn+" yet.");
      }

This stub just immediately calls the function it was given with a canned response as the parameter. Its purpose is just to see if everything is wired together right.
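To see why the callback style matters, here’s a plain-JavaScript sketch (names invented, the delay simulated with a timer) of a lookup that answers later. The caller hands over a function and keeps running instead of freezing:

```javascript
// A fake lookup that delivers its answer later, via setTimeout,
// instead of blocking the caller while it "works".
function fakeLookup(isbn, handleResult) {
   setTimeout(function () {
      handleResult("Fake info for " + isbn);
   }, 50);
}

var events = [];
fakeLookup("0123456789", function (message) {
   events.push(message);
});
events.push("caller kept going");

// At this point events is ["caller kept going"]; the answer arrives later.
```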

Putting everything together, there is now a folder called bookshelf that contains six files: main.html, main.js, jquery.js, icon_128.png, icon_16.png, and manifest.json. I haven’t changed the last three of those files and I just downloaded jquery.js. The other two files are the main.html file:

<!DOCTYPE html>
<html lang="en">
<head>
   <meta charset="utf-8" />
   <title>Books to Buy</title>
   <script src="jquery.js"></script>
   <script src="main.js"></script>
</head>
<body>
   <h1>Books to Buy</h1>
   <div id="dataentry">
      <input type="text" id="isbn" />
      <button id="lookup">Look Up!</button>
   </div>
   <div id="results">
   </div>
</body>
</html>

and the main.js file:

$(document).ready(function(){
   $("#lookup").click(lookupIsbn);

   function lookupIsbn(){
      var isbn = $("#isbn").val();
      askAwsAboutIsbn(isbn, function(message){
         $("#results").append(message);
      });
   }

   function askAwsAboutIsbn(isbn, handleResult){
      handleResult("I don't know the info for "+isbn+" yet.");
   }
});

If all the files are right the application should work when I enter an ISBN and click the button. And it does:

First version of page showing the result

That’s enough for this post. Next time I’ll actually use the AWS web service to look the information up for the given ISBN, parse the result, and display it. There will be plenty to do after that, though: saving data persistently, improving the display, adding a settings page, and packaging the app. So there’s a lot more to come.

December 5, 2011

Chrome Web App Bookshelf – Part 1

Filed under: Uncategorized — Charles Engelke @ 10:44 am

Note: this is part of the Bookshelf Project I’m working on.

Before I can do anything useful in a Chrome Web Application, I’ve got to figure out the very basics. This post is going to cover my “Hello, World” equivalent start, maybe going a bit deeper than that.

At a minimum, my Chrome web app needs four things:

  1. A web page to display and run.
  2. An icon to show on the Chrome applications page.
  3. A favicon.
  4. A manifest describing where the above pieces are, plus anything else I end up needing.

I started by creating a folder to put all these pieces in. I named it bookshelf, but it could have been called anything.

My first web page is going to be just about the bare minimum for HTML5:

<!DOCTYPE html>
<html lang="en">
<head>
   <meta charset="utf-8" />
   <title>A sample page</title>
</head>
<body>
   <p>This is the sample page.</p>
</body>
</html>

I put that text into a file called main.html in my folder, then went searching for icons. I found a very nice one, in a variety of sizes, at designmoo.com. It’s called Book_icon_by_@akterpnd, and is licensed under a Creative Commons 3.0 Attribution license, so there should be no problems with my using it here. I need a 128 by 128 icon for the application page, and 16 by 16 for the favicon. The set didn’t have a 16 by 16 icon in it, so I resized the smallest one for that. I called the two icons I ended up with icon_128.png and icon_16.png, and put them in my folder, too. They look pretty good, don’t they?

128 by 128 icon for project16 by 16 icon for project

Now I have to write the manifest file. It’s in JSON format, which is just text in a specific syntax. With a text editor I created manifest.json in my folder, with the following content:

{
   "name": "Books to Buy",
   "description": "Keep a list of books to buy on Amazon, with their price and availability",
   "version": "1",
   "app": {
      "launch": {
         "local_path": "main.html"
      }
   },
   "icons": {
      "16": "icon_16.png",
      "128": "icon_128.png"
   }
}

You can see that this file points to my three other files, and also gives the app a name, description, and version. I don’t know what the best practices are for the version numbering, so for now I’ll just keep it at 1.

I think I have a complete Chrome app now. You can create a folder with these files to see for yourself. Once you have these four files in a folder, open Chrome and click the wrench icon in the upper right to get the menu. Select Tools, then Extensions. Check the “Developer Mode” box on the resulting page if you haven’t already, then click the Load Unpacked Extension button. Select the folder with the manifest in it, and you should be good to go. It should look like this:

Chrome extensions page showing the new app

Pretty nice, I think. Go ahead and close this tab. To run the app (with the version of Chrome current as I write this), open a new tab and click Apps on the bar at the bottom of the resulting page. The new app should show up at the end of the page:

New tab page showing new app

Click on the application icon, and the page should open, and even have the right favicon:

The web app, opened

Okay, that’s not much, but this post is the “Hello, World” equivalent. Next time we will add a skeleton for a minimal application, one where you can enter an ISBN and have the page look it up and display the result in the page.
