Charles Engelke’s Blog

December 4, 2011

The Bookshelf Project – Using Amazon Web Services from JavaScript

Filed under: Uncategorized — Charles Engelke @ 8:08 pm

Many years ago, I got frustrated with using Amazon’s “save for later” shopping cart function to keep track of books I probably wanted to buy someday. The problem I was trying to solve was that I’d find out about an upcoming book by one of my favorite authors months before publication and I didn’t want to forget about it. I could have just preordered the book, but back then there was no Amazon Prime so I always preferred to bundle my book orders to save on shipping. So I’d add the book to the shopping cart and tell it to save it for later. But (at least back then) Amazon was willing to save things in your cart for only so long, and my books would often disappear from the cart before they were published.

I’m a programmer, and Amazon had an API (application program interface), so I did the obvious thing: wrote a program to solve my problem. It was just for me, so I wrote the simplest thing that could possibly work, figuring I’d improve it some day. It was a simple Perl CGI script that I ran under Apache on my personal PC. It used the (then very primitive) Amazon Web Service to look up the book’s information given an ISBN, and saved its data in a tab delimited text file.

That was a long time ago, probably very soon after Amazon introduced its first web services. And I’m still using it today with almost no changes. But I’m no longer happy with it, for several reasons:

  • It only recognizes the old 10 digit ISBN format, not the newer 13 digit one.
  • It can’t find Kindle books at all.
  • It runs only on a PC with an Apache web server installed.
  • The data is available only on that device.

The cloud has spoiled me. I want this program to run on any of my web-connected devices, and I want them all to share a common data store. Hence this project.

“Run on any of my web-connected devices” pretty much means running in a browser, so I’ll have to write it in HTML and JavaScript. I’ll use HTML5 and related modern technologies to store data in the browser, so I can see my saved book list even when off-line.

I know HTML and JavaScript but I’m no expert, so I’m going to build this incrementally, learning as I go. Step 1 will be to get a web page that just uses Amazon Web Services (AWS) to look up the relevant information given an ISBN. And right away, that’s going to require a detour. As a security measure, web browsers won’t let a web page connect (in a useful enough way) to any address but the one hosting the web page itself. My web page isn’t going to be at the same address as AWS, so it seems this is a hopeless task.

There is a way out, called Cross-Origin Resource Sharing (CORS). The target web site can tell the web browser that it’s okay: it’s safe to let a “foreign” web page access it. Modern browsers support CORS, so I should be okay. Unfortunately, AWS doesn’t (yet) support CORS, so that’s out. Foiled again!
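
The mechanics are simple: the browser adds an Origin header to the cross-site request, and the service answers with a header saying which origins it will allow. As a sketch (the origin shown is just a placeholder for wherever my page ends up living), the browser would send a request header like

   Origin: https://bookshelf.example.com

and a CORS-enabled service would answer with

   Access-Control-Allow-Origin: https://bookshelf.example.com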

But there is a stopgap. I can create a Chrome Web Application. That’s pretty much just a normal web page, except that it can tell the web browser to allow access to foreign services. And that’s just what I will do, starting in my next blog post. That will take a while, but after that’s done, I can explore various directions to take it:

  • Maybe AWS will support CORS soon, in which case I’ll be able to use almost the exact same solution on any modern web browser, even on tablets and phones.
  • I can always write server-side code to “tunnel” the web service requests through my server on the way to AWS. That works, but I think it’s inelegant.
  • I might try creating an HP TouchPad application, which uses the same kinds of technologies as the web, but to create native apps. I find that approach very appealing, even though the TouchPad is more-or-less an orphan device now. I’ve got one, and this would be an excuse to develop for it.
  • Tools like PhoneGap let you wrap a web application in a shell to allow it to run as a native app on various mobile platforms. I think they allow operations that normal browsers block, such as CORS. I could find out, anyway.

So I’ve got a lot of potential things to learn and try. First up: creating a Chrome web application, in many steps. If it comes out nice, I’ll even try publishing it in the Chrome Web Store.
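
For the curious, the piece of a Chrome app that makes the cross-origin calls possible is its manifest, which can request permissions that an ordinary web page can’t have. A minimal sketch (the app name, file name, and AWS endpoint pattern are placeholders, and the details will no doubt change as I actually build it):

   {
     "name": "Bookshelf",
     "version": "0.0.1",
     "app": {
       "launch": { "local_path": "bookshelf.html" }
     },
     "permissions": [
       "https://webservices.amazon.com/*"
     ]
   }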

November 28, 2011

HTTP Strict-Transport-Security

Filed under: Uncategorized — Charles Engelke @ 3:28 pm

I figured it would be easy to add HTTP Strict-Transport-Security to a web application, so I gave it a try. It was easy to add it. But not that easy to get it to work.

The purpose of Strict-Transport-Security is for a web site to tell browsers to only connect to the site via a secure connection, even if the user just enters a normal http:// URL. The site I was working on already redirected all http requests to a secure connection, so what’s the point of this here? In this case, just to avoid a potential (but for this site, unlikely) man-in-the-middle attack when the user first connects. That non-secure initial request could be intercepted and the user redirected to some other site that looks right, but is an imposter. With Strict-Transport-Security, the browser will never even make the initial non-secure request, avoiding this possibility.

Adding this to a web site is trivial: just return a special header with the secure web pages. For example, I returned:

Strict-Transport-Security: max-age=7776000

Once the browser sees this header from any secure page on your website, it is supposed to remember (for the next 7,776,000 seconds, or 90 days) to never try to connect to the site other than by a secure connection. It’s also supposed to prevent the user from overriding any SSL certificate warnings, so if somebody does spoof your website with a bad certificate it won’t even give the user a chance to override the warnings and connect to the site.
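
One easy way to return the header is from the web server configuration rather than from application code. With Apache and mod_headers, for instance, a single directive in the SSL virtual host would do it (a sketch; it assumes mod_headers is enabled):

   Header always set Strict-Transport-Security "max-age=7776000"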

Only it didn’t do that. It didn’t do anything at all that I could tell. I had a self-signed certificate, and the browser let me override the warning. I had a certificate with a name not matching the URL, and the browser let me override the warning. I tried to connect via http instead of https, and the browser went ahead and did it. (By the way, by “browser” I mean Chrome, but Firefox and Opera behaved the same way.)

It turns out that the browser will only obey this header if it is sent from a secure web page (as clearly stated in the documentation) that has no certificate warnings or errors (something I didn’t realize). Once I set up my test site to appear to be at the production URL, the Strict-Transport-Security header started working as expected.

I wasn’t expecting it to work this way, but it turns out to be really useful behavior. Now I can deploy test and development sites with wrong certificates (self-signed, or for the production instead of test URL) and not have Strict-Transport-Security lock me out. It only activates the lock in the first place if you’ve already shown you can open it, by first connecting to the secure site with no certificate problems. Now I just have to be careful to check that it does work when put into production.

November 17, 2011

Kindle Fire out of box experience #kindle

Filed under: Uncategorized — Charles Engelke @ 8:37 am

My new Kindle Fire was waiting for me when I got home last night. So far, I’m very impressed.

It was packaged in a custom cardboard shipping box, opened by peeling off a well-marked strip. Once opened, there were only three things in the box: the Kindle Fire itself, a micro-USB power supply, and a small card welcoming the user and telling how to turn it on. The Kindle Fire was in a plastic wrapper that was a bit hard to slide off, though I could have just torn it off if I’d been in a hurry.

I guess I hit the power button while I was removing the plastic wrapper, because once I got the Fire out it was already turned on. I had to drag a ribbon (from right to left for a change) to get the first welcome screen to show.

What a contrast to when I turned on an iPad for the first time! The iPad just showed me an icon ordering me to connect it to a PC (which also required downloading and installing iTunes). With the Kindle Fire, I was just taken through a short dialog. I was first prompted to connect to a Wi-Fi network. The unit showed me available ones, I picked mine, and entered the password when prompted. Then I went to a registration screen, which in my case didn’t require any effort at all because Amazon had already set it up. It then started downloading a software update and suggested I plug it in to get a full charge. I don’t know why a brand new unit should need a software update, but this was only a minor annoyance.

It was already about 90% charged, but I plugged it in anyway and waited the couple of minutes the download required, and then got back to the unit. And that was it, the Kindle Fire was ready to use, and registered with Amazon. All my books and music were available immediately (when I clicked the Cloud button instead of the Device button), as well as several apps.

I opened my current book and it took only a few seconds to download it to the device and open it to the current page I was reading. Amazon Prime video played back perfectly, as did my music I already had in Amazon’s cloud. I installed Netflix and entered my credentials, and it played back great as well.

Oh, I also installed the Barnes and Noble Nook application. That wasn’t in Amazon’s app store (go figure) but was easy to get using GetJar. It works great, too. Though I’m unlikely to actually buy any non-free books with it, because why would I? Thanks to the apparent collusion between Apple and the major book publishers, most book prices are fixed and cost the same regardless of seller. I like Amazon’s ecosystem, so there’s no reason to deal with anybody else.

How do I like the Kindle Fire as a tablet? It’s too early to tell much, but the smaller form factor is definitely better for me than the iPad or Galaxy Tab 10.1. It’s easy to hold it in one hand while using it, and the display is plenty large enough to use well. I think this form factor is going to become much more common than the larger ones.

August 23, 2011

SSL Client Authentication with Apache and Subversion

Filed under: Uncategorized — Charles Engelke @ 3:26 pm

Setting up Subversion with the Apache web server is pretty easy. Setting it up with SSL is still not too difficult. But I’ve been trying to do that plus require the Subversion client to authenticate to the server using an SSL client certificate. That’s not so easy. I’m not quite there yet, but I’m close and don’t want to forget what I’ve done so far, so I’m documenting it here. When I’m completely done I’ll update this post with the final steps.

First I needed a server.

I chose a Linux server on Amazon Web Services Elastic Compute Cloud. The one I chose is currently the second one in the list, “Basic 64-bit Amazon Linux AMI”, and I launched it as a micro instance since Subversion shouldn’t require much in the way of computing power. This kind of server costs only two cents an hour to run, or less than $15 per month.

Next, I logged on to the server, became root, and used yum to install the basic software needed:

   yum -y install httpd subversion mod_ssl mod_dav_svn

And started the web server:

   service httpd start

Now I had a working web server, both over HTTP and HTTPS (SSL). When I tried to connect to the SSL server my browser warned me that I shouldn’t trust it; that’s because it has a self-signed SSL certificate that mod_ssl installed by default. I may fix that later by buying a commercial certificate, but maybe not. I know the certificate is okay because I installed it myself.

Step 3 is to get Subversion working.

That’s a two part operation: set up an area for the repositories (and set up the repositories in it), and then configure Apache to know about that area.

Part 1: set up the area and the first repository (still as root):

   mkdir /var/www/svn
   svnadmin create /var/www/svn/project1
   chown -R apache:apache /var/www/svn/project1

I’m not a Linux sysadmin so I don’t know whether /var/www/svn is a good place for this, but it works. The chown command is important because the Subversion operations will be performed by the Apache web server, so it needs permission to operate on the repository. By default, these repositories are readable by anybody on the server machine but can only be written to by the Apache web server.

Part 2: configure Apache:

For the Linux flavor I’m using the web server configuration files are in /etc/httpd. The main configuration file is /etc/httpd/conf/httpd.conf and the SSL server configuration is /etc/httpd/conf.d/ssl.conf. There’s also a Subversion configuration file in /etc/httpd/conf.d/subversion.conf but it doesn’t seem to be up to date with changes in Apache web server version 2.2.

In any case, I thought I needed to add (or uncomment) a line in the LoadModule section of the main configuration file, but it was already included automatically as part of the subversion.conf file:

   LoadModule dav_svn_module modules/mod_dav_svn.so

That line is the module that actually provides Subversion functionality to the Apache web server. There will be other lines needed for user authentication which will have to be added once I’m done getting client certificate authentication working right.

Next, I had to add a Location section to the SSL server configuration file. Let’s say I want my Subversion repositories to be accessed via https://my.host.name/svn/repositoryname. I added the following section to the SSL configuration file:

   <Location /svn>
      DAV svn
      SVNParentPath /var/www/svn
   </Location>

When I restarted the web server (with service httpd restart) I had a working Subversion server over SSL. I tried it from my PC:

   svn checkout https://my.host.name/svn/project1 project1

As with the web browser, I was warned about the server’s self-signed certificate, but chose to connect anyway. Since I hadn’t set up any authentication yet, the checkout didn’t ask who I was. I even added a file and checked it in for practice, then pointed my web browser to https://my.host.name/svn/project1 to see the file there. It worked fine. Now to mess that all up!
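
If you’re following along, a practice check-in can be as simple as this (the file name is made up):

   cd project1
   echo hello > README.txt
   svn add README.txt
   svn commit -m "Practice commit"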

Step 4: Configure the server to demand a client certificate.

First I just want to get the web server to demand a good certificate from the client. Once that works I’ll look into using the identification in that certificate to control access to Subversion.

Inside the <Location> section for Subversion, I added three lines as follows:

   <Location /svn>
      DAV svn
      SVNParentPath /var/www/svn
      SSLVerifyClient require
      SSLVerifyDepth 10
      SSLCACertificateFile /etc/httpd/conf/myca.crt
   </Location>

The SSLVerifyClient line tells Apache to require the connecting client to authenticate the SSL connection using an acceptable certificate; the other two lines tell Apache how to make sure the certificate is acceptable. The SSLVerifyDepth directive says that it will follow a chain of certificates up to 10 deep if needed, and the SSLCACertificateFile is the location of the Certificate Authority root file you want the certificates to have been issued by. Following a chain 10 deep is probably overkill, but won’t hurt anything.

I’ve specified that my CA root certificate is in /etc/httpd/conf/myca.crt. That’s not a standard location for certificate authorities, but eventually I’ll want to use a private certificate authority just for this purpose, and not rely on public ones.

When I tried to restart the web server with service httpd restart, it failed because /etc/httpd/conf/myca.crt didn’t exist. So, just to get past this step for the moment, I copied my server’s certificate there:

   cp /etc/pki/tls/certs/localhost.crt /etc/httpd/conf/myca.crt

Now when I tried to start the server it worked. But when I went to my working copy on my PC and tried to do an svn update, I couldn’t. I was prompted for a client certificate. Which I don’t have yet.

I tried viewing the repository with a web browser, and that didn’t work, either. But at least Chrome gave me a hint:

Text of Chrome error message

So it was time to get a client certificate. Eventually I want my own private certificate authority, but first I just want to get this working in a minimal way today. I went to Verisign and got a free trial “Digital ID for Secure Email” at www.verisign.com/digital-id/index.html. (I used Internet Explorer to request and fetch the certificate.)  Then I exported it (including the private key) to the file testcert.pfx. I had to make up a password for it to export it.

I tried an svn update in my working copy again, and this time when I was prompted for a certificate I gave it that file. And got a new error message:

   C:\temp\project1> svn up
   Authentication realm: https://my.host.name:443
   Client certificate filename: testcert.pfx
   Passphrase for 'C:/Temp/project1/testcert.pfx': ********
   svn: OPTIONS of 'https://my.host.name/svn/project1': Could not
   read status line: SSL error: tlsv1 alert unknown ca (https://my.host.name)

Since the Subversion client is pretty complex in its own right, I decided to start debugging with a browser first. But it showed the same error message as before. I think I’m going to need to set up a correct CA root certificate.

I got the root certificate out of the Windows certificate store. When I examined the digital ID in that store via Control Panel / Internet Options / Content / Certificates / Personal, the certification path showed it is signed by “VeriSign Class 1 Individual Subscriber CA – G3”, which in turn is signed by “VeriSign Class 1 Public Primary Certification Authority – G3”. I looked for the first one in the Intermediate Certification Authorities store, exported it (in the base 64 format) to C:\Temp\intermediateca.cer, uploaded it to the server, and put it in /etc/httpd/conf/myca.crt. After restarting the server, I tried again in the browser.

Success! It asked me for a certificate, and showed me the one I had installed from VeriSign. I clicked okay, and… Failure! Same error message as before.

So I repeated the steps, but with the second, full root certificate from the Trusted Root Certification Authorities Windows certificate store. And this time, it worked! In Chrome. Not in Internet Explorer at first; I had to close all of its windows and start over. But the Subversion client still doesn’t work.
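
In hindsight, a quicker way to see exactly which CA a client certificate chains to (instead of guessing from the Windows certificate dialogs) is OpenSSL; a sketch, assuming openssl is available and using the testcert.pfx exported earlier:

   # pull just the client certificate out of the exported .pfx
   openssl pkcs12 -in testcert.pfx -clcerts -nokeys -out testcert.pem

   # show its subject and issuer, to match against the CA file on the server
   openssl x509 -in testcert.pem -noout -subject -issuer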

Step 5: Configure the Subversion client

The problem turns out to be that the Subversion client doesn’t trust the testcert.pfx certificate I provide. This doesn’t make any sense to me; why would I provide the certificate if the client shouldn’t trust it? It’s the server that needs to know it’s trustworthy, not the client.

Well, it doesn’t matter what I think: the Subversion client isn’t going to authenticate with the server unless it decides that the certificate was issued by a trusted certificate authority. So I had to provide a root certificate and tell the Subversion client where to find it.

I’ll cut to the chase. The configuration file that needs to be edited is C:\Users\myname\AppData\Roaming\Subversion\servers. There’s a commented-out line in it like the following:

   # ssl-authority-files = /path/to/CAcert.pem;/path/to/CAcert2.pem

I uncommented it and changed it to:

   ssl-authority-files = /Temp/intermediateca.cer

(Recall that I exported that file from the Windows certificate store in the previous step.) Then I tried using the Subversion client to do an update, and it finally worked. Unlike the server, it wanted the immediate parent certificate of my client certificate, not the ultimate root.

I still think requiring the root certificate on the client software is odd, but it turns out that Firefox works the same way. If I want to browse my Subversion repository I need to import both the client certificate and the root certificate to Firefox first.

What’s next?

The client is authenticating with the certificate, but it will accept any certificate from VeriSign right now, not just the ones I specifically want. And the repository doesn’t know who is authenticating; the Subversion “blame” listing leaves the user blank. I’ll look into dealing with both those issues in a future post.

July 23, 2011

Pictures from our Arctic Circle Cruise

Filed under: Uncategorized — Charles Engelke @ 12:02 pm
View of Longyearbyen, Norway from above

Overlooking Longyearbyen

Now that I’m home and have decent Internet access, I’ve posted pictures from our Celebrity Constellation cruise to the Arctic Circle. We visited three Norwegian ports north of the Arctic Circle: Leknes (in the Lofoten archipelago), Honningsvåg (near the north cape and the Gjesværstappan bird sanctuary), and Longyearbyen (above, on Spitsbergen in the Svalbard archipelago).

We also visited two Norwegian ports south of the Arctic Circle: Bergen and Ålesund (where we took a bus tour inland along the Path of the Trolls). And we spent several days before the cruise in Amsterdam, from which we took a day trip to the Hague.

It was all beautiful, and now I have an idea of the difference in degree between different arctic areas. All our visits were to seaside areas, but Leknes (at 68° 08′ N) is coastal and lush…

Leknes

… while the North Cape area around Honningsvåg (at 70° 58′ N) is much more sparsely vegetated…

House near Honningsvåg

… and Longyearbyen (all the way up at 78° 13′ N) is very severe, not only lacking trees but really without any plants more than a few inches tall…

Guide at Longyearbyen

… and with polar bears, which is why our guide was armed with a rifle. Unfortunately, we didn’t get to see a polar bear; fortunately, we didn’t get attacked by one, either.

July 5, 2011

Amsterdam Art in the Street

Filed under: Uncategorized — Charles Engelke @ 3:53 pm

Our hotel here in Amsterdam is on Apollolaan, which is part of Artzuid – International Sculpture Route Amsterdam. We were surprised when we got off the tram on the way in from the airport to encounter a man riding a giant golden armored turtle:

Searching for Utopia - Jan Fabre

As we continued the one-block walk, we passed by what seems to be an elephant on stilts:

Space Elephant - Salvador Dali

I really liked this kinetic sculpture. It only runs for five minutes each hour, but we just happened by as it was going:

Heureka - Jean Tinguely

I’ve got a one-minute movie of that sculpture, too. Maybe I’ll post it, if I ever figure out YouTube.

This afternoon we tried a different kind of Amsterdam art: microbrewed beer from a 300 year old windmill, Brouwerij ‘t IJ:

Brouwerij 't IJ at Windmill

I can confirm that at least three of their beers, especially the Columbus, are also works of art.

July 1, 2011

Relative Performance in Amazon’s EC2

Filed under: Uncategorized — Charles Engelke @ 11:58 am

I’ve been using Amazon’s Elastic Compute Cloud for several years now, but a lot more lately. And one thing that has always confused me is the relative benefits of using Elastic Block Store (EBS) versus instance store.  I’ve seen some posts on this, but they all set up sophisticated RAID configurations. What about some simpler guidance for a regular developer like me?

Well, I don’t have the answers, but I have a little bit of new data. I’m updating a site that starts by loading a 1.25GB flat file into MySQL, then creating three indexes, then traversing that table to create a second, much smaller table.  Dealing with those 10 million rows is pretty slow, so I decided to see what difference it made using EBS or the instance store. While I was at it I tried different size machines. The results, shown in minutes to complete the task, are summarized in the table below:

   Size         EBS   Instance store
   t1.micro     635   n/a
   m1.large      56    66
   m2.xlarge     47    49
   m2.4xlarge    42    40
   c1.xlarge     49    49

The t1.micro machine size is available only with EBS storage, and it got about 90% of the way through (it finished creating all three indexes) and then died.

This seems to show that (for this kind of operation) EBS performed noticeably but not enormously better than the instance store, but the difference shrank as available memory increased. Also, “larger” machines didn’t help much once there was enough memory available. Not surprising, since this is a single-threaded operation.

June 7, 2011

Web Resilience with Round Robin DNS

Filed under: Uncategorized — Charles Engelke @ 11:42 am

I haven’t been able to find a lot of hard information on whether using round robin DNS will help make a web site more available in the face of server failures.  Suppose I have two web servers in geographically separate data centers for maximum robustness in the face of problems.  If one of those servers (or its entire data center) is down, I want users to automatically be directed to the other one.

At a low level, it’s clear that round robin DNS wouldn’t help with this.  In round robin DNS you advertise multiple IP addresses for a single name.  For example, you might say that “mysite.example.com” is at both addresses 192.168.2.3 (the server in one data center) and 192.168.3.2 (the server in the other data center).  But when a program tries to connect to mysite.example.com, it first asks the network API for the IP address for that name; that lookup returns just one of those IP addresses, and the program uses it to connect.  If that address happens to be for an unavailable server, the program’s request to the server will fail, even if there is a healthy server at one of the other addresses.

Of course, if you write the client program, you can make it work in this situation.  You’d have your program ask the network API for all of the IP addresses associated with a name, and then your program can try them one at a time until it gets something to connect.  But in the case of a web site, you haven’t written the client program.  Microsoft or Mozilla or Google or Apple or Opera or some other group wrote it.
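
If you did control the client, the retry logic is only a few lines.  Here’s a sketch in Node.js-flavored JavaScript (the host name is a placeholder, and a real client would also want timeouts):

   var dns = require('dns');
   var http = require('http');

   // Resolve every A record for the name, then try each address in turn
   // until one of them answers.
   function fetchFromAny(hostname, path, done) {
     dns.resolve4(hostname, function (err, addresses) {
       if (err) { return done(err); }

       function tryNext(i) {
         if (i >= addresses.length) { return done(new Error('all servers failed')); }
         var req = http.get(
           { host: addresses[i], path: path, headers: { Host: hostname } },
           function (res) { done(null, res); }
         );
         req.on('error', function () { tryNext(i + 1); });  // that one was down; move on
       }

       tryNext(0);
     });
   }

   // usage: fetchFromAny('mysite.example.com', '/', function (err, res) { ... });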

Wouldn’t it be great if those web browsers all worked that way?  Wouldn’t it be great if you could find clear indications that they worked that way?

As it happens, it appears that they all do work that way, even though I have found it very hard to get clear confirmation that they are supposed to.  I’ve found a few web pages that talk about browsers performing “client retry”, but not any kind of specification or promise.  I’ve found many more pages saying to forget about using round robin DNS for this, and to use a load balancer or some other kind of proxy to distribute web requests to available servers.  The problem with that is that you now have a new single point of failure (the load balancer) at a single location.  It can be made very reliable, but can still fail and leave your users unable to connect.  You can change your DNS entry to point to a new location, but that takes time to propagate (even longer than your DNS server says it should in the case of internet service providers who cache those addresses more aggressively than they should).  There are routing protocols to force traffic for a specific IP address to a different location, but they’re too complicated for me and require a lot of low level routing privileges that we can’t expect to have.  No, round robin DNS, with clients smart enough to try each address if they need to, would be a real help here.

Since I couldn’t get clear indications that this would work where I need it to, I set up a simple experiment to see how web browsers respond in this situation.  I created web servers in Amazon’s Virginia and California regions, each returning a single web page.  The one in Virginia returns a page saying “Eastern Server”, and the one in California returns a page saying “Western Server”.  I then set up a round robin DNS entry pointing to those two IP addresses.

I opened the web page for the round robin name in the Chrome web browser, and got the page saying “Eastern Server”.  I then shut down the web server that hosts that page, and refreshed the page.  It instantly changed to a page showing “Western Server”.  Which is exactly what I want!  So I checked other web browsers, and every one I could easily check worked the same way:

  • Chrome 11 on Windows 7
  • Firefox 4.0 on Windows 7
  • Internet Explorer 8 on Windows 7
  • Opera 11 on Windows 7
  • Safari 5 on Windows 7
  • Internet Explorer 7 on Windows XP (after noticeable delay)
  • Firefox 4.0 on Windows XP (after noticeable delay)
  • Android native browser on Android 2.3.3
  • iPhone native browser on iOS 4.3.3

Except on Windows XP the refreshes after the server was shut down were apparently instantaneous.  Buoyed by this success, I tried lower level clients.  They worked, too!

  • curl on Windows 7
  • curl on Linux
  • wget on Linux
  • Python with urllib on Windows 7

Wow.  Maybe the operating systems were doing this, not the clients?  No.  wget was talkative, and reported that the connection attempt failed on one IP address and that it retried on another.  And Chrome’s developer tools Network tab showed the same thing: a request failing, and then being repeated with success the second time.  Also, I was able to find an HTTP aware client that did not work this way: Perl with LWP::Simple on Windows 7.

So my conclusion: round robin DNS is not certain to always cause a web browser to fail over successfully when one of the servers is down, but it is very likely to work.  If you want reliability over geographically separate server locations it seems like a good way to go.  When you discover a server is down you should fix it immediately or update your DNS to no longer point to it, but until that happens, most of your users will continue to be able to connect and use your site via one of the other servers.

[Update] When I got home from the office, I tested a few more web clients:

  • Logitech Revue Google TV
  • Chromebook CR-48
  • Samsung Galaxy Tab 10.1 running Honeycomb
  • Nintendo Wii
  • Amazon Kindle

It worked in every case but two. Make that every case but one and a half. The Wii browser reported an unavailable page when refreshed with the previously displayed server down.  A second click on the refresh button did cause it to switch to the live server, which I call at least a partial success.  But the Kindle failed completely.  Turning off the server it had connected to and then refreshing the browser got a message about an unreachable page, no matter how many times I clicked the reload button.

So if you’ve got a mission critical web application that you offer through round robin DNS, be sure to tell your Wii users to hit refresh a second time if there’s a page failure.  And warn them to not rely on the Kindle’s web browser (which, to be fair, is still marked as an “Experimental” feature).

May 9, 2011

Handling Large Data

Filed under: Uncategorized — Charles Engelke @ 8:48 pm

This is my final session at Google IO Bootcamp this year, and the one I know least about going in.  We’re working with larger datasets and trying to get meaning from them, so there’s a lot of potential for us here.

This is the only session I’ve been in that wasn’t packed.  There are plenty of folks here, but there are empty seats and nobody sitting on the floors.  I’m sure people don’t find this as sexy as Android, Chrome, or App Engine.

We’re starting with the setup, which is pretty complicated.  We have to sign in as a special Google user, then join a group, then download and unpack some files, then go to some web pages…  And I’ve done all that.  Now I guess I’m ready.

We start with Google Storage for Developers, which is geared toward developers, not end users.  You can store unlimited objects within a whole lot of buckets, via a RESTful API.

We do an exercise where we create a text file, upload it, and make it readable to the world.  Then we fetch our neighbor’s file.

Next on to Big Query.  Which, for me, is a disaster.  Getting the tools set up and working is a mess under Windows, even with instructions.  And the meaning of the data we’re querying isn’t clear, making the exercise difficult.  But I got a few things to work.

Finally, we’ll use the Prediction API.  As for the exercises, I’ll try each one once then give up if it doesn’t work.  Messing with the installation and configuration takes my attention away from the actual tools.  Well, I think I’ve set it all up; it says it’s running.  I learned a lot of mechanics here, but don’t really understand what’s going on.  It should take about 5 minutes to do the prediction run I’m trying, and then I’ll see if I can make sense of the result.

Well, the result was “error” after 10 minutes of crunching.  I guess I’ll try it again, perhaps from a Linux box, someday.

That concludes IO Bootcamp this year.  All in all, it was well worth attending, even though I already knew some of the material.

HTML5 Development with the Chrome Dev Tools

Filed under: Uncategorized — Charles Engelke @ 6:47 pm

This is the session I’ve been most looking forward to here at IO Bootcamp.  I use these tools all the time, but I know I’m missing out on a lot that’s available.  We will be working with the examples at goo.gl/FFEmd.  That includes a TODO web application, and the slides from today’s talk.

We start with using Chrome to change CSS styles in the app.  I have no sense of design, so I just make things uglier.  But the various transforms and transitions are cool.  I delete items by having them slowly shrink to nothingness.

Moving on to feature detection, seeing what APIs are available to your web page.  The next exercise uses Modernizr to see if the Geolocation API is available (of course, it is in Chrome), and then we add use of it to the JavaScript for the page.  Just edit the script in the dev tools, click Ctrl-S to “save” the changes, and refresh the page.  When you’re done, go to the Resources tab, look at every version of the resource you’ve saved, and right-click to save the modified script to a file on your PC.  Very nice.

Next, profiling.  We use the Profiles tab in the dev tools and start by taking snapshots and comparing their differences.  Then start profiling by clicking on the circle, do some things, and stop the profiler by clicking the circle again.  You get a list of how much time each part of your code was running during the test.  The half second busy loop inserted in the example really eats up a lot of the application’s time.

You can set breakpoints in your code the normal way (by selecting the code to break on), even with conditional breaks.  But you can also set breakpoints on code that handles events, callbacks for XHR, and even on changes to DOM elements.  That last is really going to be useful.  But, things are going fast here and I’m not fully keeping up.  Still, knowing what’s there will let me seek out the details later.

This has been a nice session, even if I didn’t always keep up with the presenter.  I eventually covered it all, and have more tools available to me when I get back to work.

Google App Engine Workshop

Filed under: Uncategorized — Charles Engelke @ 4:44 pm

Our first lab of IO Bootcamp.  We have a dual-screen display up front, Java on the left, Python on the right.

[later]

Well, that kept me busy!  A good introduction leaving no time to blog.  See https://sites.google.com/site/gdevelopercodelabs/app-engine/python-codelab for the labs I did.

Google TV talk

Filed under: Uncategorized — Charles Engelke @ 2:34 pm

The Google TV session is just a talk, not a lab.  Based on a show of hands, it seems that the ratio of Android to web developers in the room is about three to one.  It’s clear that this talk is much more focused on Android development for Google TV, which has not been possible for regular developers yet.  And they aren’t going to announce any way to do it today.  Guess that’s for one of the keynotes at Google IO this week.  The Wednesday session on developing Android apps for Google TV is sure to tell how to deploy these apps.

Google TV isn’t intended to replace your cable connection; it’s to bring new content to your TV.  (Of course, lots of people I know want to replace the cable connection for existing content, and I think that’s inevitable.  But I guess Google doesn’t want to pick a fight with the entrenched providers.  Yet.)

We’re not going to hear about the next version of Google TV, but there are allusions to it, and how it has taken the feedback from the first year to heart.  The current Google TV is like the G1 Android phone – a first generation to prove what does and does not work.

So what’s different in developing for Google TV instead of mobile platforms?  There’s no touch screen, but there is a mouse (or mouse-like device).  There are only two important resolutions (1920 by 1080 and 1280 by 720), and only a landscape orientation.  And large icons, controls, and especially, large font sizes work best.  Don’t overload the users with too much information.  Even though we will focus on developing Android apps for Google TV, the speakers emphasize that Web apps are very often an excellent choice there.

You develop Android apps for Google TV with the same tools as developing for mobile.  You can configure the project parameters and emulators to have the same characteristics as a TV, using version 7 of the Android APIs and HDPI or XHDPI for screen resolution.  The “abstracted density” they recommend is 231 dpi.  The real density is much lower, but you view the screen from much further away so the density appears higher.  And thanks to TV overscan (a holdover from CRTs) you probably won’t see the whole screen, losing perhaps 10% of the screen at the edges.

Google App Engine Overview

Filed under: Uncategorized — Charles Engelke @ 1:23 pm

I’m going to do a little bit of Google IO blogging this year.  Probably not during the main event (there’s too much going on to stop and write about it, and I probably won’t even bring a notebook to it), but during Bootcamp I’ll be doing hands-on exercises and may have time to take a few notes here.

Here at Google IO Bootcamp, my first session is an App Engine overview (or is it AppEngine?).  I’ve used it before, but not for two or three years.  Well, actually my personal web site (engelke.com) is hosted on it, but doesn’t use any of its capabilities.  There should be some new stuff here to learn.

App Engine is a cloud computing offering, specifically Platform as a Service (PaaS).  Other kinds of offerings would be Software as a Service (SaaS) such as Google Apps or Salesforce.com, and Infrastructure as a Service (IaaS) like Amazon EC2 or Rackspace.  I love the idea of PaaS, and really want to find ways to use it, both personally and in my company, but the endpoints (SaaS and IaaS) are easier to get into.

App Engine is getting pretty heavily used.  In fact, it serves 1.5 billion page views per day.  I know one of the Royal Wedding sites was run on it.

You can create an App Engine application using either Python (the first supported language on it) or Java.  I know there are people who have run other languages on App Engine’s JVM (such as JRuby, Scala, Groovy, and others, even Jython if you want to use Python but access Java classes), but that’s too tricky for me to want to mess with.  Personally, I’ve always used Python while trying out App Engine.  It’s not a strong suit for me, but it’s a nice language and a good environment for it.

There’s an Eclipse plug-in for App Engine development, which I have installed, but I don’t much like IDEs.  I prefer a command line and text editor, and that’s how I went through the on-line tutorial to prepare for today.  Actually, I used the GUI Launcher they now offer, but still kept in my preferred text editor.

April 15, 2011

MongoDB Windows Service trick

Filed under: Uncategorized — Charles Engelke @ 12:00 pm

I just spent a lot of time trying to reinstall MongoDB on my Windows 7 machine because I wanted to turn authentication on.  (I don’t really feel a need for authentication in this development environment, but it seems access from outside localhost requires it for Windows.)  Every time I installed the service, it seemed to work fine.  But when I tried to start the service, I kept getting an error message: The system cannot find the file specified.

What file can’t it find?  I could run the server from the command line, why not as a service?

It turns out that the option to install a service apparently reads the name of the executable to install from the command you use to install it.  So, if you happen to be in the same directory as your mongod executable, and use the command:

mongod --install --auth --dbpath "somefolder" --directoryperdb --logpath "somefile"

the service is installed with just mongod as the executable, not mongod.exe.  In fact, you need to run the installation with the fully qualified filename, including the extension.  And put quotes around it if there are any spaces in the path.  In my case, that was:

"C:\Program Files\MongoDB\bin\mongod.exe" --install --auth --dbpath "somefolder" --directoryperdb --logpath "somefile"

That worked.  I’m putting it here because I found hints of this via searching, but they all left at least one piece I needed out.
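
A quick way to see the problem (and to confirm the fix) is to ask Windows what executable the service actually points to; a sketch, assuming mongod used its default service name of MongoDB:

   sc qc MongoDB

The BINARY_PATH_NAME line in the output should show the full quoted path ending in mongod.exe; if it shows just mongod, the service was installed with the truncated name.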

March 9, 2011

Typical gray London day?

Filed under: Uncategorized — Charles Engelke @ 4:15 pm

This is not what I was expecting this week!

St. James's Park across from Buckingham Palace

February 19, 2011

Mediterranean Vacation Pics – Italy

Filed under: Uncategorized — Charles Engelke @ 9:01 pm

More catching up on organizing older vacation photos.  Today I started on the pictures from the Mediterranean cruise we took in late 2009.  The cruise went from Rome to Athens, visiting many Greek and Turkish ports, Cyprus, and a full day in Egypt to see the pyramids.  We spent an extra day in Rome before the cruise, and several days in Athens afterwards.

Digital cameras with large memory cards sure have changed vacation snapshots.  Laurie and I apparently took almost 4000 photos over three weeks, so it’s going to take some time to select the reasonably decent ones.  Today I sifted through the shots for Rome, Pompeii, and cruising by Stromboli.

In Rome for a day, we walked through the city by the Trevi Fountain and into the Pantheon, perhaps my favorite building:

Interior of the Pantheon

Most of the day we spent walking through the ancient Forum:

Roman Forum

Ancient Roman Forum

The cruise’s first port was Sorrento, and we went on a tour to Pompeii:

Pompeii

We had a day at sea on our way to Greece and Turkey, and cruised by Stromboli, one of the few active volcanoes in Italy near Sicily:

Stromboli volcano

February 16, 2011

IE9 and Web Apps

Filed under: Uncategorized — Charles Engelke @ 12:15 pm

Yesterday, Paul Rouget, a Mozilla tech evangelist, wrote a blog entry stating that IE 9 is not a “modern browser”.  Not long after that, Ed Bott tweeted that the post was “surprisingly shrill”.  Several folks (including me) responded that the post made important points, and Bott asked for specific examples of real web sites that used the HTML5 features that IE9 is missing.  (I’m using “HTML5” to refer not only to the language itself, but also to the new APIs related to it.)

That’s hard to do, especially in a tweet.  If the most widely used web browser doesn’t support these features, even in its upcoming newest release, how many mainstream sites can use them?  They’ve been added to the HTML5 spec because there are strong use cases for them, and when users have browsers that support them sites can start taking advantage of them.  Of course, there are some sites that use these features, but Bott specifically said he didn’t want to hear about pilots or demos, which excludes a lot of them.

There’s a chicken and egg problem here.  We can’t make heavy use of HTML5 features in web sites unless web browsers support them, and Ed Bott seems to be saying that the upcoming version of IE9 doesn’t need to support them because they aren’t yet widely used.  That kind of problem is part of what stalled HTML and browser advances ten years ago.  The WHAT WG didn’t accept that, and pushed for what became HTML5.  I think that Google was a major help because it had the resources to improve browsers (first with the non-standard Gears plug-in, later with their standards-based Chrome web browser) in order to be able to develop more sophisticated web applications.  Their experimental ChromeOS devices like the CR-48 show that Google is still very interested in the idea that the browser can be an application platform, not just a viewer of web sites.

For me, IE9 is most disappointing because it fails to implement many key HTML5 features that are essential to building good web apps.  (I use “web apps” to mean platform independent applications that live and run inside a modern browser, including many mobile browsers.)  Yes, IE9 makes a lot of advances and I appreciate them all, but some of what it leaves out is essential and does not seem nearly as hard to implement as some of what they included.  Consider some use cases that I actually encounter.

In a traditional web browser no data persists in the browser between separate visits to a web page.  If I want to start working on something in my web browser and then finish it later, the browser has to send it to a server to remember it, and when I revisit the page in the future it has to fetch that information back from the server.  But what if I don’t want to disclose that information to the server yet?  Maybe I’m preparing a tax form, and I don’t want to give a third party a history of all the changes I’m making as I fill it out, I just want to submit the final filled-out form?  In a traditional web browser I can only do that if I perform all the work during a single page visit.

If only the browser could store the data I enter within the browser, so I could come back and work on the form over multiple visits without ever disclosing my work in progress.  Actually, HTML5 (and related technologies) lets you do that.  Web storage (including local storage and session storage), indexed database, and the file system API can each meet that need.  (So can web SQL databases, but that approach will likely not be in any final standard.)  Of these solutions, only web storage is widely available today.  It’s on all major current browsers, including IE8 and IE9.  Good for IE.
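
To make the web storage case concrete, saving the work in progress takes only a couple of calls (a sketch; the element id and key name are made up):

   // gather the in-progress values (the field name here is made up)
   var draft = { income: document.getElementById('income').value };

   // keep it in the browser; nothing is sent to the server
   localStorage.setItem('taxFormDraft', JSON.stringify(draft));

   // later, even in a new browser session, pick up where we left off
   var saved = JSON.parse(localStorage.getItem('taxFormDraft') || '{}');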

Now, suppose I want to work on my tax form and I don’t have an internet connection.  The data I need is in my browser, so shouldn’t I be able to do this?  If my web browser supports application cache, I can.  Every major web browser supports this, and most have for the last several versions of them.  Except for IE.  Not only does IE8 fail to support this, so does IE9.  If I try to work on my tax form in IE9 I’ll just get an error message that the page is unavailable.  Even though all the functionality of my tax form program lives inside the web browser I can’t get to it unless the server is reachable.  That’s a problem for an app.  This is my biggest disappointment with IE9, especially since application cache seems like a pretty easy extension of the caching all web browsers, including IE, already do.
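
And application cache really does ask very little of a page.  The page opts in with a single attribute (the file names in this sketch are placeholders)

   <html manifest="taxform.appcache">

and the manifest itself is just a plain text list of the resources to keep available offline:

   CACHE MANIFEST
   taxform.html
   taxform.js
   taxform.css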

But you might ask, so what?  This is a web app, and it’s not that big a problem if it only works when the server can be reached.  After all, it’s going to have to talk to that server sooner or later in order to submit the tax form.  But let’s switch to a different use case.  Suppose I want to do some photo editing.  The HTML5 canvas API gives me a lot of ways to do that.  I gave some talks last summer on HTML5 techniques and built an application that could resize photos and convert color photos to black and white or sepia toned.  The whole example took less than an hour to do.  This is an application that doesn’t need to ever talk to a server except for the initial installation.  It’s something that I could use on my machine with any modern web browser, so I can write it once and use it everywhere.  There are two big challenges for this application, though: getting photos into the code in my browser, and then getting the edited photos back out.

There’s no way to do that in an old-fashioned web browser.  If I’ve got a binary file on my PC and want to get it to the code in the browser, I have to use a form to upload that file to a server.  My browser code can then fetch it back from the server.  It goes through the browser to get to the server, but is inaccessible to code running inside the browser.  With the HTML5 File API, I no longer have that restriction.  I can select a file with a form and the code in the browser can directly read that file instead of sending it to the server.  That’s how I get a photo into my application.  Every current major browser supports the File API except for IE and Opera.  And Opera might add it in their next version (they haven’t said), but IE9 won’t have it.
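
Reading a local photo with the File API looks roughly like this (a sketch; it assumes a file input and an img element with the ids shown):

   document.getElementById('photoInput').onchange = function () {
     var file = this.files[0];          // the File the user just picked
     var reader = new FileReader();
     reader.onload = function (e) {
       // e.target.result is a data URL; show it without any server round trip
       document.getElementById('preview').src = e.target.result;
     };
     reader.readAsDataURL(file);
   };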

Once I’ve edited the photo I need to get it back out.  What I need is either an img element (so the user can right-click and choose to save the image) or a simple link that user can click to download the image.  The problem here is that for either of these methods to work, the photo has to be in a resource with a URL.  How do I get it there?  In an old fashioned web browser, the code in the browser would send it to a server, which would save it and make it accessible at some specific URL.  Once again, my browser ends up having to send something to a server so that the browser code and browser user can share something.  With a Data URL, I can create a resource with a URL inside the browser so that no server is needed.  Data URLs are a lot older than HTML5 and have been supported in all major browsers.  However, until recently IE limited their size so much as to make them not very useful.  IE9 does allow large Data URLs, though.  Again, good for IE9.
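
And getting the edited photo back out is a single call on the canvas (a sketch; the element ids are made up):

   var canvas = document.getElementById('editor');
   // pack the edited pixels into a data: URL the user can right-click and save
   document.getElementById('result').src = canvas.toDataURL('image/png');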

So, for these use cases we need four key technologies: persistent storage in the browser, offline access, reading files, and creating resources and URLs for them in the browser.  Every modern web browser supports all of them (assuming the next version of Opera adds the File API).  IE9 supports only half of them, and can’t serve either use case.

That’s one reason we should not consider IE9 to be a “modern browser”.

February 13, 2011

Alaska Cruise Pictures

Filed under: Uncategorized — Charles Engelke @ 7:05 pm

Last weekend I did our taxes.  This weekend I organized photos from the Alaska cruise we took in July and August 2009 and posted selected ones on my Picasa web albums page.

View in Skagway

Morning in Skagway

They’re organized by port; we visited Ketchikan, Skagway, Valdez, Seward, Kodiak, Hoonah, and Juneau.  There are also photos from our day in Glacier Bay, and the Princess Cruise’s Chef’s Table dinner during a day at sea.

Glacier Bay from the deck

Overlooking Glacier Bay

One truly bizarre thing about this cruise was that Laurie and I were among the most active folks on the ship.  We went ziplining in Ketchikan and Juneau:

Zip lining

Zip lining near Ketchikan

Rock climbing near Skagway:

Rock climbing

And hiked on a Glacier near Valdez:

Glacier hike

There was one other passenger along with us for one ziplining outing, one couple for the other zipline, and that passenger and couple for the rock climbing.  The glacier hike was better attended, though.

February 9, 2011

Source Control Basics, by Example

Filed under: Uncategorized — Charles Engelke @ 3:52 pm

Many non-developers understand the value of source code and realize that a source control system such as Subversion is extremely important, but don’t really understand how it should be used.  To a lot of people, it’s just a safe used to lock up this important asset.  But really, it’s a much more valuable tool than just a safe.  I’m going to try to describe how it can be used to aid release management, support, and maintenance of products by example.  These examples use Subversion, but the general principles apply to most source control systems.

Core principles

Subversion doesn’t manage each file individually; it works on an entire directory tree of files at a time.  That’s a good match for source code.  If you start with an empty Subversion repository, you can check it out to a working copy on your own computer, and then start adding your source files and directories to that working copy.

  • repository: the area on a Subversion server where every version of your source code directory tree is stored.
  • working copy: a local folder on your computer where the version of the source code you are working on is kept.

Whenever you want, you can commit your working copy to the repository.  In effect, Subversion stores a snapshot of your source code forever.  You can get a log showing every version that was ever committed, and you can check out a working copy of any version you want, at any time.

  • commit: make the Subversion server keep a snapshot of the source code that matches your current working copy.
  • check out: create a new working copy from any desired snapshot that Subversion has available.  Usually this is based on the latest snapshot, but doesn’t have to be.

Subversion simply numbers each version, or revision, sequentially, so you’ll see versions 1, 2, 3, and so on.  I recently noticed that one of our six year old projects is up to revision twelve thousand and something.  That means that on average, a new snapshot was saved once each business hour over the life of the project.

Before I move on, there are two more points to mention.  First, you don’t have to check out and commit the whole repository at a time.  You can work with any subdirectory you want.  That’s good for dividing up different kinds of work in a project that have little interaction, and it enables the management techniques I’ll be talking about in a minute.  Second, you can’t really commit “whenever you want”.  You can only commit if nobody else has changed the same files you changed since your last checkout.  Otherwise, you need to do another checkout first, and possibly manually resolve any conflicts between your changes and the other folks’ changes.  That sounds like a potential problem to a lot of people (including me) but in practice it works great.

Handling a Release

When you’re ready for a release, all you need to do is note the version number you’re building and packaging from.  That way, if you need to get that exact code back for support or maintenance, it’s extremely easy.  But it could be even easier.  Since you can work on subdirectories of your repository instead of the entire thing, just structure it a bit differently.  Don’t put your source code at the repository root, but in a subdirectory.  That subdirectory is conventionally called the trunk.  To do this, when you first create the repository immediately create a subdirectory called trunk.  Then instead of ever checking out the whole repository, just check out the trunk subdirectory.

The advantage of this is that you can now create a directory sibling to trunk, which will contain copies of all your releases.  By convention, this directory is called tags.  When you are ready to release your code, you copy the entire trunk directory tree to a new child of the tags directory.  Let’s say this release is going to be for 2.1beta2.  Then your repository will look something like:

Repository
   |
   +--trunk
   |    |
   |    +--your latest source tree
   |
   +--tags
        |
        +--2.1beta2
              |
              +--snapshot of trunk contents at time of release
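
Creating that tag is a single server-side copy.  With the command line client it looks something like this (the repository URL is a placeholder):

   svn copy https://svn.example.com/repo/trunk https://svn.example.com/repo/tags/2.1beta2 -m "Tag the 2.1beta2 release"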

Don’t worry about the storage needed to keep this new copy.  Remember, Subversion already needs to keep track of every version of your source tree, and it’s smart enough to store this new “copy” of a snapshot using almost no actual storage.  But even if it needed to use up enough space for a whole new copy, it would be worth it.  Storage is plentiful, and anything that helps you manage the history of your product’s source is priceless.

  • trunk: the subdirectory of your repository containing the current version of your source code (and every prior version, too).
  • tags: the subdirectory that contains other subdirectories, each of which is a copy of a particular version of the trunk.  Each subdirectory should have a meaningful name, and should never be updated (Subversion allows you to check out and update tags, but you should not do it).

Software Maintenance

Everything up to now is useful, important, well-known and widely followed.  But the next step, using source control for more effective software maintenance, seems to be less used, even among seasoned developers I’ve observed.  That’s a shame, because it’s easy to do and a big win.

Suppose you released your software a few weeks ago, and now a user reports a bug.  How are you going to fix it?

You could use your current working copy of the trunk, find the problem, fix it, and then do a build and package from that working copy.  Wait!  You’re using tags now, so you create a new tag that’s a copy of the trunk, and then build and release from that tag.

What’s wrong with that?  Well, your new release doesn’t contain a fixed version of the old release; it contains a fixed version of your trunk.  And that trunk has probably had all sorts of changes made to it in the weeks following the release that contained the bug.  It probably has some new errors in it.  It may even have partially finished new functions and other changes in it.  Even if you work hard to make every build green (passing all tests), you are risking pushing out new errors as you fix the old one.

What you should do instead is make the fix to the exact code you released (which is available in the tag).  Then you’ll know that the only changes between the prior release and your new corrected release are those needed to repair the reported problem.  New functions, restructured code, and other changes that you need to make in the trunk won’t affect the bug fix release.

We want to keep each tag frozen, representing exactly what we released.  Sure, we could update it and remember to go back to the proper version when we need to, but it’s a lot easier to avoid problems if tags aren’t changed.  So we handle maintenance using branches.  A branch is pretty much like a tag, except that it is generally a copy of a tag rather than of the trunk, and it is intended to change.

  • branch: a copy of a tag, kept under a repository subdirectory conventionally called branches.  Each branch will be updated as needed to make fixes in the release represented by its tag.

Specifically, you will create a subdirectory of the repository called branches, then copy the 2.1beta2 tag to a subdirectory of branches.  Say you call it 2.1beta2-maintenance.  Next, you will check out a working copy from that branch and do your programming work on it to fix the bug.  As you work on it you commit your changes, and when everything is ready, copy the latest version of the branch to a new tag, perhaps 2.1beta3 (or even 2.1beta2-patch1).  Build the new release from that tag and send it to your users.  You’ve fixed their bug with the least possible chance of creating new problems that didn’t already exist in their release.
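
A sketch of those steps with the command-line client (again, the URLs are placeholders):

    # create the branches area and branch the released tag
    svn mkdir -m "Create branches" http://svn.example.com/repos/myproject/branches
    svn copy -m "Branch 2.1beta2 for maintenance" \
        http://svn.example.com/repos/myproject/tags/2.1beta2 \
        http://svn.example.com/repos/myproject/branches/2.1beta2-maintenance
    # fix the bug in a working copy of the branch
    svn checkout http://svn.example.com/repos/myproject/branches/2.1beta2-maintenance fix
    cd fix
    # ...edit, test...
    svn commit -m "Fix the reported bug"
    # tag the corrected release and build from that tag
    svn copy -m "Tag release 2.1beta3" \
        http://svn.example.com/repos/myproject/branches/2.1beta2-maintenance \
        http://svn.example.com/repos/myproject/tags/2.1beta3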

Merging Fixes

There’s just one big problem.  The next time you do a new feature release, from a tag copied from the trunk, your fix won’t be in it.  You did all the work on a branch, instead.

Subversion (and other, similar tools) makes it easy to solve this problem, too.  You can get a report showing every single change you made on the branch, and then use that report to make the same changes to the trunk.  In fact, Subversion can even make the same changes for you.  This isn’t just copying the changed files from the branch to the trunk, because each of them may have been changed in other ways while you were working on the branch.  Instead, it looks at what was changed on the branch (delete these lines, add these others) and makes the same changes to the trunk.  With luck, the trunk hasn’t diverged so much that the same changes won’t fix the problem there, too.  But if it has, so what?  You’re a developer, and using your head to figure out how to make the same effective changes without messing other things up is one of the things you’re being paid for.
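
With a reasonably recent Subversion client (1.5 or later, which tracks merges for you), a sketch of that looks like this; the URL is a placeholder:

    # in an up-to-date working copy of the trunk
    cd myproject
    # see exactly what changed on the branch
    svn log http://svn.example.com/repos/myproject/branches/2.1beta2-maintenance
    # replay those changes onto the trunk working copy
    svn merge http://svn.example.com/repos/myproject/branches/2.1beta2-maintenance
    # review the result, run the tests, then commit the merged fix
    svn commit -m "Merge the bug fix from the 2.1beta2 maintenance branch"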

Some people worry a lot about the potential duplication of effort in making a fix on a branch and then having to recreate the same fix on the trunk.  But in reality, this rarely requires any thought at all; the automated tools handle it well.  And when they don’t, it’s still not very hard to make the fix in both places.  This approach to branching and merging works much better than making the whole team roll back their work in progress, or freeze their changes, while you make a fix.  And it’s one of the biggest wins of using source control.

Summary

Source control tools like Subversion help you keep on top of exactly what source code went into each and every release.  Used properly, they also give you a way to make maintenance fixes with the least possible risk of new problems or errors creeping in.  They cost little or nothing to buy, and require very little effort to run, support, and use.  There are a lot of other ways they help developers, too (comments on the reason for each revision, seeing what was changed at the same time, and knowing who did what if you have a question).  For a manager who wants to know how the team can deal with fixes for multiple releases in an efficient and safe way, understanding tagging, branching, and merging as described here is essential.

Last Day at StrataConf

Filed under: Uncategorized — Charles Engelke @ 11:28 am
Tags: ,

It’s been almost a week since StrataConf ended, but I’ve been busy recovering from the travel and catching up.  Before I forget too much about the last day, though, I want to get my notes down here.

The day opened with a bunch of short “keynotes” again, just like Wednesday, and they were of highly variable value (also just like Wednesday).  Ed Boyajian of EnterpriseDB presented nothing but straight marketing material, a commercial that I think influenced no one.  But DJ Patil of LinkedIn gave a very interesting talk focused on hiring extremely talented people and helping them do their best work, and Carol McCall of Tenzing Healthcare gave a talk that was not only interesting but inspiring, about how to start fixing the mess our country has made of healthcare (video here).

The day was shorter than Wednesday, but still pretty long, ending at about 6:00 PM.  I felt the sessions were, overall, weaker than on Wednesday, but they closed extremely strong.  The panel on Predicting the Future, chaired by Drew Conway, with short talks from Christopher Ahlberg, Robert McGrew, and Rion Snow followed by discussion, was fantastic.  The format of short talks to set the stage for the panel worked great.

All in all, StrataConf was eye opening to me.  I had very little background in using data these ways, and now I feel ready to explore much more deeply on my own.  Many of the presentations and some videos are available online, and they’re worth a look.  And if you ever get a chance to attend a talk by Drew Conway, Joseph Turian, or Hilary Mason, I recommend you take it.  They each have a lot of interesting things to say, and they’re very good at saying them.
