Charles Engelke's Blog

July 20, 2010

OSCON Cloud Summit

Filed under: Uncategorized — Charles Engelke @ 1:31 pm

It’s my first day at OSCON, and I’m starting off with the Cloud Summit.  This post isn’t going to be real reporting or notes on the entire session.  Much of the material isn’t new to me, so I’m just going to record things new to me that I want to be able to remember and find more about later.

The introduction is by Simon Wardley, giving background and introducing the speakers and topics we’ll see the rest of the day.  He compares Cloud Computing to the evolution of electricity as a utility, and references Douglas Parkhill’s 1966 book The Challenge of the Computer Utility.

Mark Masterson is talking about cloud computing and the enterprise.  Risk is the likelihood of failure times the cost of failure.  Business has almost totally focused on the first term in that equation, trying to reduce the likelihood of failure, because it viewed the cost of failure as more or less fixed.  But it’s not, and cloud computing is one thing that can reduce that cost.  If you focus on reducing the odds of failure, you have to lock in your decisions very early, and can’t explore some options.  If instead you accept a higher chance of failure and pursue options that have a low cost of failure, you give yourself more options.  And you manage your risk just as well as the other way.  (This sounds a lot like agile development methodologies to me; don’t try so hard not to fail, instead fail early and cheaply and correct your path as you go.)

Subra Kumaraswamy of eBay is talking next, about Security and Identity in the Cloud.  He starts with a slide showing a frightened manager, surrounded by scary orange dangers of using cloud computing: Compliance, Rogue Administrator, Governance, and many others.  But this is followed by a slide of a focused and contented developer, surrounded by soothing green virtues of cloud computing: iterative in hours, self service, empowered developers, and more.  The rest of his talk is focused on one key area: identity management.

On to John Willis.  “Cloudy Operations.”  In a cloudy world, the prime constraint should be the time it takes to restore your data.  Because you can rebuild all the infrastructure on demand, even in case of a disaster.  But you can’t be up and running until your instance data is live.

“Cloud Myths, Schemes and Dirty Little Secrets” by Patrick Kerpan is up next.  The focus so far has been on making the physical virtual: Citrix on the desktop, VMware on the server, or Amazon EC2.  The dirty little secrets: ephemeral versus persistent images.  Ephemeral images disappear, but persistent ones are too easy to treat like metal; errors accumulate, and the history of changes is lost.  If you have a lot of data in the cloud, you can’t get it out quickly.  State is the monster (sounds like the last talk).

What is generally happening around us now is the beginning of the long, slow migration of the traditional enterprise to agile infrastructure; whether public, private or hybrid cloud.

The last talk before the break is called “Curing Addiction is Easier” by Stephen O’Grady.  Everybody has an API, but what about interoperability?  He mentioned the Delta Cloud initiative (or maybe that’s δ-cloud), which is a layer on top of the various vendor APIs, for exactly that reason.  No one has portability, but it’s not really a problem.  Most businesses want a solution now, and will defer worrying about tomorrow if they can solve their problem today.

That concludes the Cloud Scene theme for the day.  Not a lot of new stuff for me, but it’s supposed to set the scene and get us all on the same level.  Between the break and lunch we’ll see some Future Thinking.

May 18, 2010

Google Chrome Extensions 101

Filed under: Uncategorized — Charles Engelke @ 11:48 pm

My second Google IO Bootcamp session, presented by Antony Sargent.

I hope I can find his slides on-line. They are based on this HTML5 presentation that’s been floating around. At least, it looks like it.

Chrome Extensions are written using HTML, CSS, and JavaScript. HTML5 already gives you many more capabilities, even without writing extensions. Native code is coming; it runs in the browser’s sandbox. But that’s not what we’re talking about now.

There’s a gallery of extensions at chrome.google.com/extensions. In general, you can post any kind of extension there. No review process unless your code uses certain restricted APIs (in which case you’ll have to sign something indemnifying Google).

Extensions are easy to install. Click on a link and download.

Writing extensions is “Like writing a website” (where all your users have a really fast, standards-compliant browser). But they add some simple APIs for things a regular web page can’t do.

  • bookmarks
  • history
  • i18n
  • processes
  • tabs/windows

Start an extension with a manifest.json file:

{
   "name" : "Sample manifest file",
   "version" : "1.0",
   "permissions" : [...],
   "description" : "My first extension",
   "icons" : {
      "48" : "icon-48.png",
      "128" : "icon-128.png"
   }
}

Permissions is an array of permissions you need. For example, you can say you want to use cross-site Ajax requests.
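For instance, a manifest that needs the tabs API and cross-site Ajax requests to a particular host might declare something like this (the host is a placeholder):

"permissions" : [
   "tabs",
   "http://*.example.com/"
]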

You develop extensions right in Chrome itself. You need a text editor, nothing else special.

Enter the URL chrome://extensions/ (or use a menu choice to get to Extensions manager). Click on “Developer mode” to get buttons for development. Put your work in a single folder, and point to it here.

“Isolated Worlds”.  Your extension’s content script can see the page’s DOM, but not the page’s own JavaScript variables.

Introducing the background page.  It isn’t displayed anywhere, but it is “rendered” by the browser in a hidden page.  This is where you’re going to register listener functions, etc.  There are APIs for the background page and your content script to communicate with each other.
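Here’s a minimal sketch of that message passing, using the messaging API that current versions of Chrome expose on chrome.runtime (the action name is made up, and the background page would need the history permission):

// In the content script: ask the background page for something only it can get.
chrome.runtime.sendMessage({ action: "getRecentHistory" }, function (response) {
  console.log("background replied with", response.items.length, "items");
});

// In the background page: listen, do the privileged work, and reply.
chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
  if (msg.action === "getRecentHistory") {
    chrome.history.search({ text: "" }, function (items) {
      sendResponse({ items: items });
    });
    return true;  // keep the channel open for the asynchronous response
  }
});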

Much of the API is asynchronous.

Experimental APIs need to be enabled with command line options when launching Chrome.

We went very fast and covered very detailed items without enough time, but detailed documentation is all on-line. Seems easy for web developers. I’m not sure why I want to do this, though.

Android 101: Writing and Publishing Android Applications

Filed under: Uncategorized — Charles Engelke @ 6:43 pm

I’m attending the first IO Bootcamp at this year’s Google IO. It’s just starting, and my first session is Android 101, presented by Roman Nurik from the Android Development Team. These are my very rough notes.

This talk is about writing your app and then publishing it (hence the name).

Writing your App

  • platform features
  • surfacing your UI
  • intents (“awesome” pay attention)
  • speech, location, sensors
  • native development

Publishing your App

  • registering
  • targeting
  • buying and selling
  • other distribution

Android is an open platform, and open source. So is the SDK.

Mostly Java and XML (C/C++ is available for native development with the NDK).  Apps don’t run native Java bytecode; the Java bytecode is further compiled for the “Dalvik” VM.

You can replace the core apps with your own.  See http://source.android.com. Almost all (the speaker thinks all) apps are using only public APIs.

More than 60,000 Android devices are shipping each day.

There are many different devices with different screen sizes, resolutions, and pixel densities. Grid of 9: small, normal, large screen by low, medium, high dots per inch.

As of May 17th (prior 14 days), almost nothing with small/low dpi, all the rest with normal screen size. About 2/3 medium pixel density, 1/3 with high density.

An app is a collection of several components, defined in AndroidManifest.xml:

  • Activities (the app’s screens)
  • Services (background programs)
  • Content Providers (the types of content the app provides)
  • Broadcast Receivers (the types of things the app listens to)

Surfacing your UI

Launcher icons, status bar notification, widgets (very little code, mostly just a simple UI), quick search box (since 1.6), live folders and live wallpapers.

Concurrency

Users can multitask. Their apps get paused, not closed.

Background services are invisible apps with no GUI, but unobtrusive event notifications.

Intents

Pay attention! Intents “link” activities, services, and receivers.

Consist of

  • an action (e.g., ACTION_VIEW)
  • categories (e.g., CATEGORY_DEFAULT)
  • a URI (e.g., content://contacts/people/123)
  • “Extras” metadata
  • Alternatively, an intent can hard-code a class name (com.foo.FooActivity).

Example: press a button.  The handler creates an intent using a URI and launches an activity with it.  The system looks for everything that can handle that intent: “Do you know how to edit a contact?”  (The answer is in each app’s manifest.)
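A sketch of what that handler might look like, using the action and URI from the bullets above:

import android.content.Intent;
import android.net.Uri;

// Inside an Activity: build an intent from an action and a URI,
// then let the system find whatever can handle it.
Uri contact = Uri.parse("content://contacts/people/123");
Intent view = new Intent(Intent.ACTION_VIEW, contact);
startActivity(view);  // matched against intent filters in app manifests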

Intents are both intra- and inter-app.

Example apps: WHERE and OpenTable. WHERE finds a restaurant. You can click “reserve now” in WHERE, and it will fire an intent to OpenTable to make the reservation. Note that WHERE only shows this button if there’s some app installed that can handle the intent.

“Shortest introduction to the important parts of Android” that he’s ever done (less than 30 minutes).

Speech Input

  • Voice to text, processed by Google and returned to the device (and app).
  • Fire an intent to use it.  Also, every text input box gets this automatically with the microphone icon.
  • Understands English, Mandarin, and Japanese.

Location and Mapping
The LocationManager service determines location and bearing.  An app can register for periodic updates, throttled by time and distance.
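A minimal sketch of registering for updates (the provider and thresholds are illustrative, and the app would need the location permission in its manifest):

import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

// Inside an Activity: ask for GPS fixes at most every 30 seconds or 100 meters.
LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 30000, 100,
    new LocationListener() {
        public void onLocationChanged(Location loc) {
            // use loc.getLatitude(), loc.getLongitude(), loc.getBearing()
        }
        public void onStatusChanged(String provider, int status, Bundle extras) {}
        public void onProviderEnabled(String provider) {}
        public void onProviderDisabled(String provider) {}
    });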

Google Maps library

Be sure to target the SDK with maps enabled.

Example: Places directory. Shows you distance and compass bearing. Then can view in Map View, which is very similar to the Google Maps applications.

Hardware and Sensors

  • Camera
  • Microphone
  • Accelerometer
  • Compass

Examples: Layar, Google Goggles

Native Development Kit (NDK)

  • Use in conjunction with the SDK (I guess, not “instead of”)
  • Performance critical code
  • C/C++
  • Also re-use existing code

developer.android.com is the place to get everything. Code, information, it’s all there.

  • Download the SDK
  • Install Eclipse and the ADT (Android Development Tools) plug-in
  • Look through tutorials and samples
  • Run them on the emulator or a real device

Strong developer community.

Publishing your app

Register at market.android.com/publish/ to use the market. Has a nominal fee. ($25 when I checked just now.)

Takes only a few minutes from uploading your app until it’s available to all.

“Don’t update too often because then your end users will be pretty pissed.”

Targeting Options:

  • Device capabilities (minimum and maximum SDK level, screen size)
  • Location
  • Target countries
  • Operators (if applicable, and generally discouraged): matched against the user’s service provider

Only certain countries for sellers (9 including US, UK, Japan, some EU).  4 currencies only ($, pounds, euro, yen).

Only a dozen countries for buyers, who must have Google checkout account.  Prices in seller’s currency. Then confirmation, then estimated price in buyer’s currency, then choose payment method.

Can also distribute via USB, or downloading from any website.

Application is in a .apk file, can put it on any site. Users visit the URL of the .apk file, and Android will install it (if “Unknown sources” is checked in device settings).

Very high level overview. I want to understand the specific mechanics of intents a lot better.

April 26, 2010

WS-REST 2010

Filed under: Uncategorized — Charles Engelke @ 10:56 pm

I attended WS-REST 2010 today, the First International Workshop on RESTful Design.  It was a one day affair, co-located with WWW2010, in Raleigh, North Carolina.  I haven’t been to an academic conference in decades, pretty much since I left academia, and it was an interesting experience.  It was somewhat different in tone and focus than the kinds of conferences I usually attend, giving me a different perspective on the material.  It was quite worthwhile.

Craig Fitzgerald and I actually wrote a short paper that the workshop accepted.  I presented it today, on Replacing Legacy Web Services with RESTful Services.  My presentation slides are posted at the conference program page.  The paper itself will be available at the ACM Digital Library, though I can’t find it there yet.  There will be a charge to download it for non-ACM members; since ACM owns the copyright, I can’t post it myself.  However, as I read the copyright terms, it seems that I can provide copies on request.  If you’d like one, post a comment here or e-mail me at restpaper@(this web page’s domain), and after I check with our firm’s legal department to make sure it’s okay, I’ll send you a copy.

April 23, 2010

Browser Performance

Filed under: Uncategorized — Charles Engelke @ 8:13 am

I was talking to a colleague the other day about the performance of web software and I mentioned that there was a big difference in how fast different browsers were. I just dug up some recent benchmarks and figured they’d be of interest.  I’m particularly interested in how extremely high-end JavaScript intensive applications perform in different browsers.

First, IE 6 versus 7 versus 8. There’s not much recent, because benchmarkers are focusing on IE 8. But I did find an article from last year at PC Games Hardware. It shows IE 6 and 7 performing about the same, but IE 8 being nearly twice as fast.

Tom’s Hardware is one of the most careful sites about reviews, and its recent article is excellent. On JavaScript performance, one benchmark shows Google Chrome as 30 times faster than IE 8. Firefox is nearly 5 times faster than IE 8. On others, the Chrome:IE speed ratio was about 6:1, 3:1, and 10:1. The Firefox:IE ratios on those tests were about 3:1, 1:1, and 6:1. On tests involving the DOM we see Chrome:IE of 5:1 and Firefox:IE of 4:1. For CSS the benchmarks show Chrome:IE of 12:1 and Firefox:IE of 2:1.

A (possibly self-serving) set of tests by the Opera browser company shows results mostly consistent with Tom’s Hardware.

A less detailed but very good summary of a bunch of tests is available at Six Revisions. Click on the chart to see it better. The JavaScript speed, DOM selection speed, and CSS rendering speed results probably best reflect how high-end intensive web applications will perform.

April 9, 2010

Apple’s Control-Freakiness

Filed under: Uncategorized — Charles Engelke @ 9:15 am

With the new iPhone SDK 4.0, Apple puts even tighter control on users and developers.  It shouldn’t be that surprising.  Woody Allen showed us 40 years ago what happens when the idealistic and charismatic revolutionary finally wins:

April 7, 2010

iPad Out-of-Box Experience

Filed under: Uncategorized — Charles Engelke @ 7:07 pm

I just got an iPad for trying things out at work.  I’m excited about it, especially as a web client.  But the out-of-box experience is pretty lousy.  I’m surprised I haven’t seen others talking about this.  It’s been over half an hour, and I’m still not actually using the thing!

First off, I opened the box, unwrapped the iPad, and pressed the Home button.  What happens?  The screen shows a sync cable and says “iTunes”.  Then I looked at the instructions in the box, and find out that before the thing can do anything at all, I have to sync it with iTunes.  Which is ridiculous.  Hasn’t Apple ever heard of the cloud?

So I download iTunes.  All 93.8MB, taking about 15 minutes.  (My home Internet connection is much faster than that; I guess Apple’s bandwidth is swamped.)  Of course, to download it I had to check that I agreed to all the terms and conditions.

Okay, now to install it.  I have to agree to more terms and conditions to do that, but finally it installs.  Even though I downloaded the 64 bit version, the installer says it’s going to install into Program Files (x86), where 32 bit programs go, instead of Program Files.  And though it installed something in the right location for 64 bit programs, it put a lot of stuff in the 32 bit directory.

Finally, iTunes is installed and running.  I’ve got to sign in to my (existing) iTunes account, requiring me to agree to still more terms and conditions.  Now I can finally connect the sync cable.

Hooray!  The iPad wakes up, I can run the web browser (which nicely prompts me to connect to my home network, pretty easily), and I’m in business.  So now I head to the App Store on the iPad to get the Kindle and Netflix apps.  But first I’m prompted to download the iBooks application, which I agree to.  It starts downloading, I search for Kindle and get it started, then search for Netflix.

At which point I’m prompted to agree to new terms and conditions for the iTunes store.  I’m now on panel 1 of 58 of those new terms and conditions.

Oh, well.  Eventually I’ll probably be able to use this new gizmo.  I just had to drop a note here about the tremendous failure Apple has here.  This is really ridiculous; nobody but Apple could get away with it.

March 29, 2010

Skype Minimize Annoyance

Filed under: Uncategorized — Charles Engelke @ 1:19 pm

Skype just installed an update today that changed its behavior on Windows 7.  You can’t close the Skype window without completely shutting Skype down.  So you end up with the Skype icon in the system tray (where it belongs, so far as I’m concerned) and another icon in your Windows 7 task bar (where it has no business being when I’m not actively using it).

This isn’t a big deal.  Except it’s been severely annoying me all day long.  I tried fiddling with all sorts of Skype options and searching for Skype help, all to no avail.  Finally I used what I should have tried first: Google.  And near the top of the first results page there was a blog post from My Digital Life that fixed it immediately.  In a nutshell, just run Skype in compatibility mode for an earlier version of Windows (I used Vista SP 2).  It will start working the way it always has, and the way I think it always should.

February 28, 2010

One Year with Kindle

Filed under: Uncategorized — Charles Engelke @ 12:51 pm

I got my first Kindle a year ago this week.  It arrived on February 24, 2009.  Here are some statistics about my book-buying (and book-reading) habits in that year.

I “bought” 176 Kindle books at a cost of $873.43 during that year.  I put “bought” in quotes, because a lot of the books, especially at first, were free or nearly so.  62 of those books cost $0.00 each.  Another 9 cost $1.00 or less.  I read only five of the free books, and four of the under $1.00 books.  Another six of the free books, and the other five under $1.00 books, were books I’d read in the past and figured, hey, maybe I’ll want to read them again someday, and this is a good deal.

From here on out I’ll leave out the free books I was never seriously interested in.  I’ll also leave out the 6 non-free books my wife bought for her Kindle that I didn’t read.  (She’s read books I’ve bought and vice-versa, but for this post I’ll just treat everything I bought or read by itself.)  That leaves 112 books bought for $817.01, for an average price of $7.29.  The Kindle itself cost $359.00; amortized over just the first year, it added $3.21 to the cost of each book.  Of course the Kindle’s still going strong, and new ones are cheaper, so the actual cost of the hardware per book should be considered to be much lower.

The most common price I paid for a Kindle book was the famous $9.99 that’s always talked about.  41 books cost me that much; slightly more than a third of my purchases.  I paid more than that for 9 books: one was $15.83, three were $14.27, and the others ranged from $11.20 to $13.73.  Most of those books are now $9.99 or less (after all, many are now out in paperback), but two have gone up about $0.50 each.  The majority of books I bought were less than $9.99; they averaged $6.13.  Most of the Kindle books (80% or more) were fiction.

I’ve been pretty price-sensitive for the Kindle books.  I’ve noticed that a lot of the prices over $9.99 fluctuate from day to day, so I’d watch and buy them only when they dropped enough for me.  It’s not that I’m not willing to pay more for a Kindle book, but I’m not willing to pay more than I think they should cost.  Which, despite what publishers and many authors insist, is significantly less than a paper book.  I’d say about a third less than a hardback.  (And that’s a third less than the price I actually pay for a hardback, not the list price.)  Say about 40% of the list price of a hardback.  I could go up to about 50% of list if they’d get rid of DRM and offer books in formats I could use on any device.  Why not more?  Well, I generally pay 65% of list for a physical book, and not having to print, handle, and ship the book should result in savings passed on to me.

What about other books I bought during that year?  Not counting gifts or books for work, I seem to have bought approximately:

  • 3 novels in Kindle compatible format for a total of $14.99, an average of $5.00 per book.  I wanted to buy all three from Amazon, but the publishers didn’t offer them that way.  One was bought directly from Baen books for $6.00 (I would gladly have paid $9.99), one from Fictionwise for $8.99, and one downloaded from Cory Doctorow’s personal site for free (again, I would have gladly paid).
  • 21 physical books from Amazon for $325.06, average price $15.48.  Some of these were special editions from small presses (Subterranean Press and University of Chicago Press) not otherwise available, one (Donald Westlake’s last novel) was sentimental, a couple were remainders cheaper than the Kindle editions, two just weren’t available in Kindle, and the others (travel and comics) had a lot of images not well suited to the Kindle.
  • 3 physical books bought from overseas because they weren’t yet published in the United States.  It’s a pain to look up their prices, but I recall them being about $30 each, with shipping included.
  • 14 technical books from Manning and Pragmatic Programmers, all in electronic format, for $332.86, an average of $23.78.  About half these books were both on paper and electronic, the rest only electronic.
  • A handful of paperbacks bought on impulse while traveling.

So, putting it all together, I bought around 158 books and spent about $1600, or $10 per book.  If you add in the cost of the Kindle itself, I spent about $2000 on books during the year.  And contrary to what some people think, I completed more of the books I bought on the Kindle (at least 95% of them) than the ones on paper, even if you only count fiction.

I have become very reluctant to read a novel in paper form, especially hardback.  They’re too heavy while reading and inconvenient to carry on trips.  If a book isn’t available on the Kindle, I’m very unlikely to buy it now.  Minotaur Books, you should take note.  You’ve lost a few sales to me I would have bought in hardback a year ago because you won’t offer Kindle editions.

January 31, 2010

Macmillan vs. Amazon (vs. Authors and Readers)

Filed under: Uncategorized — Charles Engelke @ 1:55 pm

Amazon.com and Macmillan publishing are playing hardball with each other.  As of right now, Amazon isn’t selling any of Macmillan’s books in any format.  This is very bad for writers and readers, and has drawn a lot of attention.  There have been several good posts at John Scalzi’s blog, and a great one at Charlie Stross’s blog.  The New York Times Bits blog and Wall Street Journal have also covered this, but not with much clarity or depth.  Macmillan has publicly commented, but so far Amazon has not.

Most of the articles and comments (other than Charlie Stross’s post) say that this is about e-book prices.  That doesn’t seem to be quite correct.  True, Amazon wants e-books to sell for about $10 and Macmillan wants them to be about $15, but that’s not what has triggered this extreme tactic.  It seems that Macmillan has offered Amazon a choice: sell our e-books as an agent instead of a retailer, or else wait 7 months after hardcover release to sell them at all.  It seems implicit that Macmillan will not allow Amazon to continue paying hardcover wholesale to retail the e-books right away (which is what they do now).

Those are both terrible options for Amazon. If they’re just one of many agents, all selling the exact same product for the exact same price, how do they beat their competition? Smoother payment and delivery systems help, but they’re not enough. And if e-books are delayed for 7 months, they’ll just plain fail as a product.

Amazon’s retaliatory move is extreme and is hurting a lot of bystanders. I don’t support it, but I do support their position that Macmillan’s offer (demand?) is unacceptable. Publisher-fixed pricing or 7 month delays would be bad for Amazon, bad for customers, and (I believe) bad for authors. It might even be bad for the publishers; businesses aren’t always great judges of how future market changes will affect them.

I think Amazon understands the e-book marketplace much better than Macmillan.  I even think they understand the traditional book marketplace better.  Let the market sort this out to see who is right.  Macmillan wants to take the market partly out of the equation: no retailers involved in pricing at all.  There are countries that work that way for physical books, and I think it works poorly there.  It protects incumbent publishers and stifles innovation.  Macmillan’s just wrong here.

Update: Macmillan has now directly posted a statement on Tor.com.  Among many other issues, they state that they are trying to encourage a business model that “encourages healthy competition” and is “stable and rational”.  Later, it states that the disagreement is about “the long-term viability and stability of the digital book market”.  [Emphasis mine.]

A market with healthy competition is not stable.  A stable market doesn’t have healthy competition.  Macmillan is an incumbent and wants to stay that way, comfortably.  Amazon’s something of an incumbent itself, but it seeks change, not stability, because it has confidence that it can win in a changing world.  I favor Amazon’s point of view.

January 22, 2010

Firefox 3.6 Tabs

Filed under: Uncategorized — Charles Engelke @ 10:39 am

Firefox 3.6 puts newly opened tabs just to the right of the current tab, instead of all the way to the right of all open tabs.  I can’t really think of why it matters one way or the other.

Except, it really bugs me.  For no good reason.

It’s not at all clear what you can do about this.  I found out, though, and I’m noting it here.  In the address bar, enter about:config, and click to accept any warnings you see.  This gives you a list of very specific options you can adjust to change your browser’s behavior.  Scroll down to browser.tabs.insertRelatedAfterCurrent and change it from true to false.  Just double-clicking on the option’s name will make the change for you.

The warnings for this configuration page are a bit overblown.  Every option that has been changed from the default is shown in bold, so if you make changes and don’t like how things came out, you could always go back and double-click on every bold option until they’ve all gone back to their defaults.

December 16, 2009

Hooray for FedEx! (Boo for me.)

Filed under: Uncategorized — Charles Engelke @ 10:09 pm

I was being so efficient.  I had to ship a defective monitor back to the manufacturer and get a suit altered, and do them both today to meet deadlines.  Last night, I folded the suit on top of the monitor box so I wouldn’t forget either of them.  This morning I left the monitor at our front desk where FedEx picked it up, and after work I went to the tailor.  Where I tried on the suit.

Or tried to try it on.  I had the jacket, but not the pants.

Where were the pants?  I finally figured it out.  I must have packed them with the monitor and shipped them off to Philips electronics.

Yes, I am an idiot.

But FedEx saved the day.  I called their 800 number, and they got on the phone with the local depot that was processing the box at the start of its journey.  They found the box, pulled it into their office, and waited for me to come by, open it, and get my pants.  Then they resealed the box and sent it on its way.

They were incredibly helpful and cheerful, and acted like this was no big deal.  But they went way out of their way for me when they didn’t have to.  I’m really impressed, and I’m not going to forget.  FedEx is getting my shipping business, such as it is, from now on.

October 20, 2009

curl Cheat Sheet

Filed under: Uncategorized — Charles Engelke @ 9:36 am

A lot of what I do lately involves creating or consuming RESTful web services.  (By the way, Leonard Richardson‘s and Sam Ruby‘s book by that name is fantastic.)  One really nice thing about RESTful services is that you can do many of the operations with a simple web browser.  But not all: web browsers are happy to do GET requests and, with a little effort, POST requests, but none of the other HTTP verbs.  If you’re trying to do anything significant with a RESTful web service, sooner or later you’re going to need to perform PUT, DELETE, and HEAD requests, too.

That’s where curl comes in.  It’s an open source command line tool that can perform just about any kind of HTTP request operation.  It certainly can do everything I ever need to try in testing a RESTful web service.  But every time I use it I have to look at the help page for it, which is incredibly long, because I can’t remember all the command line switches I need.  So I’m writing up a cheat sheet here, with the switches I need to use.

The basic form of a curl command is curl -X verb [options] uri, as in:

   curl -X GET -D headers.txt http://example.com/some/path

The command above will perform an HTTP GET request for the URI http://example.com/some/path.  It will store the response headers in the file headers.txt (that’s what the -D headers.txt option makes happen) and will send the actual response body to standard output.  You can redirect standard output to a file if you want to save the response, or if it’s not simple text (for example, if you want to GET a photograph).

The HTTP verbs that I use with curl are GET, PUT, POST, DELETE, and HEAD.  The command options that I use most often are:

-D filename
Save the response headers in the file filename.
-i
Send the response headers to standard output, along with the response body.
--basic --user username:password
(There are two hyphens each before basic and before user.)  Authenticate the request with HTTP Basic authentication, with the specified username and password.
-H "header: value"
Set a request header named header to the specified value. Note the double quotes, which cause your command shell to pass the entire header specification to curl as a single string.  You can specify multiple -H options to set multiple request headers.
-T filename
Send the contents of the specified file as the body of a PUT request. Useful for uploading files.
-k
Ignore SSL certificate problems. Very useful when talking to a development server with a self-signed certificate. Otherwise curl will refuse to connect to it.
-h
For help. Show all the possible command line options.
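
Putting several of these together, a typical authenticated upload looks something like this (the credentials, file, and URI are placeholders):

   curl -X PUT --basic --user alice:secret \
        -H "Content-Type: text/plain" \
        -T notes.txt \
        -D headers.txt \
        https://example.com/some/resource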

There are a bunch of other switches, but these are the ones I use all the time. You may need some others. For example, if you use web services that expect form data via HTTP POST, you’ll want to learn about the -F option.

October 9, 2009

Great ThinkPad Service, Lousy UPS Delivery

Filed under: Uncategorized — Charles Engelke @ 3:08 pm

Cory Doctorow put up a glowing post about Lenovo’s ThinkPad warranty service yesterday.  I’m not surprised.  His experience is similar to the ones I’ve had with ThinkPad service over many years.  And, as it happened, I was using ThinkPad warranty service when I saw his post.

Unlike Cory, I didn’t pay for on-site service, just regular “depot” service.  But they still did great.  My ThinkPad just went dead Tuesday evening.  In more than 15 years of using them, I’ve never had a similar problem before.  But I phoned them up to get it fixed.  They actually listened to my description of the diagnostics I’d already performed, believed that I was describing them accurately, then spent about three more minutes having me answer a couple of questions and trying one more thing.  They diagnosed the problem as a bad system board needing replacement, and said they’d send me a box to ship the unit in.

This was Tuesday evening.  Wednesday morning, the empty box arrived.  I put my ThinkPad (without the hard drive) in it and called UPS to get it.  They picked it up that afternoon.  Thursday morning the Lenovo website said my unit was being repaired, and Thursday afternoon, that it had been fixed and shipped back to me.  Friday morning I waited for our receptionist to tell me that UPS had delivered it.

That service turnaround speed is fantastic!  Or, it would have been, if my PC had ever shown up.  I called the ThinkPad support line and they tracked the UPS package and found that an “exception” had occurred.  I looked it up myself.  Here’s what it said:

THE RECEIVER IS ON VACATION. DELIVERY WILL BE ATTEMPTED WHEN THE RECEIVER RETURNS

I’m on vacation?  Shouldn’t I know about that?  Why am I in the office, then?

I checked with our receptionist, who said that UPS had shown up early today but didn’t have any packages to drop off.  UPS and Lenovo agreed that the address label had the right address on it, except that the zip code was off by one (which is still in the same town).  UPS said they’d already fixed that zip code.  So what happened that led to that exception?

What didn’t happen was what they claimed in the tracking log: that they tried to deliver the package to the correct address only to be told that I was on vacation!  My guess is that the driver may have put the package in the wrong part of the truck (due to the zip code) and discovered it after he’d already visited our business.  And then put that stuff about a “vacation” down as an excuse for not having delivered it on time.  This isn’t the first time I’ve had UPS miss a delivery date and put down some completely bogus reason in the tracking log.  It may be the driver doing it, it may be someone in a back office.  But there definitely are UPS staffers who make up false reasons when they don’t make a delivery they should.

UPS refused to make it right by getting the package to me today like they should.  Instead, I can wait until Monday and go without my PC a few more days, or drive over a hundred blocks to their facility in the boondocks to get it myself.  I’ll probably do that, but I’m not happy about the wasted time and money on my part.  UPS is doing nothing at all to make it right.

As I said above, this kind of thing has happened to me before with UPS, maybe about one time out of every twenty or thirty deliveries.  That’s not a big percentage, but it’s a lot higher failure rate than I’ve had with FedEx.  I’ll never use UPS when I have a choice, and I’ve already made a complaint with Lenovo suggesting that they should reconsider using UPS, too.

September 24, 2009

More Code Signing

Filed under: Uncategorized — Charles Engelke @ 2:57 pm

An update to my last post: you can use signtool with a certificate in the Windows certificate store; it doesn’t have to be in a file.  In the command line, instead of specifying a file to use with the /f option, specify part of the certificate’s subject name with the /n option, as in:

signtool sign /n "Part of subject name" hidden.exe

Since the certificate comes from the certificate store rather than a pfx file, you don’t need the /p (password) option here; that option goes with /f.

The advantage of this is that some trusted corporate administrator can install the certificate on each developer’s PC, marked as “not exportable”.  Then the developers can use the certificate to sign code on that PC, but can’t take a copy of the certificate elsewhere with them.  It’s not a perfect solution, but it seems a good compromise.

September 3, 2009

Click-once, administrative access, and code signing

Filed under: Uncategorized — Charles Engelke @ 2:05 pm

We need our users to run some legacy software on their own computers in order to work with a service of ours. They do this with one small function of a large software package. But we want to update the behavior of that function pretty frequently, without forcing the users to keep downloading and installing updates for the whole package. Enter Microsoft’s ClickOnce technology.  It’s a very nice way to give users a near-web experience but with near-native software capability.

ClickOnce programs are launched from a link on a web page, self install (and can self update), but run in the .Net environment under Windows.  Unlike web pages, they can access Windows APIs to do whatever you really need.  Which is a potentially big security problem: just clicking on a link could really open your computer to anything, so Windows restricts what ClickOnce programs can do, and makes sure that users approve anything at all risky.

Some of the user approval requests can be very scary; we want to configure things to make those notices as clear and unalarming as possible.  Signing code helps placate Windows.  The alert window borders change from angry orange to soothing baby blue, the wording does less warning and more asking whether to allow something, and Windows says that it knows who the code comes from, instead of just an unknown or unverified publisher.  The concept of code signing isn’t too hard to follow, but I found the mechanics difficult, in part because the consequences of different scenarios aren’t well described; I had to try them all and see what happened.  This post describes what eventually worked, with a few asides about what didn’t.

Application Structure

The user starts with a web page that does most of the work that’s needed.  But there is a step that must be performed on his or her own PC, for security reasons.  So the web page contains a link to a ClickOnce application to do that work.  The ClickOnce application needs to run a legacy, console mode native Windows application to do the work.  ClickOnce applications are not allowed to do that under any circumstances, so far as I can tell, so the ClickOnce application launches a hidden .Net application that then launches the legacy native application.

It all looks kind of like this:

[Diagram: web page → ClickOnce application → hidden .Net application → legacy native application]

Issues

When the user clicks the link in the web page, he or she may get a security alert before the program is downloaded, installed and run.  It depends on the exact version of Windows, user privilege level, security policies, and probably the phase of the moon.  In any case, if the alert is shown we want it to be as unthreatening as Windows will allow.

The hidden .Net application has to be built with a manifest specifying that it has to run with Administrative rights in order to launch the native application.  When it starts, Windows might display a really scary alert because of that.  Again, we want to make this as benign as Windows will let it be.

Curiously (to me) starting the native application doesn’t trigger any alerts, even though that’s got the most potential to cause trouble.  I guess it’s because the hidden .Net application already got permission to escalate to the highest possible privilege level. After that, Windows must figure that anything goes.

The ClickOnce Application

When it’s just built and deployed normally, the first time the user runs it he or she sees something like this:

[Screenshot: the security warning shown for the unsigned ClickOnce application]

In Visual Studio 2008 (even the Express editions) you can right click on the project name and select Properties to get a tabbed page about the project.  The Signing tab lets you “Sign the ClickOnce manifests”.  You’ll need a certificate, but Visual Studio will offer to create a test certificate for you to use.  However, it has no effect on this warning.  Even though the code is now signed, the certificate wasn’t issued by a valid certificate authority, so Windows doesn’t much care.

Now, if you get a paid code signing certificate, things are a bit different.  We got one from Thawte.  In the Signing tab I just clicked the check box for “Sign the ClickOnce manifests” but then, instead of clicking the button labeled Create Test Certificate… I clicked on Select from Store… and picked our paid certificate.  When I run the published application from a web page after that, there’s no warning at all.  That’s what I want.  I’m not as sure as Microsoft that just having a valid certificate is enough to make me trustworthy, but combining that with the execution restrictions placed on ClickOnce applications makes this a pretty safe operation for the user.

The Hidden Application

This is a regular .Net program, not a ClickOnce application.  It will be included as a delivered resource in the ClickOnce application, and that application will launch it from the ClickOnce private storage area.  Launching any application this way might trigger warnings, but this application definitely does.  That’s because the app.manifest for this program has the entry:

<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
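
For context, that entry sits inside the trustInfo block of the app.manifest file Visual Studio generates, roughly like this:

<trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
  <security>
    <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
      <!-- requireAdministrator makes Windows elevate (and warn) at launch -->
      <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
    </requestedPrivileges>
  </security>
</trustInfo>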

When the ClickOnce application launches the hidden application, Windows makes sure that the user knows about it.  I’d love to show you an image of the alert, but in Windows 7 all other software, including the Print Screen key, is disabled while the warning is up.  The alert has an angry orange heading that says “Do you want to allow the following program from an unknown publisher to make changes to this computer?”  The default answer is No, and there’s a Help me decide link to more information that doesn’t provide reassurance.  Well, it shouldn’t be reassuring.  The program has enough privilege to cause big trouble if it wants to.  But our users have been downloading, installing, and running similar programs from us for years, so they’ve already decided to trust us.  Our problem is to keep this alert from scaring them away.

Code signing helps.  Once the hidden program has been signed, there’s still an alert, but it’s not nearly as frightening.  The alert has a baby blue background and says “Do you want to allow the following program to make changes to this computer?”  It lists the program name, and our company name as the Verified Publisher.  If the user clicks Show Details, he or she can examine the certificate itself.  And the Help me decide link displays text that’s more reassuring:

If the program has a verified publisher, it means that the program has a valid digital signature. A digital signature helps ensure that the program is what it claims to be and comes from a reputable publisher. If the program has an unknown publisher, it doesn’t have a valid digital signature from its publisher. This doesn’t necessarily mean the program is harmful—many older, legitimate programs lack signatures.

It not only strongly implies that we are a “reputable” publisher, it also gives the user the idea that legitimate programs lack signatures only if they are “older”.  Since our program is brand new (to the user), that would be taken as another red flag if it weren’t signed.

The Native Application

This isn’t an issue.  When the hidden program launches it, Windows doesn’t show any alerts.  It just works without any special code signing.

How to Sign

As mentioned above, signing the ClickOnce application is very easy.  Get your certificate from a known Windows certificate authority, and install it in your Windows certificate store.  The procedures for doing those first two steps vary from provider to provider.  But once it’s in the local certificate store, you can sign your code automatically with every build by doing the following steps once:

  1. Right-click on the project name in the Solution Explorer pane, and select Properties from the context menu.  (This is not at all the same properties that show up as a child of the project in solution explorer.)
  2. Click on the Signing tab.
  3. Check the box Sign the ClickOnce manifests.
  4. Click the Select from Store… button, find your purchased certificate in the list, select it, and click OK.
  5. (Optional) Fill in the Timestamp server URL with a value provided by your certificate vendor.

I have to take it on faith that step 5 above will do anything.  The problem it’s designed to solve is what happens when someone runs your ClickOnce application after your certificate expires.  So long as the application was built and signed before the expiration, the signature should still be considered valid and Windows should remain happy to run the program.  But how can it know that it was signed then?  A trusted timestamp server can be used to sign your code with an added timestamp.  That way, Windows can know that the signature was applied while the certificate was still valid.  The only way I can see for myself that using a timestamp server makes a difference is to create one build with it and one without, and then wait a year for my certificate to expire.  Well, I could play with my system clock, but for all I know, Windows checks that value over the network, too.

Signing the hidden .Net application is a little harder.  The Signing pane in Visual Studio says it’s just for ClickOnce.  Depending on what you do here, you might prevent Visual Studio from building the program at all (because it will think it’s a ClickOnce application, and they aren’t allowed to request administrative privilege), or it might just have no visible effect at all.  There’s a check box to Sign the assembly instead, which seems like it’s what I want, but in my tests it either didn’t work (complaining that the key file I chose was already installed, so I couldn’t use it) or had no visible effect.  So I used the command line signtool that’s available as part of the Windows Platform SDK for Windows 7 (or any of the ones for other recent versions of Windows).

To sign a program with signtool you need your certificate and private key in a pfx file, not the certificate store.  If you specified that the key should be exportable when it was installed in Windows then you can export it from the store directly to the needed file.  Otherwise, you’ll have to follow instructions from your certificate provider.  For example, Thawte delivered the certificate I was using as two files: mycert.spc and privatekey.pvk.  I had to create a pfx from those two files with the command pvk2pfx:

pvk2pfx -pvk privatekey.pvk -pi oldpassword -spc mycert.spc -pfx signcert.pfx -po newpassword -f

In this command, oldpassword is whatever password the certificate issuer places on the pvk file, and newpassword is anything I want to use to protect the new pfx file.

Regardless of where the pfx file comes from, once you have it you can use signtool:

signtool sign /f signcert.pfx /p newpassword hidden.exe

If you want a timestamp, signtool can do that, too:

signtool timestamp /t http://somebigca.com/some/path hidden.exe

You can automate this in Visual Studio:

  1. Right click on the hidden application’s project in Solution Explorer and select Properties from the context menu.
  2. Click on the Build Events tab.
  3. Put the signtool commands in the Post-build event command line box.  You can refer to the executable using the macro $(TargetFileName), or $(TargetPath) for its full path (see the sample below).
  4. (Optional) Add a copy command to the Post-build event command line box after the signtool statements to copy the built executable to the ClickOnce project’s folder.  If you use Solution Explorer to Add this file to the deployment package for the ClickOnce application it will be delivered every time a user updates the ClickOnce application.
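
The Post-build event command line box might end up looking something like this (the pfx path, password, and destination folder are placeholders, and signtool is assumed to be on the PATH):

signtool sign /f "C:\certs\signcert.pfx" /p newpassword "$(TargetPath)"
signtool timestamp /t http://somebigca.com/some/path "$(TargetPath)"
copy "$(TargetPath)" "$(SolutionDir)ClickOnceApp\"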

Loose Ends

Managing the code signing certificates in a multiple developer environment is going to be tricky.  You don’t want a developer who leaves the company to be able to take a usable copy of the certificate with him or her.  The best way I can think of to manage this is to have a highly trusted person install the certificate on each developer’s PC, marking it as not exportable.  Then anyone who logs in to that PC can use the certificate on the PC, but (supposedly) can’t take a copy to a different machine.

That approach won’t work for the signtool examples used for the hidden application above, because they expect the certificate to be in a file, which anybody could copy.  It appears that signtool can directly use a certificate from the Windows certificate store instead of a file, which would reduce or eliminate the chance of a developer copying it.  I may give that a try in a day or two, after I clear my head of this project by working on something else for a while.

August 28, 2009

Poetry

Filed under: Uncategorized — Charles Engelke @ 6:42 am

E-mail from my wife on her drive from Macon to Gainesville:

no service

my phone can’t get a signal it is pouring i am pissed

Hmmm… maybe it would work better this way:

no service

my phone can’t get a signal
it is pouring
i am pissed

Yep, that’s it. She sent it from her Kindle, using Google Mail for domains, which I think was extremely resourceful.

August 25, 2009

QRCode of Code to Generate a QRCode

Filed under: Uncategorized — Charles Engelke @ 1:45 pm

My last post ended with a little bit of Perl to generate a QRCode barcode. Just for fun, I built a barcode with that Perl program as its source. If you’ve got a cell phone with a barcode scanner, you can probably get it to decode the following image back to that sample program:
[QR code image encoding the Perl program from the post below]

QR Codes and Perl

Filed under: Uncategorized — Charles Engelke @ 1:28 pm

Two-dimensional barcodes like QRCode are pretty neat, and after I got my Google Ion phone with its built-in barcode program I went around scanning every one I could find for a while.  But I never really did anything with them until yesterday, when it occurred to me that they might be useful for a project I’m doing.  The project would augment an existing system written in Perl, so I went to see what Perl has in this vein.

It has GD::Barcode::QRcode (which I got for Windows using ActiveState‘s PPM tool and installing Catalyst-View-GD-Barcode-QRcode).  It’s dead easy to use.  Given a string (say, $text), you can just do:

use GD::Barcode::QRcode;
binmode STDOUT;
print GD::Barcode::QRcode->new($text)->plot->png;

Well, it’s that easy if the text is very short. If it’s not, you’ll get a message to STDERR saying something like:

Overflow error. version 1
total bits: 468  max bits: 128

You see, there are different size QRCodes, and the module doesn’t seem to pick the right one for you (even though the documentation seems to say it will). But there are three parameters you can pass to the new method that give you some control over this: Ecc, Version, and ModuleSize. But what do they mean? The documentation isn’t very clear, but I’ve figured a bit of it out. The purpose of this post is for me to document what I’ve learned so I’ll be able to use it again.

The most important parameter (to me, anyway) is Version. Which is a terrible name, because it implies to me sequential enhancements. Actually, it just determines the size of the square barcode. A version 1 barcode is 21 rows by 21 columns, and each successively larger size is four rows taller and four columns wider than the last. So a version 2 barcode is 25 by 25, version 3 is 29 by 29, and so on. (I don’t know why they go up by four. It might just be convention, or it might be something to do with the algorithms involved.) A version n barcode is therefore 4n+17 rows by the same number of columns. The largest legal value of version is apparently 40.

Ecc stands for Error-Correcting Code, and the possible values are ‘M’, ‘L’, ‘H’, or ‘Q’, with ‘M’ being the default. That’s all the module’s documentation says. Huh? How am I supposed to pick? Well, different values result in different levels of error-correcting codes being used. The more robust the error-correction, the less data you fit into a fixed size barcode. And the values used here? Since ‘L’ reduces available data less than ‘M’, which does so less than ‘H’, my guess is that they mean “low”, “medium”, and “high”. ‘Q’ is between ‘M’ and ‘H’; maybe it means “quality”? If you don’t have to worry about the barcode being badly smudged or losing definition in transmission, ‘L’ for low error correction is fine. If you’re going to have it get dirty, scraped up, and badly scanned, go for ‘H’. Or pick something in the middle.

Finally, ModuleSize is just how big the printed blocks are going to be. If you pick 1 for this value, they’ll use a single pixel each in the resulting image file. I found that 8 was a good choice for a PNG on a web page.

The final piece of the puzzle is just how much data you can put into a barcode with specific values of these parameters. This table tells you. So, if I have 256 bytes of binary data, and I have a fairly clean but not pristine printing, scanning, and transmission environment (so I’ll want to at least use M for medium error correction), I’ll need to set the version to at least 12. Here’s the code:

use GD::Barcode::QRcode;
binmode STDOUT;
my $code = GD::Barcode::QRcode->new($binary_data_256_bytes_long,
   {Ecc => 'M', Version => 12, ModuleSize => 8}
);
print $code->plot->png;

May 26, 2009

An “Incident” on the Highway

Filed under: Uncategorized — Charles Engelke @ 9:22 am

Laurie was driving, and I was working on my laptop.  We were on our way down to Sanibel Island in south Florida when suddenly there was a loud bang and a bright flash, and Laurie quickly pulled over to the side of I-75.  We’d hit (actually, we think we’d been hit by) this:

[Photo: the jack shaft that hit us]

That’s apparently a jack shaft from a tractor-trailer; both Laurie (a former Army transportation officer) and the Highway Patrolman agreed on that.  This piece is less than half of what hit us; we broke it, and the other piece is even larger and heavier.  This part is about two feet long and weighs about 25 pounds.

It shredded a tire, cracked all the way through the wheel rim, and because of how it hit the car, made the two passenger side airbags deploy.  It similarly disabled four other cars.  The truck it fell off of apparently kept on going.

Being hit by airbags is no fun.  Four days later my hearing’s still a bit off; the first day I could hardly hear at all.  I’ve got a bruise on my arm where it hit me, and I was really shaken up the rest of the day.  Worst of all, we spent almost all Saturday of our long weekend in Sanibel at the car dealer instead of the beach, and we’ve got to run around to insurance adjusters, dealers, and an audiologist today.  And I’ve got to catch a flight this afternoon.

