Charles Engelke's Blog

July 23, 2010

Fantastic Friday at OSCON

Filed under: Uncategorized — Charles Engelke @ 3:43 pm

Well, they saved the best for last.  In general I found this year’s OSCON to be pretty weak in content, but today has been great.  In particular, Simon Wardley’s keynote was excellent (and clearly ran much longer than scheduled, but well worth the time), and Tim Bray’s talk on Practical Concurrency was the best of the conference.

We will close out with what is sure to be an entertaining talk on the world’s worst inventions from Paul Fenwick, and then be on our own for the rest of the day until our red-eye flight home late tonight.


July 22, 2010

Node.js at OSCON

Filed under: Uncategorized — Charles Engelke @ 8:12 pm

Tom Hughes-Croucher is going to tell us about Node.js, a JavaScript web server.  He starts by offering a doughnut to anyone asking non-awful questions.

Why server-side JavaScript?  Well, first there are a lot of JavaScript programmers.  Pretty much all web programmers use it, because that’s all they have available on the client.  So why not use it on the server, too?  And why write things twice, separately for the server and client sides?  And progressive enhancement is free (close enough to free).

JavaScript runtimes include V8 (Google, in C++), SpiderMonkey (Mozilla, in C++), Rhino (Mozilla, in Java), and JavaScriptCore (Apple, in C++).  V8 is significantly faster than SpiderMonkey (at the moment), but Mozilla is coming back with TraceMonkey.  Google’s success with V8 has sparked a speed war among JavaScript engine and browser builders.

Node.js is a server-side JavaScript process that uses V8.  It runs on anything POSIX-enough. (May be okay on Cygwin on Windows.)  It’s non-blocking and event driven.  It uses the CommonJS module format (we’ll find out soon what that means).  Node is very fast.  It’s almost as fast as nginx, which is all native C and highly optimized.

Here’s some code (I think I got it down right):

var http = require('http');

// createServer takes a callback that runs for every incoming request
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello, World\n');
}).listen(8124, '127.0.0.1');  // the standard example binds to localhost:8124

console.log('Server started.');

There are plenty of packages available for Node.js, which can be installed with NPM, the Node Package Manager.  Which is itself written in JavaScript.

He shows more examples, and explains how to use things.  A very good session.

JavaScript at OSCON

Filed under: Uncategorized — Charles Engelke @ 3:07 pm

I’m starting day 2 here with a session on JavaScript.  First up is Programming Web Sockets by Sean Sullivan, to be followed by a talk on jQuery UI by Mike Hostetler and Jonathan Sharp.

Web sockets are a lightweight way for web servers and clients to communicate instead of using full HTTP.  Think of it as a push technology.  We start with an example of a multi-player game.  There are two specs to learn: the API and the protocol.  As a programmer, we care more about the API, which is how we use the facility.  He gives it all on one slide, which I don’t have time to copy down here.  Basically, instantiate a new WebSocket object, set handlers for its events (onopen, onmessage, onerror, onclose), and put data into it with a send method.  Eventually, call the close method to stop using the web socket.  It does look quite simple.
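From that description, a client would look roughly like this.  This is my own reconstruction, not his slide, and the ws:// URL is a placeholder:

```javascript
// Rough sketch of the browser WebSocket API as described in the talk.
// The url argument (e.g. 'ws://game.example.com/') is a placeholder.
function openSocket(url) {
  var socket = new WebSocket(url);
  socket.onopen = function () {
    socket.send('hello');                   // push data to the server
  };
  socket.onmessage = function (event) {
    console.log('received: ' + event.data); // data pushed from the server
  };
  socket.onerror = function (event) {
    console.log('socket error');
  };
  socket.onclose = function () {
    console.log('socket closed');
  };
  return socket;  // call socket.close() when done with it
}
```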

But how do I program the server side?  That’s more fluid right now; the protocol specification is changing in incompatible ways.  On the browser side, we have support in Chrome, Safari 5.0, and Firefox 4.0.  IE 9?  Still not known.  Apparently (per a tweet from yesterday) Apple used to support web sockets in iOS, but no longer does.  On the server side, there’s an enhancement request in for Apache httpd, but there’s a Python extension called pywebsocket available now.  Django supports web sockets, and maybe some Ruby stuff, too.  Jetty has it.

No actual coding examples, which is a disappointment to me.  We’re finishing early, and have a long gap until the jQuery UI talk (which I think may be pretty full).

It is pretty full, but not standing-room only.

We start with effects, which are pretty ways to show changes on elements.  There’s not much meat here.  Now we move on to interactions, which are more functional.  For example, make an element draggable and attach handlers to various events related to that.  There’s also making a list sortable.
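The kind of thing being shown looks roughly like this.  This is my own sketch, not their slides; it assumes jQuery and jQuery UI are loaded and that the page has #photo and #tasks elements:

```javascript
// My sketch of the interactions demonstrated -- assumes jQuery and
// jQuery UI are loaded, and that #photo and #tasks exist on the page.
function initWidgets($) {
  // make an element draggable, with a handler for the drag-stop event
  $('#photo').draggable({
    stop: function (event, ui) {
      console.log('dropped at ' + ui.position.left + ',' + ui.position.top);
    }
  });
  // make a list sortable by dragging its items
  $('#tasks').sortable();
}
```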

This is all great stuff, but I think I see why I have a hard time getting into it.  There isn’t a grand core idea here, but instead an enormous number of small, focused helper tools.  So a talk like this touches on one and then moves on right away.  Forty minutes of that is hard to stay focused on.  But now we’re getting a more complete coding example.  I don’t know; I like the functionality and appearance, but the necessary coding seems very complex for the examples.

It’s convinced me to use it, anyway.

Android Hands-on at OSCON

Filed under: Uncategorized — Charles Engelke @ 12:37 am
Tags: ,

Well, I really didn’t think Google would do it (again) but they handed out free phones to everyone attending this three hour evening session.  We all got a Nexus One, on AT&T frequencies by default, but you could ask for a T-Mobile one.  It only matters for 3G data; 2G data and voice are the same on both.

So now I’ve received four free phones from Google over the last 14 months.  I’m using one as a phone, two others for development and sharing with co-workers, and now I have to figure out whether I’m going to migrate to this fourth one, or pass it on to a co-worker, and use it just for development.  I’m sure not complaining, though.

I took an Android development class at Google IO in May, but it only lasted an hour and really just gave us time to take an existing app and uncomment code bit by bit to see what it did.  I’m hoping to get a bit deeper tonight.

We’re starting with design principles, not jumping right into programming.  Mobile apps are a bit different from desktop ones, and Google says that they want people to create great ones.  Though I don’t know how much attention folks are paying to the speaker; they’re unboxing and setting up their phones.

Some good UI points: don’t just port your UI from other platforms; your app should behave like other apps on the same platform.  Don’t do positioning and sizing that won’t adjust to different devices.  Don’t overuse modal dialogs.  Of course, DO support different resolutions, handle orientation changes, support non-touch navigation, and make large, obvious tap targets.

[Yes, I swear I’m paying attention.  But I’m also updating my new phone to FroYo!]

Design philosophy: choose clear over “simple”.  Focus on content, not chrome.  Enhance the app through use of the cloud (Yes!).

Show feedback: have at least four states for all interactive UI elements: default, disabled, focused, pressed.

Avoid confirmation dialogs (are you sure?).  Instead, support undo.

(By the way, we’re back on the OSCON_BG wi-fi tonight, and it’s performing very badly again.  Of course, that might be due to everyone downloading FroYo to their phones via wi-fi, since they didn’t come with SIMs.)

Some new UI design patterns: Dashboard, Action Bar, and Quick Actions.  They’ll show them in the context of the new Android twitter app.

This is all good content, but I’m ready for the “hands-on” part.  Let’s build something.

Tim Bray just announced that Android is now shipping 160,000 phones a day, which comes to more than 50 million a year.  So developers should be interested in creating apps for it, and tools to create apps for it.  (Nobody’s mentioned AppInventor yet, though.)

Now we’re going to talk about how to have your app interact with RESTful services.  I missed that session at Google IO (too much competing content there) and I care deeply about this, so I’m really glad to see it here.  First issue: why a native REST client versus a mobile web app?  Basically because web apps can’t do all the things a native app can do.  Yet.  Google’s working on making web apps more and more capable, but there are still things you can’t do on your phone with a web app that you could do with a native app.

First up for REST: how not to do it.  Start up a new thread to talk to the remote server, save results in memory, and then have your main thread take that data out of memory and use it.  Why is this bad?  Well, the OS can shut down the process any time the user navigates away from it (since the device has limited memory).  So if you haven’t finished the fetch, or processed the data fetched to memory, it’s gone and you’ll have to do it over.  Instead, you need to start a service for this, which the OS won’t kill as easily, and if it does kill it, the OS will save and later restore the state.

Now we’ll see step by step how to do that.  There’s a diagram showing the 11 steps involved in having your Activity create and use a service to perform a REST method.  The specific example is getting a list of photos in an album, or saving a new photo to an album, accessed via a RESTful web service.  Some other tips:

  • Can use the new built-in JSON parser in FroYo
  • Always enable gzip encoding.  If the server supports it, it will not only download faster, but use the radio (and hence the battery) less.
  • Run the method in a worker thread
  • Use the Apache HTTP client library

The whole thing starts when the Service receives the Intent sent by the Service Helper and starts the correct REST method.  When the Processor that actually makes the request is done, it triggers a callback, and then the Service Helper invokes the “Service Helper binder callback”.  (I think I know what they mean by that.)  It can queue multiple downloads.

That was how the Service responds to the Service Helper.  And where does the Service Helper itself come in?  It’s what your Activity actually invokes.  It’s a singleton that exposes a simple (asynchronous) API to be used by the user interface.

(Again, good information.  But where’s the hands-on?  I’ve already created the Hello, Android example and pushed it to the new phone, but I want to create something tonight!)

I’ve been to a couple of short sessions on programming Android, and created the baby Hello, World app, but I’m barely keeping up at this point.  I strongly suspect that this example is way too complicated for the majority of people here, who just don’t have any Android context to put it in yet.

Okay, we’ve finally finished the description of how that would work.  Now we’re going to hear how to use the ContentProvider API.  Which is apparently much simpler.  Okay, now we have the background to understand all the stuff the Content Provider is doing for us.  And finally, the third option is to use something called a Sync adapter.  Those are new in Android 2.0+, and they are important.  Use them!  They’re battery efficient, but not instantaneous due to queueing.

Well, we’re moving on to the Android NDK (for native C/C++ code instead of Java).  I no longer believe that there will be any hands-on this evening.  Good information, though.  However, I have no intention of programming Android in C/C++.  Still, there are nice advances.  You can now debug NDK applications on retail devices, starting with Android 2.2.  And we’re now seeing the original Tetris code (from Russia, from a long time ago) running natively on an Android phone using it.  Cool, but I’m still not going to do it myself.

And that’s it.  No actual hands-on, except what I’ve done myself during gaps in the talks.  But still a great session (even setting aside the free phone; and FroYo is cool).

July 21, 2010

OSCON Infrastructure

Filed under: Uncategorized — Charles Engelke @ 5:29 pm

The wi-fi is working great today.  (Maybe it wasn’t offered as wireless-N yesterday?  The access point was just called OSCONBG.)  But now the air conditioning is no good.  For the first time I’m uncomfortable.  It must be over 80.  The temperature isn’t so bad, but the air is completely stagnant.


Filed under: Uncategorized — Charles Engelke @ 4:55 pm

Or is it DevOps?  Regardless, it still looks like an ugly name to me.  I’ve been hearing this word for two days now, and nobody ever bothers to define it or give its derivation.  From context, the word is clearly a combination of Development and Operations, and seems to refer to managing operations for applications deployed in the cloud.

It seems like a good idea to me.  I know I have long found that developers with some operations experience, or at least perspective, really bring a lot to the creation of easily deployed and managed systems.

Mobile Apps with HTML, CSS, and JavaScript

Filed under: Uncategorized — Charles Engelke @ 4:51 pm

This morning’s talk by Jonathan Stark was excellent.  In particular, I’m definitely going to be using jQTouch, and probably will use PhoneGap.  In fact, I think I’d like a version of PhoneGap for Windows – let me deploy a web application inside a native wrapper, with extensions that let me manipulate the machine via JavaScript.

OSCON Begins

Filed under: Uncategorized — Charles Engelke @ 1:43 pm

Yesterday’s Cloud Summit was pretty good.  I didn’t get a lot of detailed, concrete information, but acquired a good overview of what’s going on and how the pieces all need to connect.  But today, the real conference begins.

The morning began with “keynotes”.  I put that in quotes, because they were more like lightning talks.  But since the speakers were mostly executives instead of technical staff, they each needed ten minutes to make their points, not just the five usually allocated to a lightning talk.  The talks were okay, and whenever one dragged it was over soon, but they didn’t add much.  If they skipped them and instead had one or two real, deeply interesting keynotes for the entire conference, that would be a lot better.  As it is, the conference itself doesn’t get started until nearly 11:00 AM.

For the rest of the morning (actually, the middle of the day) I’m going to attend sessions on programming for mobile devices.  First up is Android, the Whats & Wherefores by Dan Morrill of Google.  I’m already somewhat familiar with Android, so may not see much new.  But the second half of the session is about Building Mobile Apps with HTML, CSS, and JavaScript, by Jonathan Stark, and I am very deeply interested in that topic.  I like all mobile platforms, but I don’t want to have to master lots of different technologies.  I think web technologies are mature enough to meet my development needs.

July 20, 2010

Trying OSCON’s Wi-Fi Again

Filed under: Uncategorized — Charles Engelke @ 5:55 pm

Let’s see if things go better this afternoon.  We’re going to have a few debates in the afternoon, starting with one about the importance of open standards for cloud computing.  Sam Johnston of Google starts the debate, speaking in favor of the importance of open standards.  After he’s had 15 minutes to present his case, Benjamin Black will have 15 minutes to make the opposite case.  Then there will be 15 minutes of back and forth, between the speakers and with a “jury” and audience members.

The “for” argument isn’t very interesting to me, because I already agree with what he’s saying.  I need to hear some contrary information when the opposition comes on.  Which is now starting.  Black starts by pointing to the dysfunctional processes often behind defining and agreeing to standards with a Monty Python video (the fish-slapping dance).  Then: what’s important?  Utility.  If it doesn’t solve my problem, I don’t care about standards.  Then interoperability.  Then being free of vendor lock-in (independence).  Those three aren’t all equal.

Some problems don’t need to go past utility.  For example, SQL (in reality, not in theory).  His point seems to be that there is no meaningful interoperability between SQL implementations, yet we still use SQL.  Well, I don’t think I agree with the premise there.  A lack of perfect interoperability doesn’t mean that there isn’t any interoperability!

Suppose something new comes out with massive utility and a lot of imperfection.  People will adopt it rapidly.  Then you get lots of competition and exploration, and lots of “standards” that are all different from each other (think networking in the early days).  Eventually, the different islands begin to interoperate with each other as demanded by their users.  That’s where the cloud is now.  So it’s too early to define what the correct standards should be.

That happens in the next stage: maturation.  That’s where we worry about independence, not earlier.  Successful standards formalize what is already true.  “Standards are side effects of successful technology.” “All successful standards are de facto standards.”

All good points.  But is there nothing in cloud computing ready to benefit from the independence?  His next point is that, even if so, it’s too early.  Because as things become more standardized, the rate of innovation has to drop, and we aren’t ready for that to happen in the cloud.  Very good quote: Standardize too soon, and you lock in to the wrong thing.

Excellent speaker, and I agree with his points.  But not necessarily all his conclusions.  Mainly, I think some cloud issues are more mature than he seems to be saying, and are ready to improve interoperability, and perhaps even independence.  But he makes a great case.

There’s some back and forth and questions next.  It seems that they favor the “against” position.  But it seems that the question has changed a bit over the talk.  Now people are agreeing that a priori standards are bad.  But the question was about whether any standards were needed.

The next debate is on whether Open APIs are enough to prevent vendor lock-in.  George Reese will argue that they are; James Duncan will say that they aren’t.  Of course, the question starts with trying to determine just what would make an API “open”.  But that’s dismissed early on as not the core question.  It seems that the “pro” advocate is arguing against it: even if the APIs are open, if the platform itself isn’t, then you can’t take your top layer and move it elsewhere.

I don’t find this debate very interesting, though.  Nothing really new or useful for me.  But the first debate was excellent.  It’s a good format.

On the plus side, the conference Wi-fi is kind of working now.  It’s not great, but not dead, either.  I notice a lot of non-conference access points are now gone; I wonder if interference, rather than bandwidth, was the major problem.

No more blogging at OSCON

Filed under: Uncategorized — Charles Engelke @ 2:13 pm

I’ve had to switch to my phone because wifi is unusable. They say they have 60Mb/s, which sounds like a lot until you divide it by a few thousand users.

Swype on Android is great, but requires too much attention while trying to listen.

OSCON Cloud Summit

Filed under: Uncategorized — Charles Engelke @ 1:31 pm

It’s my first day at OSCON, and I’m starting off with the Cloud Summit.  This post isn’t going to be real reporting or notes on the entire session.  Much of the material isn’t new to me, so I’m just going to record things new to me that I want to be able to remember and find more about later.

The introduction is by Simon Wardley, giving background and introducing the speakers and topics we’ll see the rest of the day.  He compares Cloud Computing to the evolution of electricity as a utility, and references Douglas Parkhill’s 1966 book The Challenge of the Computer Utility.

Mark Masterson is talking about cloud computing and the enterprise.  Risk is the likelihood of failure times the cost of failure.  Business has almost totally focused on the first term in that equation, trying to reduce the likelihood of failure, because they viewed the cost of failure as more or less fixed.  But it’s not, and cloud computing is one thing that can reduce that cost.  If you focus on reducing the odds of failure, you have to lock in your decisions very early, and can’t explore some options.  If instead you accept a higher chance of failure, and pursue options that have a low cost of failure, you give yourself more options.  And you manage your risk just as well as the other way.  (This sounds a lot like agile development methodologies to me; don’t try so hard not to fail, instead fail early and cheaply and correct your path as you go.)

Subra Kumaraswamy of eBay is talking next, about Security and Identity in the Cloud.  He starts with a slide showing a frightened manager, surrounded by scary orange dangers of using cloud computing: Compliance, Rogue Administrator, Governance, and many others.  But this is followed by a slide of a focused and contented developer, surrounded by soothing green virtues of cloud computing: iterative in hours, self service, empowered developers, and more.  The rest of his talk is focused on one key area: identity management.

On to John Willis.  “Cloudy Operations.”  In a cloudy world, the prime constraint should be the time it takes to restore your data.  Because you can rebuild all the infrastructure on demand, even in case of a disaster.  But you can’t be up and running until your instance data is live.

“Cloud Myths, Schemes and Dirty Little Secrets” by Patrick Kerpan is up next.  The focus has been on making the physical virtual – Citrix on the desktop, VMware on the server, or Amazon EC2.  Dirty little secrets: ephemeral versus persistent images.  Ephemeral images disappear, but persistent ones are too easy to treat like metal; errors accumulate, and the history of changes is lost.  If you have a lot of data in the cloud, you can’t get it out quickly.  State is the monster (sounds like the last talk).

What is generally happening around us now is the beginning of the long, slow migration of the traditional enterprise to agile infrastructure; whether public, private or hybrid cloud.

The last talk before the break is called “Curing Addiction is Easier” by Stephen O’Grady.  Everybody has an API, but what about interoperability?  He mentioned the Deltacloud initiative (or maybe that’s δ-cloud), which is a layer on top of the providers’ APIs, for that reason.  No one has portability, but it’s not really a problem.  Most businesses want a solution now, and will defer worrying about tomorrow if they can solve their problem today.

That concludes the Cloud Scene theme for the day.  Not a lot of new stuff for me, but it’s supposed to set the scene and get us all on the same level.  Between the break and lunch we’ll see some Future Thinking.

July 25, 2008

OSCON Resources

Filed under: OSCON 2008 — Charles Engelke @ 8:27 pm
Tags: , ,

Presentation materials for many talks are now available on the OSCON site.  I expect more will show up in the coming days.  Some conference videos are available, too, and again, I expect more to show up soon.  I can highly recommend Robert Lefkowitz’s keynote video.

Closing day at OSCON

Filed under: OSCON 2008 — Charles Engelke @ 1:44 pm
Tags: , ,

The last day of the conference is a short one, just keynotes (or plenary sessions, I guess) and two technical sessions.  But it’s a strong one.  One of the keynotes was from Tim Bray, who raised all sorts of interesting questions about where programming languages are going.  He didn’t have the answers, just questions.

Bray’s session was the only one that I found really interesting, but lots of the attendees were more interested in Sam Ramji‘s talk.  He’s from Microsoft, and was a target of plenty of pointless Microsoft bashing by “questioners”.  Microsoft is taking good actions now, and the only sensible thing to do is wait and watch its behavior going forward, instead of announcing that it will behave badly no matter what.

The sessions I’m going to are Roy Fielding’s talk on Open Architecture at REST, and Damian Conway’s The Twilight Perl.  I think they’ll be a very strong close to a good conference.

I skipped the last two OSCONs because the one three years ago was practically worthless, but the program looked so good this year that I gave it another chance.  I’m glad I did.

But I still wish they’d have a fuller last day.  It’s short so the west coast folks can catch a late flight home, but the majority of attendees have to fly east, and can’t do that until early tomorrow morning.

July 24, 2008

Closing out the OSCON day

Filed under: OSCON 2008 — Charles Engelke @ 9:31 pm
Tags: , ,

I’m waiting for the evening Perl talks to start, especially Larry Wall’s State of the Onion.  This afternoon I alternated between mostly technical and mostly entertaining talks, though they were all a mix of the two.

I started with Sam Ruby‘s talk about what to expect in Ruby 1.9.  It was a lot of details of his experiences testing existing Ruby modules with the new developers’ release.  There are some small syntactic clean-ups in the language that break a lot of modules.  Most of the fixes are easy, but the module maintainers aren’t always responsive to applying them.  That’s a bit worrying about the way the Ruby language is being handled.

I then went over to Robert Lefkowitz’s talk on open source as a liberal art.  It’s not really something I can put in a nutshell, but it was broad-ranging, thought-provoking, and fun.

After the snack break I learned about the new options for pluggable storage engines for MySQL 5.1 from Peter Zaitsev.  He literally wrote the book on high performance MySQL, so he really knows his stuff.  I learned a lot, but also ended up with more questions than I started with (because I now know enough to ask them).

I then went to the second half of Perl Lightning Talks.  I always love these five minute presentations.  I saw two astounding talks in a row using the new Parrot compiler tools.  The first one defined a brand new language and created a compiler for it.  The second one built an Apache module to run that language persistently.  Each talk completed the entire task and demonstrated the result in its five-minute allotment.

Now I’m at the State of the Onion talk, and learning what Larry thinks about Perl 6.  He’s breaking his tradition by actually giving a technical talk about Perl, but he can’t help but be interesting.

Thursday morning at OSCON

Filed under: OSCON 2008 — Charles Engelke @ 4:50 pm
Tags: , ,

I went from Schwern‘s Skimmable Code talk to one on HDFS Under the Hood, given by Sanjay Radia of Yahoo!  It’s a complex topic, and he did a good job of making it all clear.  I don’t think we have a need for HDFS any time soon, but some of the concepts he showed us might fit our needs soon.

And that was my morning.  It’s going to be a long afternoon (until about 8:00 PM, according to the schedule).

Skimmable Code

Filed under: OSCON 2008 — Charles Engelke @ 1:59 pm
Tags: , ,

Schwern is talking about how to make code easier to read.  Some simple things can make your code much more comprehensible, just like addingspacesbetweenwordsmakesiteasiertoreadnaturallanguage.

Lexical encapsulation – basically, making sure that everything related to the current lines of code is visible on the same page (editor window).  Lexical scope does this for you, so subroutines (which create a new scope) are useful for more than just code reuse.  They also help readability.

This is not news to me.  Back in the 1980s when I taught introduction to programming at UF I instituted an iron-clad rule: no procedure or function could be more than 25 lines of code long.  If any of them even had a 26th line, you lost a third of the score for the assignment.  Have a 31st line?  Lose two-thirds.

What a lot of complaining that led to!  And what a bunch of terribly illogical code decomposition I saw.  But brand new programmers needed that discipline, and by the end of the course their code was much cleaner, easier to read, and (not incidentally) much more likely to work right.  The best programmer I know was a student back then, and told me it was probably the single most useful thing I ever taught.

No links in this post.  Sorry, but the conference network is so bad I can’t even search and open pages to find the right links.  The blogging platform handles this really well, saving my content pretty much continuously, even if slowly.

July 23, 2008

First OSCON Afternoon

Filed under: OSCON 2008 — Charles Engelke @ 7:49 pm
Tags: , ,

I had a conference call I had to make during the lunch break, but it ended early so I was able to eat.  The food was okay, but the dessert was a berry tart that was great.

But I’m here to learn and be inspired (and not about food), so the rest of the day was mostly attending sessions.  I attended Code Reviews for Fun and Profit by Alex Martelli of Google; it was interesting and full of advice.  I think we’re just at the right point to get the most value from his pointers, and we’re going to implement some of them pretty soon if I can convince a few folks.  His slides are available for download; get them and browse them.  He’s got pointers to even more good resources on code reviews.  He even told us that the best book he knows on the subject is available for free from the publisher.  I’ve ordered it (it really is free) and I’ll post about it when I get it.

During the short gap between sessions I moved downstairs for the talk about Hypertable.  This is an open source project to create a tool similar to Google’s Bigtable, and is almost complete (it’s in alpha right now).  There’s a lot of interest in this; more than OSCON expected.  Every chair was taken, and I heard that there were at least 30 people left out in the hall who wanted to attend.  The talk was interesting, but I certainly didn’t absorb anywhere near all the material.  Instead, I’ll be researching this further online.  The performance test results they’re getting are amazing.

We had a longer break to give us time to visit the exhibit hall and get some snacks.  The hall was pretty big and there were lots of t-shirt giveaways, just like at RailsConf.  I still view this as a tech economy indicator.  For at least open source related efforts, it’s booming.

Now I’m learning about Google’s open source efforts.  There’s a lot going on there and we’re all reaping the benefits.  After this I’m going to head downstairs again for my final session of the day, an “Illustrated History of Failure.”  And then I’ll try to integrate all the stuff I’ve been exposed to today.  This has been a diverse and very good program so far.

OSCON Begins

Filed under: OSCON 2008 — Charles Engelke @ 4:49 pm
Tags: , ,

The tutorial days are over, and the conference is beginning.  Actually, it began last night with some open source community awards and talks by Mark Shuttleworth, Robert (r0ml) Lefkowitz, and Damian Conway.  I don’t know why they do that at night (until 10:00 PM, or effectively 1:00 AM for us east coast folks) but they were really good.

This morning began with the typical OSCON sponsor keynotes.  These can be good but generally haven’t been at other OSCONs, and I feel their existence is actually disrespectful to paying attendees.  Our attention is not a commodity to be delivered to sponsors.  At RailsConf this year I saw how the conference got the sponsors and attendees together to benefit both sides, and was impressed.  OSCON doesn’t have a good history of doing that, even though the same company manages both conferences.  I wasn’t feeling well, so I slept in a bit and skipped those sponsored keynotes.  My colleagues tell me that I didn’t miss anything.

The first actual session didn’t start until 10:45 AM, and I’m at it now.  I’m starting with a panel discussion on open education.  The participants are extremely diverse, very knowledgeable, and interesting speakers.  The topic doesn’t directly relate to my own work, but it’s important.  The vast majority of people in the world are extremely poor.  They’re just as smart and capable as those of us in the wealthy countries, but don’t have the resources we have that let us prosper.  Knowledge is one of those resources they need, and the foundation of many other needed things.  Computers and the Internet can have a great impact on that, but only if the information itself is available and accessible.  The panel participants are working on making that happen.

My second session (conveniently in the same room) is on Metaprogramming in Ruby.  Now this is extremely germane to my work, so I hope it’s a great talk.  It’s starting well, and deserves my full attention.

And it kept going well and getting better.  Brian Sam-Bodden of Integrallis is a great speaker.  I loved the way he showed live demos – by recording them and including the recordings (at a nice, reasonable speed) in his slides, instead of jumping from window to window.  He’ll be posting his slides on his company’s website soon (probably on this page), and I recommend you go find them.

July 22, 2008

Catalyst: 21st Century Perl Web Development

Filed under: OSCON 2008 — Charles Engelke @ 5:19 pm

My last tutorial session is on Catalyst, a Perl MVC web framework, given by Matt Trout.  I’ve bought a book on Catalyst, but haven’t yet read it.  As a Perl programmer I’d normally be extremely excited about using Catalyst, but I’m pretty sure that Ruby on Rails will fit my next few projects better.  And Ruby on Rails is such an active technology, growing by leaps and bounds, that there’s an enormous set of resources available for it.  So this talk should be interesting for me, but its usefulness will likely lie in seeing concepts I can apply elsewhere rather than in directly learning Catalyst.

Catalyst leverages lots of tools already on CPAN instead of being one big tightly coupled environment.  It’s MVC by default, but doesn’t have to be.

The approach given in this talk is interesting.  It’s the exact opposite of how introductory Ruby on Rails talks usually go.  We’re jumping right to low-level internals, instead of starting with the proverbial “view from 10,000 feet”.  We’re seeing a lot of Perl code that uses Catalyst modules, and he’s moving very, very quickly.

I’m already lost, and I know Perl well.  I guess if I want to know about Catalyst I’m going to have to read that book, after all!  I’m following the details, but have no idea how to string them together.  I’ll be slipping out of this talk at the break, unless there’s a huge change soon.  The room is packed; there’s a lot of interest in the subject, but I doubt people are getting what they hoped from the talk.

At least I found three of my four tutorials to be pretty good this year.

Secrets of JavaScript Libraries

Filed under: OSCON 2008 — Charles Engelke @ 2:52 pm

Another conference morning, another tutorial.  This time, John Resig of Mozilla is talking about JavaScript libraries.  He’s the creator of jQuery and has also done a lot of work on Firefox and standards.  He’s written one book and is working on another.  So I’m eagerly anticipating the session.

These libraries have several things in common: advanced use of JavaScript, cross-browser support, and best practices.  He likes Prototype, jQuery (surprise!), and base2 (which I hadn’t heard of; it adds missing JavaScript/DOM features to bring all browsers up to a common high level).

JavaScript testing can be painfully simple, so there’s no reason not to do it.  For example, he shows an assert that builds a list of results and then adds that list to the page after the page load is completed.  Delayed tests (asynchronous behavior like timeouts) are harder.

Cross-browser code.

Strategies: pick your browsers, know your enemies, write your code.

To pick browsers, he shows a cost/benefit chart for the most popular browsers.  IE 6 and Opera 9.5 are the only ones with costs exceeding benefit.  IE 6 has the highest cost, but a significant amount of benefit (due to its wide use).  IE 7 has the second highest cost, but much less than IE6, and it has even greater benefit.  Then we have Firefox 3 with a very low cost and very high benefit.  Safari 3 and Opera 9.5 are each pretty low cost, but also pretty low benefit.  Yahoo publishes a list of the level of support they provide to various browsers, based on their assessments of costs versus benefit.  Firefox is “A-grade” on every platform; Opera 9.5 is, too, except on Windows Vista.  The Yahoo list covers more than 99% of the actual users they see hitting their sites.

jQuery supports IE, Firefox, Safari and Opera in their previous (by one), current, and next versions.  The work is always on the current version, but they test both back one and forward one version.  But he notes that this strategy is ignoring market share, which may not be the best approach for others.

Knowing your enemies means knowing what browser bugs and missing features you’re going to have to work with.  But it also means knowing the markup and external code your pages work with.  And even bug fixes can be an enemy, since they can break the code you had to write to work around the old bug.

Browser bugs are generally the main concern here.  Have a very good test suite so you’ll know if library updates cause problems, and also apply the suite to browser pre-releases.  But what is a bug?  It’s only a bug if it’s an error in following specified behavior.  If you use unspecified functionality and it changes from version to version, it’s still not a bug.

External code can cause problems (or you can cause problems in it).  Working to encapsulate your code much more than JavaScript requires is an important way to protect against this.  For example, don’t extend existing objects; someone else might be trying the same thing.  Example: IE creates properties of form elements for each input field that has an id; the name of the property is the value of the id.  So that sometimes overrides standard properties and methods for those form objects.  Since form designers and JavaScript programmers are often different folks, this is particularly vexing.
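The usual way to get that extra encapsulation is to wrap everything in an anonymous function and expose a single namespace object — a sketch of the general pattern (the names are mine):

```javascript
// Module pattern: everything is private by default; exactly one
// global name (MyLib) is exposed.
var MyLib = (function () {
  var counter = 0;            // private state, invisible outside

  function next() {           // private helper
    counter += 1;
    return counter;
  }

  return {                    // the public API
    nextId: function () { return "mylib-" + next(); }
  };
})();
```

Nothing but MyLib leaks into the global scope, so a colliding script can’t clobber your internals (or vice versa).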

JavaScript programmers know not to “sniff” browser versions but instead to detect browser capabilities.  Feature simulation is a more advanced way to do this than simple object detection.  It makes sure not only that an API is available, but also that it’s working as expected.  Write a function that uses the API and examine the result.  Save a flag based on the result for later use.  You don’t care what the function is actually doing (so it should be invisible to users and the rest of your code); you just want to know whether the feature it implements is reliable so you can use it for real application code.  You can do more with this method.  Different browsers can do the same things in different ways.  Feature simulation lets you discover which way will work so you can use that capability.
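Here’s roughly what feature simulation looks like next to plain object detection (my illustration, with Array sort standing in for a genuinely buggy API):

```javascript
// Object detection only asks: does the API exist at all?
var hasSort = typeof [].sort === "function";

// Feature simulation goes further: run the API once, invisibly,
// and save a flag recording whether it behaved as specified.
var SORT_WORKS = (function () {
  try {
    var probe = [3, 1, 2].sort(function (a, b) { return a - b; });
    return probe.join(",") === "1,2,3";
  } catch (e) {
    return false;  // an exception counts as "feature unusable"
  }
})();

// Real application code just consults the saved flag.
function sortNumbers(list) {
  if (SORT_WORKS) {
    return list.slice().sort(function (a, b) { return a - b; });
  }
  // ...otherwise fall back to a hand-written sort here...
  return list.slice();
}
```

The probe runs once at load time; application code only ever reads the flag.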

Interesting example of a browser capability that’s hard to detect: IE 7 introduced a native XMLHttpRequest capability in addition to the ActiveX wrapped method previously used.  But the native one had a bug that the ActiveX one didn’t.  So object detection would lead programmers to use the native one (even though the ActiveX method was still available) and be exposed to a new subtle bug.  And there are some problems that just can’t be discovered, like how some code makes the page actually look, or whether a particular construct will crash a browser (since you’ll never get the result back if you make that happen).

Writing cross-browser code boils down to reducing the number of assumptions you make.  Ideally, assume nothing and check everything you want to use before using it.  But taken to the extreme that’s unreasonable, so you have to draw the line at a point that works for you.

We’re taking an early half-break, and he’s planning another one later.  I think that’s smart; three hour-long parts work better for most of us than two 90-minute ones.  He’s also giving us a link to presentation slides.  But they’re not the ones for this talk!  They’re relevant, though.  And he’s giving us a corrected URL to the correct slides.  The slides ask us to keep them private, so I’m not linking to them here.

Good JavaScript Code

If you understand three things, you’ll be able to write good JavaScript: functions, objects, and closures.

Functions can be defined much as in other languages, but also as anonymous functions assigned to variables or properties of objects:

function f(){ return true; }
var f = function(){ return true; };
window.f = function(){ return true; };

Traditional named functions are available throughout the scope, even in code earlier than the definition, but functions assigned to variables or properties are only available after the assignment has executed.
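A small illustration of that hoisting rule (my own example):

```javascript
// Works: function declarations are hoisted, so this call can
// appear before the definition in the source.
var early = hoisted();          // true

function hoisted() { return true; }

// A function expression assigned to a variable only exists after the
// assignment runs.  Calling notYet() above this line would throw
// "notYet is not a function", because only the var itself is hoisted.
var notYet = function () { return true; };
var late = notYet();            // fine here: true
```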

Okay… you can have named anonymous functions (anonymous named functions?) — more formally, named function expressions:

var ninja = {
   yell: function yell(n){
      return n > 0 ? yell(n-1) + "a" : "hiy";
   }
};
But the function name yell is only available within the scope where it’s defined (the body of ninja’s definition).  The property name yell is available wherever you can refer to the object.  They’re the same thing here, but they don’t have to stay that way (the function can be assigned to another object, or the ninja variable can be reassigned).  The point is to let recursion keep working in those cases.  If the function were truly anonymous it would have to call itself through the object it’s defined on, as ninja.yell; if ninja is later reassigned, that recursive call breaks.

var x = ninja;
ninja = {};
x.yell(4);

This works if yell is a named anonymous function.  But if it were truly anonymous, the x.yell(4) call would fail partway through, because it would try to call ninja.yell recursively, which no longer exists.

Since functions are full-fledged objects, they can have properties attached to them.  We see an example of self-memoization, where the function saves prior results and returns them instead of recomputing them when called again with the same arguments.
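A sketch of self-memoization, using a primality test as the example computation (the function and property names are mine):

```javascript
// Results are cached as a property on the function object itself,
// so repeat calls with the same argument skip the computation.
function isPrime(n) {
  if (isPrime.cache[n] !== undefined) {
    return isPrime.cache[n];     // cached answer
  }
  var prime = n > 1;
  for (var i = 2; i * i <= n; i++) {
    if (n % i === 0) { prime = false; break; }
  }
  isPrime.cache[n] = prime;      // remember for next time
  return prime;
}
isPrime.cache = {};
```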

Functions exist in a context, and you can refer to the context as this.  I’m used to seeing that when the function is a method, and this refers to the object.  But it’s usable even for global named functions (this is the global context, so this.something is the global variable something).  The .call() and .apply() methods for all functions set the context of the function as well as invoke it.
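For example (my own toy code):

```javascript
function whoAmI() {
  return this.name;  // "this" is whatever context the caller supplies
}

var alice = { name: "Alice" };
var bob = { name: "Bob" };

// .call(context, arg1, arg2, ...) sets the context and passes arguments
// individually; .apply(context, argsArray) takes them as an array.
var calledAs = whoAmI.call(alice);    // "Alice"
var appliedAs = whoAmI.apply(bob);    // "Bob"
```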

Declarations inside a function are private, but you can make them properties of this to make them externally available:

function Secret(){
   var f = function(){ return true; };
   this.g = function(){ return false; };
}
Outside code can call g on an object created with new Secret(), but it can’t reach f.  I think.  If I’m following this right.

The last section of the talk covers a wide variety of advanced topics in a lot of small bites.  It’s reminiscent of the extremely advanced Perl talks that Mark-Jason Dominus gives, and it’s interesting, but way above what I expect to be using directly.  I’m much more likely to be the user of a library that requires these techniques inside it, than to write one.

A really excellent, useful talk, even if I did get bogged down a bit at the end.
