Category Archives: Events

Google I/O 2011 Event Sells Out in 59 Minutes!

For the past few years, Google’s “I/O” developer conference has been THE conference for many developers to attend. It’s the place where you go to learn about the latest and greatest tools and services coming out of Google… and where you get to try them before everyone else. Two years ago I/O attendees were the first to get Google Wave accounts… last year every attendee got an Android phone…

This year proved no different – the Google I/O 2011 registration opened up this morning … and then sold out less than an hour later!

To put this in perspective… some 5,000+ developers attend the event!

Kudos to Google for having created a conference that so many people want to attend.

The Twitter stream tells the tale…

And this compilation of tweets from Google’s Vic Gundotra shows it, too:

[Embedded compilation: Googleio2011]

P.S. Needless to say, some folks who had been eagerly waiting to register weren’t too pleased, particularly given some of the apparent difficulties with getting access to the registration site.

Node.js, Doctor’s Offices and Fast Food Restaurants – Understanding Event-driven Programming

The Frying Dutchman

Are you struggling to understand “event-driven” programming? Are you having trouble wrapping your brain around “blocking” vs “non-blocking” I/O? Or are you just trying to understand what makes Node.js different and why so many people are talking about it? (and why I keep writing about it?)

Try out one of these analogies…

The Doctor’s Office Reception Line Analogy

In the excellent episode 102 of the Herding Code podcast, Tim Caswell relates event-driven programming to standing in the line at a doctor’s office to see the receptionist. Inevitably, at least here in the USA, there are additional forms to fill out with insurance info, privacy releases, etc.

A Traditional Model

In a traditional thread-based system, when you get to the receptionist you stand at the counter for as long as it takes you to complete your transaction. If you have to fill out 3 forms, you would do so right there at the counter while the receptionist just sits there waiting for you. You are blocking her or him from servicing any other customers.

The only real way to scale a thread-based system is to add more receptionists. This, however, has financial implications in that you have to pay more people, and physical implications in that you have to make room for the additional receptionist windows.

Events To The Rescue

In an event-based system, when you get to the window and find out you have to complete additional forms, the receptionist gives you the forms, a clipboard and a pen and tells you to come back when you have completed them. You go sit down in the waiting room and the receptionist helps the next person in line. You are not blocking her from servicing others.

When you are done with your forms, you get back in line and wait to speak with the receptionist again. If you have done something incorrectly or need to fill out another form, he or she will give you the new form or tell you the correction and you’ll repeat the process of going off, doing your work, and then waiting in line again.

This system is already highly scalable (and is in fact used in most doctor’s offices I visit). If the waiting line starts getting too long, you can certainly add an additional receptionist, but you don’t need to do so nearly as often as you would in a thread-based system.

The Fast Food Restaurant Analogy

It struck me that this is also very similar to ordering your food at a fast-food restaurant or street vendor.

The thread-based way would be to get to the front of the line, give your order to the cashier and then wait right there until your order was cooked and given to you. The cashier would not be able to help the next person until you got your food and went on your way. Need to service more customers… just add more cashiers!

Of course, we know that fast food restaurants don’t work that way. They are very much event-driven in that they try to make those cashiers as efficient as possible. As soon as you place your order, it’s sent off for someone to fulfill while the cashier is still taking your payment. When you are done paying, you have to step aside because the cashier is already looking to service the next customer. In some restaurants, you might even be given a pager that will flash and vibrate when your order is ready for pickup (My local Panera Bread does this).  The key point is that you are not blocking the receiving of new orders.

When your food is ready, the cashier – or someone – will signal you by calling out your name or order number, or by triggering your pager.  The event of your order being ready causes that person to perform some function/action. (In programming lingo, this would be thought of as a “callback function”.)  You will then go up and get your food.
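To see that pattern in code rather than in burgers, here is a tiny JavaScript sketch. Everything in it – placeOrder, the two-second “kitchen” delay – is made up for illustration; the point is that the callback runs later, when the “order ready” event fires, while the cashier keeps working.

```javascript
// Toy sketch of the pager analogy (all names here are illustrative).
function placeOrder(order, onReady) {
  console.log("Order taken:", order);
  // The "kitchen" works in the background; the cashier moves on right away.
  setTimeout(function () {
    onReady(order); // the pager buzzes -- your callback function runs
  }, 2000);
}

placeOrder("burrito", function (order) {
  console.log("Pager buzzed! Go pick up your", order);
});

console.log("Cashier is already taking the next order...");
```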

So What Does This Have To Do With Node.js?

The “traditional” model of web servers[1] has always been thread-based. You launch Apache or any other web server and it starts receiving connections. When it receives a connection, it holds that connection open until it has served the requested page or completed whatever other transaction was sent. Even if it takes only a few microseconds to retrieve a page from disk or write results to a database, the web server is blocking on that input/output operation. (This is referred to as “blocking I/O”.) To scale this type of web server, you need to launch additional copies of the server (referred to as “thread-based” because each copy typically requires another operating system thread).
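In Node.js terms, the blocking style looks something like this sketch (the file path is made up; the synchronous fs.readFileSync call is the blocking part):

```javascript
// Blocking (synchronous) I/O -- nothing else runs on this thread
// until the read finishes.
var fs = require('fs');

var page = fs.readFileSync('/var/www/about.html', 'utf8'); // blocks here
console.log('Read ' + page.length + ' characters');
console.log('Only now can we do anything else');
```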

In contrast, Node.js uses an event-driven model where the web server accepts the request, spins it off to be handled, and then goes on to service the next web request. When the original request is completed, it goes back into the processing queue, and when it reaches the front of the queue the results are sent back (or whatever the next action is). This model is highly efficient and scalable because the web server is essentially always accepting requests – it never sits waiting for a read or write operation to finish. (This is referred to as “non-blocking I/O” or “event-driven I/O”.)
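The non-blocking equivalent of the sketch above hands the work off to a callback (again, the path is just an example):

```javascript
// Non-blocking (asynchronous) I/O -- the Node.js way.
var fs = require('fs');

fs.readFile('/var/www/about.html', 'utf8', function (err, page) {
  // This callback runs later, once the read has completed.
  if (err) throw err;
  console.log('Read ' + page.length + ' characters');
});

console.log('Free to accept other requests while the file is being read');
```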

To put it a bit more concretely, consider this process (sketched in code after the list):

  1. You use your web browser to make a request for “/about.html” on a Node.js web server.
  2. The Node server accepts your request and calls a function to retrieve that file from disk.
  3. While the Node server is waiting for the file to be retrieved, it services the next web request.
  4. When the file is retrieved, a callback function is inserted into the Node server’s queue.
  5. The Node server executes that function, which in this case would render the “/about.html” page and send it back to your web browser.
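A minimal sketch of that flow might look like this (the port and file location are arbitrary, and error handling is kept to the bare minimum):

```javascript
// Minimal sketch of a Node.js server following steps 1-5 above.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  // Steps 1-2: a request comes in and we ask for the file from disk.
  if (req.url === '/about.html') {
    fs.readFile(__dirname + '/about.html', function (err, contents) {
      // Steps 4-5: this callback is queued when the read finishes;
      // Node runs it and sends the page back to the browser.
      if (err) {
        res.writeHead(500);
        res.end('Error reading file');
        return;
      }
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(contents);
    });
    // Step 3: readFile returns immediately, so the server is already
    // free to accept the next request.
  } else {
    res.writeHead(404);
    res.end('Not found');
  }
}).listen(8080);
```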

Now, sure, in this case it may only take microseconds for the server to retrieve the file, but…

microseconds matter!

Particularly when you are talking about highly-scalable web servers!

This is what makes Node.js different and of such interest right now. Add in the fact that it uses the very common language of JavaScript, and it becomes an easy way for developers to create fast, highly scalable servers.

Do these analogies help? Do you have another analogy you use to explain “event-driven programming” or “event-driven I/O”?

[1] While I’m talking about “web servers” here, Node.js lets you write all sorts of different types of servers for many other protocols beyond HTTP. They all have similar issues (blocking vs non-blocking I/O).

Image credit: gerry balding on Flickr