In my previous blog posting, I wrote about what I would expect node.js to do in order to become mature. I introduced the idea of a framework which lets you define a protocol and some handlers, so that the developer can concentrate on writing useful business software rather than technical code, in a manner very similar to Java EE. Through that posting, I came to learn about a thing called Comet. I had stated that using a non-blocking server wouldn't really be any more useful than a blocking one in a typical web application (HTTP), and so I had created an example based on my own protocol and a VoIP server, for streaming binary data to many concurrently connected clients.
I have now read up on Comet and come to realise that there is indeed a good case for a non-blocking server on the web. That case is pushing data back to the client, such as continuously publishing the latest stock prices. While this example could be solved using polling, true Comet uses long-polling or, even better, full-on push. A great introduction I read was here. The idea is that the client makes a call to the server and, instead of returning data immediately, the server keeps the connection open and returns data at some time in the future, potentially many times. This is not a new idea - the term Comet seems to have been coined in about 2006, and the article I refer to above is from 2009. I think I've arrived at this party very late :-)
My newfound case of server push for non-blocking HTTP, and a strong curiosity, drove me to knuckle down and start coding. Before long, I had a rough implementation of the HTTP protocol for my little framework, and I was able to write an app for my server using handlers that subclassed the HttpHandler, which was, for all intents and purposes, a Servlet.
To get Comet push to work properly, you get the client to "log in" and register itself with the server. In my demo, I didn't check authorisations against a database, as you might do in a real app, but I had the concept of a channel, to which any browser client could subscribe. During this login, the client says which channel it wants to subscribe to, and the server adds the client's non-blocking connection to its model. The server responds using chunked transfer encoding, because that way the connection stays open and you don't need to state up front how much data you will send back. At some time in the future, when someone publishes something, the server can use the still-open connection held in its model to push the published data back to the subscribed client, by sending another chunk of data.
The server implementation wasn't too hard, but the client posed a few problems, until I realised that the data was arriving in the ajax client with a ready state of 3, rather than the more usual 4. The ajax client's onreadystatechange callback function was also given every byte of data received so far in its responseText, rather than just the new stuff, so I had some fiddling around to do until I could get the browser to append just the new stuff to the innerHTML attribute of a div on my page. Anyway, after just a few hours, I had an app that worked quite well. But it wasn't entirely satisfactory, partly because, as I stated in the previous posting, the server is still a little buggy, especially when the client drops a connection, for example because the browser page is closed. I had also ended up implementing the HTTP protocol for my framework, which seemed to be reinventing the wheel - servlet technology does all this stuff already, and much better than I can hope to do it. One of the reasons that I don't like node.js is that everything is being reinvented.
So, as the article I referenced above indicated, version 3.0 of the Servlet spec should be able to handle Comet. I downloaded Tomcat 7.0, which has a Servlet 3.0 container, and ported my app code to proper Servlets. It took a while to work out exactly how to use the new asynchronous parts of servlets, because there aren't all that many accurate tutorials out there. The servlet spec (JSR 315) helped a lot. Once I cracked how to use the async stuff properly, I had a really satisfying solution to my push requirements.
The first step was to reconfigure Tomcat so that it uses non-blocking I/O (NIO) for its connector protocol. The point of this is that I want to keep a connection open to each client in order to push data to it. I can't rely on the one-thread-per-request paradigm, because context switching and per-thread memory requirements are likely to be a performance killer. All you need to do to turn Tomcat into an NIO server is to change the protocol attribute of the connector in Tomcat's server.xml file from the default to the slightly longer class name of the NIO implementation.
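Concretely, the connector in server.xml ends up looking something like this (the protocol class name is Tomcat 7's NIO implementation; the other attributes shown are illustrative defaults, not special to this demo):

```xml
<!-- NIO connector: pushes don't tie up one thread per open connection -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443" />
```

rather than the default:

```xml
<Connector port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
```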
The second step was to create two servlets. The first, LoginServlet, handles the client "logging in" and subscribing to a channel. That servlet looks like this:
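A sketch of the idea, assuming the javax.servlet 3.0 API - the URL mapping, the "channels" context attribute and the omitted AsyncListener are my illustrative assumptions rather than the demo's exact code:

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical reconstruction of the login servlet, not the original listing.
@WebServlet(urlPatterns = "/login", asyncSupported = true)
public class LoginServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String channel = req.getParameter("channel");

        // No Content-Length header is set, so the container uses chunked
        // transfer encoding and the connection stays open.
        resp.setContentType("text/plain");
        resp.flushBuffer();

        // Switch to async mode: the response is NOT closed when doGet returns.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(0); // or a finite timeout, matched by a client reconnect/heartbeat
        // an AsyncListener should be added here to remove this context from
        // the model on onError/onTimeout/onComplete (omitted for brevity)

        // Application-scoped model: channel name -> open subscriber contexts.
        // The "channels" attribute is assumed to be initialised elsewhere,
        // e.g. by a ServletContextListener.
        @SuppressWarnings("unchecked")
        ConcurrentMap<String, Queue<AsyncContext>> model =
                (ConcurrentMap<String, Queue<AsyncContext>>) getServletContext().getAttribute("channels");
        Queue<AsyncContext> subscribers = model.get(channel);
        if (subscribers == null) {
            model.putIfAbsent(channel, new ConcurrentLinkedQueue<AsyncContext>());
            subscribers = model.get(channel);
        }
        subscribers.add(ctx);
    }
}
```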
The inline comments describe most of the choices I made. I added a listener to the context so that we get an event in the case of a disconnected client, so that we can tidy up our model - the full code is in the ZIP at the end of this article. Importantly, this servlet does no async processing itself. It simply prepares the request and response objects for access at some time in the future. We stick them (via the async context) into a model which is in application scope. That model can then be used by the servlet which receives a request to publish something to a given channel:
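A sketch of such a publish servlet, under the same assumptions as above (the mapping and parameter names are mine, not necessarily the demo's):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Queue;
import java.util.concurrent.ConcurrentMap;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical reconstruction of the publish servlet, not the original listing.
@WebServlet(urlPatterns = "/publish", asyncSupported = true)
public class PublishServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        final String channel = req.getParameter("channel");
        final String message = req.getParameter("message");

        // Let the container run the (potentially long) publish on one of its
        // own pool threads, rather than spawning a thread ourselves.
        final AsyncContext publisherCtx = req.startAsync();
        publisherCtx.start(new Runnable() {
            public void run() {
                @SuppressWarnings("unchecked")
                ConcurrentMap<String, Queue<AsyncContext>> model =
                        (ConcurrentMap<String, Queue<AsyncContext>>) getServletContext().getAttribute("channels");
                Queue<AsyncContext> subscribers = model.get(channel);
                if (subscribers != null) {
                    for (AsyncContext subscriber : subscribers) {
                        try {
                            PrintWriter out = subscriber.getResponse().getWriter();
                            out.println(message);
                            out.flush(); // each flush sends one chunk to that client
                        } catch (Exception e) {
                            subscribers.remove(subscriber);
                            subscriber.complete(); // client has gone away; close its context
                        }
                    }
                }
                publisherCtx.complete(); // the publisher's own request is now done
            }
        });
    }
}
```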
Again, there are plenty of comments in the code. In this servlet, we actually do some (potentially) long-running work. Many online examples of Servlet 3.0 async support show handing off work to an Executor. The async context, though, provides the ideal way to do this via the container, using its start(Runnable) method. The container implementation then decides how to handle the task, rather than the developer having to worry about spawning threads, which on app servers like WebSphere is illegal and leads to errors. In true Java EE fashion, the developer can concentrate on business code, rather than technicalities.
Something else that might be important in the above code is that the publishing is done on a different thread. Imagine publishing data to ten thousand clients. In order to finish within a second, each push is going to have to complete in less than a tenth of a millisecond. That means doing something useful like a database lookup is not going to be feasible. On a non-blocking server, we can't afford to take a second to do something - many would argue a second is an eternity in such an environment. The ability to hand such work off to a different thread is invaluable, and sadly something node.js cannot currently do, although you can hand the task off to a different process, albeit in a way that is potentially messier than that shown here.
On the client side, the important parts are that we need to listen for events on both ready states 3 and 4, rather than the usual 4 alone. While the connection is kept open, the client only receives data in ready state 3, and every time it receives a chunk, the ajax.responseText attribute contains all chunks received since login, rather than just the newest chunk. This could be bad if the connection receives tons of data - eventually the browser will run out of memory! You could measure the number of bytes sent to each client on the server, and when it's above a given threshold, force the client to disconnect by ending the stream (call the complete() method on the async context of the relevant client just after publishing a message to it). The client automatically logs itself back into the server when the server disconnects.
Instead of the reconnect solution for dropped or timed-out connections (which is very similar to long polling), we could add a heartbeat which the server sends to each subscribed client. The heartbeat period would need to be slightly less than the connection timeout. The exact details could get messy - do you send a heartbeat every second, but only to clients who need it? Or do you send it to every client, say every 25 seconds, if the timeout is, say, 30 seconds? You could use performance tuning to determine whether that is better than the reconnecting I have shown above. Then again, a heartbeat is good for culling closed connections, because it tests the connection every so often and gets an exception if the push fails. On the other hand, the container informs the listener that we added to the login async context if a disconnect occurs too, so perhaps we don't need a heartbeat - you decide :-)
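To sketch the heartbeat idea in a container-independent form (this is my illustration, not part of the demo): Subscriber here stands in for whatever mechanism pushes a chunk to one client, and any subscriber whose push fails is culled from the model.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of a server-side heartbeat that culls dead subscriber connections. */
class Heartbeat {

    /** Stands in for a subscriber's open connection (e.g. a chunked response). */
    public interface Subscriber {
        void ping() throws IOException; // push an empty/no-op chunk
    }

    private final Queue<Subscriber> subscribers = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    public void subscribe(Subscriber s) { subscribers.add(s); }

    public int subscriberCount() { return subscribers.size(); }

    /** Ping every subscriber once; drop any whose connection has died. */
    public void beat() {
        for (Iterator<Subscriber> it = subscribers.iterator(); it.hasNext(); ) {
            try {
                it.next().ping();
            } catch (IOException e) {
                it.remove(); // the push failed, so the client is gone
            }
        }
    }

    /** e.g. beat every 25 seconds against a 30-second connection timeout. */
    public void start(long periodSeconds) {
        timer.scheduleAtFixedRate(this::beat, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public void stop() { timer.shutdownNow(); }
}
```

Run against the Servlet model above, Subscriber would wrap an AsyncContext and write a chunk to its response; the container's disconnect listener makes this partly redundant, which is exactly the trade-off discussed.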
Now, we need a way to publish data - that's easy - I just type the following URL into the browser, and it sends a GET request to the publish servlet:
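Assuming the publish servlet is mapped to /publish in a webapp called push and takes channel and message parameters (my naming, not necessarily the demo's exact URL), it would look something like:

```
http://localhost:8080/push/publish?channel=stocks&message=IBM+at+123.45
```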
Keep refreshing the browser window with that URL, and the other window almost instantly gets updates, showing the latest message at the bottom. I tested it using Firefox as the subscriber and Chrome as the publisher.
I haven't checked out scalability, because I have assumed that Tomcat's NIO connector has been well tested and performs well. I'll let someone else play with the scalability of Servlet 3.0 for a push solution. This article is about showing how easy it is to implement Comet push using Java EE Servlets. Note that it was possible before too, because servers like Jetty and Tomcat provided bespoke Comet interfaces, but now, with the advent of Servlet 3.0, there is a standardised way to do it.
The complete code for this demo is downloadable here.
Copyright © 2011 Ant Kutschera