From: Jan Kneschke <jan <at> kneschke.de>
Subject: Re: Comet: Beyond AJAX
Newsgroups: gmane.comp.web.lighttpd
Date: Monday 10th April 2006 18:15:18 UTC
On Mon, Apr 10, 2006 at 11:32:37AM -0500, David Phillips wrote:
> On 4/10/06, Ryan Schmidt  wrote:
> > Someone asked how lighty compared, since it also doesn't use OS
> > threads, but no answer was provided, so I put the question to the
> > list whether anybody has any information or thoughts on the subject.
> 
> Event-based web servers should be able to handle at least thousands of
> idle connections on operating systems with scalable event
> notification APIs (epoll on Linux, kqueue on BSD).
> 
> The first issue that comes to mind is timeouts.  You mention 30-60
> seconds.  What happens after that?  The client/server times out and a
> new request is made?
> 
> The bigger issue is the application, not the web server.  For example,
> using FastCGI PHP under lighttpd, you are going to have a separate PHP
> process tied up for each request, just like with Apache.  In order to
> support this type of request model, your application also needs to be
> event driven.  This might be easier in an integrated environment like
> Twisted.  I'm not sure if the FastCGI protocol supports this, and if
> so, how well it is supported by existing web servers.
> 
> Getting back to the timeout issue, with such short timeouts, it might
> be best to simply issue a refresh request every 30-60 seconds.  It is
> likely that the server connection will already be open as a
> keep-alive, due to the frequency of other AJAX requests.  In this case
> you will actually have fewer connections.

My idea on this is to decouple the request from the COMET stream.
As far as I understand COMET, it is a 'one-receiver-multiple-senders'
concept: the channel (an HTTP response) is kept open while the
server/app sends multiple responses to the client, even without
browser interaction.

This is what a (de)multiplexer does, and it is the normal behaviour in
a chat application. So let's take a chat as the basic example and see
how to implement it:

You have 10 users and (to simplify) 1 reader. The users send messages
to a chat app which sends them out to the reader. HTTP can't send a
response to another client, and a FastCGI app is tied to the requesting
HTTP connection.

What to do? Bind a FastCGI backend to each HTTP connection, keep it
running for as long as the connection is held open, and distribute the
messages between the FastCGI backends? No, you don't have enough
memory.

All you need is a way to let your FastCGI backend distribute its
responses to multiple connections. Welcome to the multiplexer, or
rather mod_multiplex:

The connection comes in and mod_multiplex gives it a token
(MULTIPLEX-TOKEN: <token>) which the backend can later use to send a
response to that connection.
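
Just to make the idea concrete, here is a rough Python sketch of the
backend side (the HTTP_MULTIPLEX_TOKEN variable, the on-disk token
store and the handler name are all made up; nothing of this exists in
lighttpd yet):

import shelve

def handle_subscribe(environ):
    # mod_multiplex would hand the token to the backend, e.g. as an
    # extra request header; the exact variable name is an assumption
    token = environ.get("HTTP_MULTIPLEX_TOKEN")
    nick = environ.get("QUERY_STRING") or "anonymous"
    # remember token -> user, so a later (unrelated) request can
    # address this connection; the backend itself can exit while the
    # webserver keeps the client's HTTP channel open
    with shelve.open("/tmp/chat-tokens") as tokens:
        tokens[nick] = token
    return "Status: 200\r\nContent-Type: text/plain\r\n\r\n"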

The response from the backend is: 

Status: 200
Content-Type: application/x-multiplex; boundary=foobar

--foobar
X-Multiplex-Sent-To: <token>
...
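
A backend could assemble such a response roughly like this; the
Status, Content-Type, boundary and X-Multiplex-Sent-To details are
taken from the format above, everything else (function name, message
layout) is just illustration:

def build_multiplex_response(messages, boundary="foobar"):
    # messages: list of (token, text) pairs, one part per target
    # connection/token
    parts = []
    for token, text in messages:
        parts.append("--%s\r\n"
                     "X-Multiplex-Sent-To: %s\r\n"
                     "\r\n"
                     "%s\r\n" % (boundary, token, text))
    return ("Status: 200\r\n"
            "Content-Type: application/x-multiplex; boundary=%s\r\n"
            "\r\n"
            "%s--%s--\r\n" % (boundary, "".join(parts), boundary))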

mod_multiplex jumps in again, takes the content, splits it up into
pieces (one for each connection/token) and takes care of feeding the
right connections. The FastCGI backend would just run to generate the
content and exit afterwards; the HTTP channel would still be maintained
by the webserver, while the FastCGI backend could already handle
another request for another connection.
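
Inside the webserver the splitting step could look roughly like this
(only a sketch of the idea; 'connections' is assumed to map tokens to
the open HTTP connections):

def demultiplex(body, boundary, connections):
    # split the backend response into its parts and feed each part to
    # the connection(s) named in X-Multiplex-Sent-To
    for part in body.split("--" + boundary):
        part = part.strip()
        if not part or part == "--":
            continue
        headers, _, payload = part.partition("\r\n\r\n")
        for line in headers.splitlines():
            if line.lower().startswith("x-multiplex-sent-to:"):
                tokens = line.split(":", 1)[1].split(",")
                for token in (t.strip() for t in tokens):
                    conn = connections.get(token)
                    if conn is not None:
                        # keep the HTTP channel open, just append
                        # this piece to the pending response
                        conn.send(payload)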

For the server push, the only remaining problem is getting access to
the tokens in order to send something to a waiting client.
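
A normal, short-lived request (say, a posted chat line) could then
trigger the push by looking up the stored tokens, reusing the made-up
helpers from the sketches above:

import shelve

def handle_post_message(environ, text):
    # one part per subscriber token; mod_multiplex would fan the
    # parts out to the waiting connections
    with shelve.open("/tmp/chat-tokens") as tokens:
        messages = [(token, text) for token in tokens.values()]
    # build_multiplex_response() is the sketch from above
    return build_multiplex_response(messages)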

    Jan
  
-- 
Jan Kneschke                                     http://jan.kneschke.de/
Perhaps you want to say 'thank you, jan':    http://jk.123.org/wishlist/
 