Hi everyone, I am trying to create a system that will allow me to update the browsers of each client (person) currently viewing a webpage.

Don't worry so much about why I want to do this; please critique my two solutions and tell me which one is the most efficient (least server strain) and provides the fastest response times (lowest latency, best simulation of "real-time" updates).

Here are my two ideas; please help me determine which is best.

Can't insert an image, so please view this link to see the image!

http://neverlettinggo.com/design%20ideas.png

Your diagrams do not mention the type of protocol you're using. I'm assuming your socket daemon talks directly to the browser through a TCP connection, i.e., you have a browser plugin such as Flash?

Having the server push data to the clients is the best way to go, so a socket daemon would be better suited. However, you can also get HTTP to push data if you use a server that supports it; usually this means specialized servers, or specialized modules for HTTP servers. HTTP push is usually called Comet. If you are using a browser plugin such as Flash, then you should just go with TCP, since you already have the support.
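For reference, the Comet (HTTP long-poll) idea can be sketched in a few lines of PHP. Everything below is illustrative, not any particular Comet server's API: the in-memory `$event_store` stands in for the events table, and `fetch_events_after()` stands in for the real query.

```php
<?php
// Long-poll sketch: hold the request open until an event newer than $lastId
// exists, then respond immediately. $event_store stands in for the DB table.
$event_store = [];

function fetch_events_after($lastId) {
    // Stand-in for: select * from events where id > $lastId
    global $event_store;
    return array_filter($event_store, function ($e) use ($lastId) {
        return $e['id'] > $lastId;
    });
}

function long_poll($lastId, $timeoutSec = 30) {
    $start = time();
    while (time() - $start < $timeoutSec) {
        $events = fetch_events_after($lastId);
        if ($events) {
            return array_values($events); // respond as soon as anything arrives
        }
        usleep(250000); // re-check every 250 ms instead of hammering the store
    }
    return []; // timed out: the browser reconnects and waits again
}
```

The browser re-requests the endpoint as soon as each response arrives, so from the client's point of view events appear to be pushed.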

If you'll be handling a lot of users, it is probably a better choice to use an existing game server or IM server. I don't know any game servers, but an XMPP server such as ejabberd (Erlang) or Openfire (Java) can handle a hundred thousand or more TCP connections on one machine. You would just need to write a module to handle your game protocol, which you can define in XML.

An example of a game run over XMPP is http://www.chesspark.com/

Also look into Red5:
http://osflash.org/red5

Thanks, yes, I will be using Flash, and for games UDP will be better than TCP, so...

Anyways, I don't understand why you think Option 1 will be better than Option 2. Option 1 has to, in an infinite loop, query the database for changes AND send these changes to ALL clients (unless if-statements prevent the server from sending changes to some clients unnecessarily).

Why is that a better option than just clients querying a database on their own, without the need for a server?

And yes, I am well versed in Comet Programming. :)

I didn't realize the clients would be querying the database directly in option 2. Do you fully trust your clients? Unless you fully trust your clients and encrypt the traffic, I don't see how you can let clients query the database directly, even read-only.

Usually you'd need an interface there to enforce access control to the database data.

In any case, letting the clients query the database directly will still be less efficient than limiting the processes that query the database.

If you have 100 clients, the worst case in case 2 is 100 concurrent queries to the DB. In case 1, however, you only have 1 query for all 100 clients at a time (keeping things linear here for simplicity).

Another way of putting it. Say your changes are in a table called "events".

In case 2 you have:

select * from events where id > $last_event_id and client_id = $myid;

You then multiply this by 100 concurrent queries.

For case 1 you have:

$result = db_query("select * from events where id > $last_event_id");
foreach ($result as $row) {
    dispatch_event_to_client($row->client_id, $row);
}

Where dispatch_event_to_client() could be spawned or passed to a new process so it doesn't block waiting on network latency.

For case 2, you have 100 queries waiting on external network latency, compared to 1 query over a local socket.

Usually, if you can avoid the database for event notification, you should. Keep the events in memory and broadcast them to the clients; copy them to the database when no more clients need them from memory. That way there is no blocking on event notification, and you still keep the changes for later retrieval.
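A minimal sketch of that memory-first idea, assuming the clients are plain sockets; the `EventBroker` name and its methods are invented for the example:

```php
<?php
// "Memory first, database later": broadcast events straight from an in-memory
// queue, and only copy them to durable storage once clients no longer need them.
class EventBroker
{
    private $pending = [];  // events not yet flushed to the database
    private $archived = []; // stand-in for the events table

    public function publish($event, array $clientSockets)
    {
        $this->pending[] = $event;
        foreach ($clientSockets as $sock) {
            // No DB round-trip on the hot path; just push to each client.
            socket_write($sock, json_encode($event) . "\n");
        }
    }

    public function flushToDatabase()
    {
        // Real code would INSERT here; we just move events to the archive.
        foreach ($this->pending as $event) {
            $this->archived[] = $event;
        }
        $this->pending = [];
        return count($this->archived);
    }
}
```

The broadcast never touches the database, so event notification can't block on a query; the flush happens off the hot path.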

"Unless you fully trust your clients, and encrypt traffic, I don't see how you can let clients query the database directly, even if read only.
Usually you'd need an interface there to enforce access control to the database data."

You are looking at a top-level diagram, of course there will be an interface for client-database interaction.

And I think you missed something else as well. I have an idea I am trying to bring to life, which involves simulating real time, which means the LOWEST amount of delay per response is absolutely essential to the system.

You say I can just query once and then dispatch, but you seem to have forgotten you need to dispatch to everyone. Which means +1 for server-to-database and +100 for server-to-client. That's 101 as opposed to the 100 you said earlier.

Modern-day servers are designed to get hit with queries constantly, so I don't think we'll have an issue there, but I just hope there isn't too much of a delay in response. When running, I can use some code to start a timer before the query, stop it after the response is received, and display the time each response took as a benchmark.
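That timing idea is easy to do with `microtime()`; the helper below is a sketch, with the closure standing in for whatever query is being measured:

```php
<?php
// Wrap any call in a timer and report how long the round trip took.
function time_query(callable $fn)
{
    $start = microtime(true); // high-resolution wall clock, in seconds
    $result = $fn();
    $elapsedMs = (microtime(true) - $start) * 1000;
    return [$result, $elapsedMs];
}

list($rows, $ms) = time_query(function () {
    usleep(5000); // stand-in for the real query (~5 ms)
    return ['row'];
});
printf("query returned %d row(s) in %.2f ms\n", count($rows), $ms);
```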


I like this option (option 2) because I don't know how to program sockets in PHP, only C++. If PHP sockets are easier than C++ sockets, I think Option 1 would be the best option. (I should add I also don't know about spawning separate threads.)

Tell ya what, I'm gonna spend a few hours reading PHP socket server/client tutorials, and try to come up with a basic server that can send a message out to 1 client, then all clients. That's a good start.

Digital-ether, keep it real buddy. Thanks for the help thus far!


"And I think you missed something else as well. I have an idea I am trying to bring to life, which involves simulating real time, which means the LOWEST amount of delay per response is absolutely essential to the system.

You say I can just query once and then dispatch, but you seem to have forgotten you need to dispatch to everyone. Which means +1 for server-to-database and +100 for server-to-client. That's 101 as opposed to the 100 you said earlier."

I did take into account the dispatching of the event to all clients.

$result = db_query("select * from events where id > $last_event_id");
foreach ($result as $row) {
    dispatch_event_to_client($row->client_id, $row);
}

It is the "dispatch_event_to_client()" function in this case.

It isn't the sending of data to the client that I was talking about; that is impossible to remove from the system. The client has to know that an event happened.

What you can remove is the requests: the client does not have to ask for the event. A better way to put it: in case 2 you have 100 requests and 100 responses.

In case 1 you have one request and 100 responses, and most of the time only that one request is happening. Note that when you need to send 100 responses, you do so in another process or thread.

In case 2, even if nothing happened, you still have 100 requests and 100 responses.
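The "send the 100 responses in another process" part can be sketched with `pcntl_fork()` (CLI PHP with the pcntl extension; `dispatch_all()` and the socket handling are illustrative names, not part of any discussed code):

```php
<?php
// After the single DB query, fork: the child does the slow network sends,
// so the main loop never blocks waiting on client latency.
function dispatch_all(array $events, array $clientSockets)
{
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    }
    if ($pid === 0) {
        // Child: push every event to every client, then exit.
        foreach ($events as $event) {
            foreach ($clientSockets as $sock) {
                socket_write($sock, json_encode($event) . "\n");
            }
        }
        exit(0);
    }
    return $pid; // Parent: returns immediately to watch for the next event
}
```

The parent only pays the cost of the fork, not of the 100 network writes.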

"Modern-day servers are designed to get hit with queries constantly, so I don't think we'll have an issue there, but I just hope there isn't too much of a delay in response. When running, I can use some code to start a timer before the query, stop it after the response is received, and display the time each response took as a benchmark."

Web servers are built to serve large chunks of data to small numbers of clients. A large amount of resources is allocated for every single request and response. They are NOT built for sending small amounts of data to large numbers of clients.

Web servers are built on HTTP, and HTTP is not built for sending lots of small messages to lots of clients. Take a look at how often Twitter goes down; this is due to the large volume of small updates.

"And I think you missed something else as well. I have an idea I am trying to bring to life, which involves simulating real time, which means the LOWEST amount of delay per response is absolutely essential to the system."

This cannot be achieved with a request/response system. You need a system that sleeps waiting for an event, and then dispatches that event.

"I like this option (option 2) because I don't know how to program sockets in PHP, only C++. If PHP sockets are easier than C++ sockets, I think Option 1 would be the best option. (I should add I also don't know about spawning separate threads.)"

I'm not familiar with C++. I know most people would recommend writing the sockets in C++ rather than PHP; PHP sockets are extremely simple to write, however.
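As a first step toward the "message to 1 client, then all clients" server, here is a broadcast helper over plain PHP streams. It is a sketch: in the real server the streams would come from `stream_socket_accept()` on a `stream_socket_server()`, with `stream_select()` watching for activity.

```php
<?php
// Send one message to every connected client; returns how many it reached.
function broadcast(array $clients, $message)
{
    $sent = 0;
    foreach ($clients as $client) {
        if (fwrite($client, $message . "\n") !== false) {
            $sent++;
        }
    }
    return $sent;
}
```

Because it only assumes writable streams, the same helper works with sockets in production and with in-memory streams when testing.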
