Poor man's network bandwidth detection technique

I am currently working on an online e-learning/collaboration tool that features all the bells and whistles that one would normally expect of such a tool. The basic functioning of the application is fairly straightforward in that it lets a presenter collaborate with a set of participants by passing messages around through a web server. There was a recent requirement that the application automatically detect the network bandwidth available to a user and provide appropriate warnings on finding that it is less than the minimum required. The idea is to measure the net time it takes for a message to travel from the presenter to the web server and then from the web server to the participant (and vice versa).

The first thing that you would probably try is:

  • have the presenter put a timestamp on the message that she sends (let's call this PrTS - for presenter timestamp)
  • have the server put another timestamp before forwarding it to the participant (this would be SvrTS - for server timestamp)
  • and finally, let the participant mark the time of receipt of the message (let's call this PaTS - for participant timestamp) and do a little arithmetic to figure out the net latency; basically, the net latency from presenter to participant is a simple matter of subtracting PrTS from PaTS (see the sketch below)
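
Here is a minimal sketch of this first approach, assuming timestamps are plain seconds-since-epoch floats; the function names and the message layout are hypothetical, not taken from the actual application:

    import time

    def presenter_send(message: dict) -> dict:
        # Presenter stamps the message with her local wall-clock time (PrTS).
        message["PrTS"] = time.time()
        return message

    def server_forward(message: dict) -> dict:
        # Server adds its own timestamp (SvrTS) before forwarding.
        message["SvrTS"] = time.time()
        return message

    def participant_receive(message: dict) -> float:
        # Participant records the time of receipt (PaTS) and computes the
        # apparent presenter-to-participant latency.
        pa_ts = time.time()
        return pa_ts - message["PrTS"]  # only meaningful if all three clocks agree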

Perfect. Except that the whole thing comes crashing down when you realize that:

  1. the presenter, the server and the participant could be in different time-zones (but this can of course be handled), and
  2. the presenter and the participant might have set their clocks to the previous (or maybe the next) century!

Basically, this system requires that the clocks on all three computers be accurately synchronized with one another. So we went back to the drawing board and came up with this, IMO nifty little approach:

  • the presenter sends out a message of a fixed size with a timestamp - i.e. with presenter's local time (let's call this PrTS1)
  • the server plonks its timestamp on to the message before forwarding it to the participant (let's call this one SvrTS1)
  • the participant receives the message, marks the time of receipt and just sits pretty (let's call this PaTS1)
  • the presenter, after sending the first message, waits for a random interval (say 5 seconds) and sends out a second message, again with a timestamp (let's call this PrTS2)
  • the server, as before, puts its time of receipt on the message and forwards it to the participant (SvrTS2)
  • the participant, upon receiving this second message, records the time of receipt (PaTS2) and does the following arithmetic to figure out the latency

    Presenter to Server latency (PrSvrL) = ( SvrTS2 - SvrTS1 ) - ( PrTS2 - PrTS1 )
    Server to Participant latency (SvrPaL) = ( PaTS2 - PaTS1 ) - ( SvrTS2 - SvrTS1 )
    And finally, Presenter to Participant latency (PrPaL) = PrSvrL + SvrPaL

Now I know this sounds complicated, but really, it isn't. Work it out (or see the sketch below); it seems to work! :)
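
Here is a minimal sketch of the second scheme, again assuming timestamps are plain seconds-since-epoch floats; the function name and parameter names are made up for illustration and simply mirror the PrTS/SvrTS/PaTS labels above:

    def presenter_to_participant_latency(pr_ts1, pr_ts2,    # presenter's clock
                                         svr_ts1, svr_ts2,  # server's clock
                                         pa_ts1, pa_ts2):   # participant's clock
        # Presenter-to-server: compare the gap between the two messages as seen
        # by the server with the gap as sent by the presenter. Each difference
        # is taken on a single machine's clock, so time-zones and wrongly set
        # clocks cancel out.
        pr_svr_l = (svr_ts2 - svr_ts1) - (pr_ts2 - pr_ts1)
        # Server-to-participant: the same idea, one hop further along.
        svr_pa_l = (pa_ts2 - pa_ts1) - (svr_ts2 - svr_ts1)
        return pr_svr_l + svr_pa_l

    # Example with made-up numbers: the presenter sends the messages 5 s apart,
    # the server sees them 5.25 s apart, the participant sees them 5.75 s apart.
    print(presenter_to_participant_latency(0.0, 5.0,
                                           100.0, 105.25,
                                           200.0, 205.75))  # prints 0.75

Note that the server terms cancel in the sum, so the final figure boils down to ( PaTS2 - PaTS1 ) - ( PrTS2 - PrTS1 ); the two intermediate values are still handy if you want the presenter-to-server and server-to-participant legs separately.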

[Updated - 27 May, 2006]

Well, some further analysis reveals that this algorithm does not in fact measure latency. What it does measure, however, is jitter, i.e., variations in latency. We are only measuring the difference between the latencies of the first message and the second message, not the latency itself. Sigh!
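
To make that concrete, here is the same arithmetic with made-up numbers where both messages happen to take exactly the same time on each leg; the clocks are wildly out of sync, which, as intended, does not matter, but the real delays vanish too:

    # Both messages take identical time on each leg, so every clock sees the
    # same 5-second gap between them; the clock offsets themselves never matter.
    pr_ts1, pr_ts2 = 0.0, 5.0               # presenter's clock
    svr_ts1, svr_ts2 = 1000.25, 1005.25     # server's clock (arbitrary offset)
    pa_ts1, pa_ts2 = 2000.75, 2005.75       # participant's clock (another offset)

    pr_svr_l = (svr_ts2 - svr_ts1) - (pr_ts2 - pr_ts1)   # 5.0 - 5.0 = 0.0
    svr_pa_l = (pa_ts2 - pa_ts1) - (svr_ts2 - svr_ts1)   # 5.0 - 5.0 = 0.0
    print(pr_svr_l + svr_pa_l)  # 0.0, even though each message really took time to arrive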
