
Thursday, 23 February 2017

Why so confused with Latency, Bandwidth, Throughput and Response Time

20 comments:

  1. Hello Gagan. Really a nice explanation. Just a query: it is written that throughput is inversely proportional to response time. I think it depends upon the situation; sometimes it is directly proportional as well. :) Correct me if I am wrong. Also, more or less, response time = latency + processing time, same as you explained.

    Replies
    1. "Throughput is inversely proportional to response time" describes the ideal scenario during steady state. During initialisation (user ramp-up), you may see an increase in throughput as well as in response time.

      Correct: response time = latency + processing time

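The formula confirmed in this reply can be sketched in code. A minimal illustration (all timing values below are hypothetical, chosen only to show the relationship):

```python
# Sketch of: response time = latency + processing time.
# Latency is the travel time client -> server or server -> client,
# so the client-observed total is both travel legs plus processing.
# All values are made-up, in milliseconds.

def response_time(request_latency_ms, processing_ms, response_latency_ms):
    """Total response time as seen by the client:
    travel to the server + server processing + travel back."""
    return request_latency_ms + processing_ms + response_latency_ms

# Example: 40 ms each way on the network, 120 ms of server processing.
total = response_time(40, 120, 40)
print(total)  # 200
```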
    2. It's not always the case that throughput is inversely proportional to response time. Initially, maximum hardware capacity is available, so even if throughput increases, response time may not increase. Once hardware utilization rises, processing time on the server side gradually increases; only then is throughput inversely proportional to response time. Otherwise it is not mandatory that throughput is inversely proportional to response time.

    3. Agreed, this is also one of the cases:
      "initially maximum hardware capacity is available, so even if throughput increases, response time may not increase; once hardware utilization rises, processing time on the server side gradually increases"

    4. What is the difference between response time and transaction response time?

    5. Response time is a generic term, whereas transaction response time is specific to a particular transaction.

      In any case, the definition of response time does not change: it is the time taken to get a response from the server.

  2. Hi Gagan,
    In the above comment you have mentioned that "throughput is inversely proportional to response time",

    but in the article you have mentioned "Response time is directly proportional to throughput. If throughput decreases with an increase in response time, then it indicates instability of the application/system."

    This is confusing me. Could you please explain it?

    Replies
    1. I have just repeated the sentence that was written in the query.

      Do not get confused: practically, you can think of increasing throughput (the amount of data transferred) as leading to an increase in response time. The best example scenario is user ramp-up.

      Considering another case: if response time is increasing while throughput stays constant at some level, then a possible cause may be a network bandwidth issue.

      And if response time is increasing along with a decrease in throughput, then the reason may be that the application is unable to handle the user load, an application failure, messages stuck in a queue, etc.

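The cases described in this reply can be summed up as a rough rule-of-thumb classifier. A sketch only; the trend encoding and labels are my own, not taken from any tool:

```python
# Heuristic from the discussion above: interpret the combination of
# response-time trend and throughput trend during a load test.
# Trends are encoded as 'up', 'down', or 'flat' (my own convention).

def diagnose(resp_time_trend, throughput_trend):
    if resp_time_trend == "up" and throughput_trend == "up":
        return "ramp-up: throughput and response time can rise together"
    if resp_time_trend == "up" and throughput_trend == "flat":
        return "possible network bandwidth issue"
    if resp_time_trend == "up" and throughput_trend == "down":
        return "possible application issue: overload, failure, stuck queue"
    return "steady state or healthy behaviour"

print(diagnose("up", "flat"))  # possible network bandwidth issue
```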
  3. The latency explanation is totally wrong. In most benchmark tools it is just the response delivery time. Benchmark tools don't know anything about request processing time, because that depends on the server's internals, so they measure just the request-to-response time.

    Replies
    1. Hi Marek,

      Could you please let me know where I have written that latency contains server processing time?

      As per the definition given in my blog:
      In performance testing terms, the latency of a request is the travel time from client to server, or from server to client.

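To make the travel-time definition concrete, here is a minimal Python sketch that times a TCP handshake round trip against a throwaway local listener. No request is processed, so server processing time is excluded by construction; real load tools measure this at the network level, and this is only an approximation:

```python
import socket
import threading
import time

# Sketch: approximate network latency as the duration of a TCP connect
# (handshake round trip). Against 127.0.0.1 this is near zero; against
# a remote host it reflects actual travel time on the network.

def start_listener():
    """Start a one-shot listener on an ephemeral localhost port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def accept_once():
        conn, _ = srv.accept()
        conn.close()
        srv.close()

    threading.Thread(target=accept_once, daemon=True).start()
    return srv.getsockname()[1]

port = start_listener()
start = time.perf_counter()
with socket.create_connection(("127.0.0.1", port)):
    pass  # handshake done; no payload sent, so no processing time involved
latency_s = time.perf_counter() - start
print(latency_s >= 0.0)  # True
```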
  4. So latency doesn't consider the processing time? It's just the travel time of the request and the response on the network, correct?

  5. Hi, this is the best performance testing website I have come across. Thank you so much!

  6. After reaching 1000 users, the throughput (TPS) of the application becomes constant and does not increase with increasing user load. We are not observing any issue with CPU, and the thread configuration is also set to max.

    What could be the possible issue?

    Replies
    1. Hi Perfingo,

      Possible reasons could be:

      1. Check the network bandwidth.
      2. Check the connection pool at the server (it may have reached its maximum).

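The connection-pool point can be illustrated with simple arithmetic: once every pooled connection is busy, throughput is capped at pool size divided by average service time, no matter how many more users are added. A sketch with made-up numbers:

```python
# Sketch: upper bound on transactions per second when a fixed-size
# connection pool is the bottleneck. Each pooled connection can complete
# at most 1 / avg_service_time_s transactions per second.
# The figures below are hypothetical.

def max_tps(pool_size, avg_service_time_s):
    return pool_size / avg_service_time_s

# Hypothetical: 100 pooled connections, 0.1 s average service time.
# TPS plateaus around 1000 regardless of how many users ramp up.
print(max_tps(100, 0.1))
```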
  7. How can we check latency during a load test?

  8. Refer to:
    https://admhelp.microfocus.com/lr/en/12.60-12.62/help/WebHelp/Content/Analysis/109150_toc_Network_Monitor_graphs.htm

  9. How do we generate a latency graph? Is it possible in LoadRunner or any other tool?

    Replies
    1. To generate a network latency graph you need to integrate a network monitoring tool so that accurate network latency can be calculated.

      Server-to-client delay can be measured in LoadRunner via the Network Delay Time graph.
