The content has been moved to the PerfMatrix site.
Link: https://www.perfmatrix.com/latency-bandwidth-throughput-and-response-time/
Related Topics:
- Result Analysis - Basic Level
- Result Analysis - Intermediate Level
- Result Analysis - Advance Level
- Performance Testing Basics
- Performance Engineering Basics
Hello Gagan, really a nice explanation. Just a query: it is written that throughput is inversely proportional to response time. I think it depends upon the situation; sometimes it is directly proportional as well. :) Correct me if I am wrong. Also, more or less, response time = latency + processing time, same as you explained.
"Throughput is inversely proportional to response time" is the ideal scenario during the steady state. During initialisation (user ramp-up), you may see an increase in throughput as well as in response time.
Correct: response time = latency + processing time
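As a minimal illustration of this decomposition (the function name and the numbers are hypothetical, purely for illustration):

```python
# Sketch of the relationship: response time = latency + processing time.
# The values below are made-up example figures, not real measurements.

def response_time(network_latency_ms: float, processing_time_ms: float) -> float:
    """Total response time as seen by the client: round-trip travel time
    (client -> server -> client) plus server-side processing time."""
    return network_latency_ms + processing_time_ms

# Example: 40 ms round-trip network latency + 160 ms server processing
total = response_time(40.0, 160.0)
print(total)  # 200.0
```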
It is not always the case that throughput is inversely proportional to response time. Initially, maximum hardware is available, so response time may not increase even as throughput increases. Once hardware utilisation rises, server-side processing time gradually increases; only at that point does throughput become inversely proportional to response time. So it is not mandatory that throughput is inversely proportional to response time.
Agreed, this is also one of the cases:
"Initially maximum hardware is available, so response time may not increase even if throughput increases; once hardware utilisation increases, processing time at the server side gradually increases."
What is the difference between response time and transaction response time?
Response time is a generic term. On the other hand, transaction response time is specific to a particular transaction.
In any case, the definition of response time does not change, i.e. the time taken to get the response from a server.
Hi Gagan
In the above comment you have mentioned that "throughput is inversely proportional to response time",
but in the article you have mentioned "Response time is directly proportional to throughput. If throughput decreases with an increase in response time, then it indicates instability of the application/system."
It is confusing me. Could you please explain?
I have just repeated the sentence which was written in the query.
Do not get confused: practically, you can think of increasing throughput (the amount of data transferred) as leading to an increase in response time. The best scenario is user ramp-up.
Considering another case: if response time keeps increasing while throughput stays constant at a certain level, then the possible cause may be a network bandwidth issue.
And if response time increases along with a decrease in throughput, then the reason may be that the application is unable to handle the user load, an application failure, stuck messages in a queue, etc.
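The steady-state behaviour discussed above can be made precise with Little's Law (concurrency = throughput × response time): at a fixed number of concurrent users, throughput is inversely proportional to response time, whereas during ramp-up both can rise together because concurrency itself is growing. A small sketch with hypothetical numbers:

```python
# Little's Law: concurrent_users = throughput * response_time
# => throughput = concurrent_users / response_time
# All figures below are made up, for illustration only.

def throughput_tps(concurrent_users: int, response_time_s: float) -> float:
    """Steady-state throughput (transactions per second) implied by Little's Law."""
    return concurrent_users / response_time_s

# Steady state at 100 users: response time doubles -> throughput halves
print(throughput_tps(100, 0.5))  # 200.0
print(throughput_tps(100, 1.0))  # 100.0

# During ramp-up, users grow too, so throughput AND response time can rise together
print(throughput_tps(400, 1.0))  # 400.0
```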
The latency explanation is totally wrong. In most benchmark tools it is just the response delivery time. Benchmark tools don't know anything about request processing time, because it depends on the server's internals, so they measure just the request-to-response time.
Hi Marek,
Could you please let me know where I have written that latency contains server processing time?
As per the definition given in my blog:
In performance testing terms, the latency of a request is its travel time from client to server or from server to client.
So latency doesn't consider the processing time? It's just the time taken for the request and response to travel between the client and the server, correct?
Correct
Hi, this is the best performance testing website I have come across. Thank you so much!
Welcome!
After reaching 1000 users, the throughput (TPS) of the application becomes constant and does not increase with increasing user load. We are not observing any issue with CPU, and the thread configuration is also set to the maximum.
What could be the possible issue?
Hi Perfingo,
Possible reasons could be:
1. Check the network bandwidth.
2. Check the connection pool at the server (it may have reached its maximum).
How can we check latency during a Load Test?
Refer:
https://admhelp.microfocus.com/lr/en/12.60-12.62/help/WebHelp/Content/Analysis/109150_toc_Network_Monitor_graphs.htm
How to generate a latency graph? Is it possible in LoadRunner or any other tool?
To generate a network latency graph, you need to integrate a network monitoring tool so that accurate network latency can be calculated.
Server-to-client delay can be measured in LoadRunner via the Network Delay Time graph.
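Outside any specific tool, you can get a rough feel for network latency by timing the TCP connect (handshake), which excludes server processing time. The sketch below is my own illustration, not from the article; it spins up a trivial local server so it is self-contained, but against a real system you would point `connect_latency_ms()` at your server's host and port:

```python
# Rough sketch: approximate round-trip network latency with TCP connect time.
# Full response time would additionally include request transfer, server
# processing, and response transfer.
import socket
import threading
import time

def connect_latency_ms(host: str, port: int) -> float:
    """Time the TCP three-way handshake: a rough proxy for network latency."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000.0

# --- self-contained demo against a throwaway local server ---
server = socket.socket()
server.bind(("127.0.0.1", 0))   # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

def accept_once():
    conn, _ = server.accept()
    conn.close()

threading.Thread(target=accept_once, daemon=True).start()
latency = connect_latency_ms("127.0.0.1", port)
server.close()
print(f"approx latency: {latency:.3f} ms")  # loopback, so this will be tiny
```

Against a remote host this measures real network round-trip, which is why load testing tools typically rely on a dedicated network monitor for accurate per-hop latency, as mentioned above.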