How to calculate network throughput

Written by Jackson Lewis

Network throughput is the average rate of successful data or message delivery over a communications link, measured in bits per second (bps). A common misconception is that timing the upload or download of a large file gives a network's maximum throughput. That method does not account for communications overhead such as the TCP receive window size, machine limitations, or network latency. Maximum TCP throughput equals the TCP window size divided by the round-trip time (RTT) of the connection.

Convert the TCP window size from bytes to bits. 64 KB (65,536 bytes) is the default TCP window size on computers running the Windows operating system. To convert the window size to bits, multiply the number of bytes by eight: 65,536 bytes x 8 = 524,288 bits.
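This conversion can be sketched in Python (variable names are my own, not part of the article):

```python
# Convert a 64 KB TCP window size to bits.
# 1 KB = 1,024 bytes; 1 byte = 8 bits.
window_bytes = 64 * 1024          # 65,536 bytes
window_bits = window_bytes * 8    # 524,288 bits
print(window_bits)                # 524288
```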

Divide the TCP window size in bits by the round-trip latency of the network path. For this example, use a latency of 60 milliseconds (0.060 seconds): 524,288 bits / 0.060 seconds = 8,738,133 bits per second.
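The same step in Python, using the window size from Step 1 and the example latency (the 60 ms figure is the article's example, not a measured value):

```python
window_bits = 524_288    # TCP window size in bits, from Step 1
latency_s = 0.060        # round-trip time: 60 milliseconds

throughput_bps = window_bits / latency_s
print(round(throughput_bps))  # 8738133 bits per second
```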

Convert the result from Step 2 to megabits per second by dividing it by 1,000,000. In this example, the maximum network throughput is 8.738 Mbps, with the main limitation being the high latency of the connection.
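The three steps above can be combined into one small helper. This is a sketch of the article's formula; the function name and parameters are my own:

```python
def max_tcp_throughput_mbps(window_kb: float, rtt_s: float) -> float:
    """Maximum TCP throughput in Mbps: window size (bits) / RTT (s) / 1e6."""
    window_bits = window_kb * 1024 * 8   # KB -> bytes -> bits
    return window_bits / rtt_s / 1_000_000

# Article's example: 64 KB window, 60 ms round-trip time.
print(round(max_tcp_throughput_mbps(64, 0.060), 3))  # 8.738
```

Raising the window size or lowering the round-trip time raises the ceiling; on a high-latency link, the RTT is usually the dominant factor, as the example shows.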

