How to calculate network throughput

Updated April 17, 2017

Network throughput refers to the average rate of successful data or message delivery over a specific communications link, measured in bits per second (bps). A common misconception is that timing the upload or download of a large file measures the maximum throughput of a network. That method does not take into account communications overhead such as the TCP receiver window size, machine limitations or network latency. Maximum network throughput equals the TCP window size divided by the round-trip time of communications data packets.
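The formula above can be sketched as a small Python function. The function name and parameters are illustrative, not from the original article:

```python
def max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Estimate maximum TCP throughput in bits per second.

    window_bytes: TCP window size in bytes (e.g., 65,536 for 64 KB).
    rtt_seconds:  round-trip time of the network path in seconds.
    """
    # Convert the window from bytes to bits, then divide by the round-trip time.
    return (window_bytes * 8) / rtt_seconds
```

For example, `max_throughput_bps(65536, 0.060)` returns roughly 8.74 million bits per second, matching the worked example in the steps below.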

Convert the TCP window size from bytes to bits: 64 KB is the default TCP window size for computers running the Windows operating system. To convert the window size to bits, multiply the number of bytes by eight. 64 KB is 65,536 bytes, and 65,536 bytes x 8 = 524,288 bits.

Divide the TCP window size in bits by the network path latency. For this example, use a round-trip latency of 60 milliseconds. 524,288 bits / 0.060 seconds ≈ 8,738,133 bits per second.

Convert the result from step 2 to megabits per second by dividing by 1,000,000. In this example, the maximum network throughput is 8.738 Mbps, and the main limitation is the high latency of the network connection.
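The three steps above can be reproduced directly in Python; the variable names are illustrative, and the figures follow the article's example:

```python
# Step 1: convert the default 64 KB TCP window size from bytes to bits.
window_bytes = 64 * 1024          # 64 KB = 65,536 bytes
window_bits = window_bytes * 8    # 524,288 bits

# Step 2: divide by the round-trip latency (60 ms in this example).
rtt_seconds = 0.060
throughput_bps = window_bits / rtt_seconds   # about 8,738,133 bits per second

# Step 3: convert to megabits per second.
throughput_mbps = throughput_bps / 1_000_000
print(f"Maximum throughput: {throughput_mbps:.3f} Mbps")
```

Running this prints a maximum throughput of 8.738 Mbps, confirming that the bottleneck in this example is the latency rather than the window size.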


About the Author

Based in Memphis, Jackson Lewis has been writing on technology-related material for 10 years with a recent emphasis on golf and other sports. He has been freelance writing for Demand Media since 2008. Lewis holds a Master of Science in computer science from the United States Naval Postgraduate School.