Spot the mistake in the conversation below:
A: “What’s the speed of your broadband connection?”
B: “35 Mbps.”
A: “Mine’s 50 Mbps.”
Answer: Both A and B are speaking about the bandwidth of their broadband connections, not the speed.
Did you guess it? You’re not to blame if you didn’t: confusing bandwidth with internet speed is a common mistake. But the truth is that your 35 Mbps connection measures bandwidth: how much data you can receive every second, not how fast it reaches you. The measure of a network’s speed is its latency: how quickly a packet of data reaches you once you have requested it.
True internet speed comes down to a combination of bandwidth and latency: in other words, the capacity of the network to carry all the data we need, and its performance in pushing that data to the destination as quickly as possible.
Let’s say you’re trying to get a group of 50 networking experts from Mumbai to Goa for a conference as quickly as possible. You’ll try to find a spacious bus that can transport them all (bandwidth) without having to make repeat trips, but you’ll also want a fast bus that can make the trip in the minimum time (latency). At the end of the day, how fast the whole group arrives in Goa depends on both the bandwidth and the latency.
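The way the two combine can be sketched with some back-of-the-envelope arithmetic: the time to deliver a payload is roughly the latency (how long the first bit takes to arrive) plus the transmission time (payload size divided by bandwidth). Here is a minimal Python sketch; the function name and parameters are illustrative, not a real API.

```python
def transfer_time(payload_mb: float, bandwidth_mbps: float, latency_ms: float) -> float:
    """Rough total time (in seconds) to deliver a payload over a network.

    Total time = one-way latency (time for the first bit to arrive)
               + transmission time (payload size / bandwidth).
    This is a simplified model: it ignores protocol overhead,
    round trips, and congestion.
    """
    transmission_s = (payload_mb * 8) / bandwidth_mbps  # MB -> megabits, then divide by Mbps
    return latency_ms / 1000 + transmission_s

# A 10 MB file over a 35 Mbps link with 40 ms latency:
# 80 megabits / 35 Mbps ≈ 2.29 s of transmission, plus 0.04 s of latency.
print(f"{transfer_time(10, 35, 40):.2f} s")
```

Notice that for a large file the bandwidth term dominates, while for a tiny request (a single packet) the latency term dominates, which is why both matter.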
Should a network architect opt for the lowest latency or the highest bandwidth? The answer lies somewhere between the two and depends on a number of factors, such as the type of traffic, the type of application, the industry, and more. For a network to perform at optimum levels, it needs to have the appropriate bandwidth allocated to it, and it must also be architected for minimum latency.