Packet Loss and Latency


A basic understanding of how Internet Protocol networks operate will help any decision maker to cut through the marketing spin and weasel words that are too often associated with the speed and performance claims of service providers.


Data Packets, TCP and UDP

In an IP network, data to be sent to another party is first broken up into data packets. Each packet has a header (like an address label) and a payload (the actual piece of data). A good analogy is to think of a data packet just like an Australia Post parcel – the label shows who sent it and where it’s going, and the parcel’s contents are what needs to get to the recipient.

Typical packet sizes are around 1,500 bytes. So when you send someone a 5MB digital photo (5,000,000 bytes), it is broken up into around 3,300 packets and sent one at a time to the other end. The recipient application then re-assembles the packets to display the photo.
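
The packet count above is simple division; as a minimal sketch (assuming the full 1,500 bytes carries data, where real packets reserve a few bytes for headers, so the true count is slightly higher):

```python
# Rough packet count for the 5MB photo above, assuming each packet
# carries the full 1,500 bytes (real packets lose some bytes to headers,
# so the true count is slightly higher).
FILE_SIZE_BYTES = 5_000_000
PACKET_SIZE_BYTES = 1500

packets_needed = -(-FILE_SIZE_BYTES // PACKET_SIZE_BYTES)  # ceiling division
print(packets_needed)  # -> 3334
```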

Internet Protocol uses two main transport techniques to send the data: TCP and UDP. Which is the better choice largely depends on the type of data.

TCP
A device using TCP (for example, your PC or a web server) sends one data packet then waits for the recipient to say “yes I got that” before sending another. If the sender does not get a reply (technically known as an acknowledgement), it re-sends the same packet. Because of this send-acknowledge-send cycle, TCP can guarantee you get the entire file, even if some parts have to be re-sent, so it is nearly always used for data that must arrive 100% intact at the destination but can tolerate some time delay (for example, a digital photo). TCP is used for the majority of Internet data packets.

Packet loss is a measure of how many packets are sent but don’t make it to the other end, usually shown as a percentage of those sent. If there is packet loss on your connection, a device using TCP will need to resend some of the packets. For example, with 10% packet loss, 10% of the data packets will not reach the recipient and 10% of the acknowledgements will not reach the sender, so around 20% of the packets will need to be resent. Your 5MB file will therefore end up being sent as packets totalling around 6MB. If the speed of your connection is 2Mbps, your 5MB file should take 20 seconds to transfer, but in this case it takes 24 seconds because what is sent actually adds up to 6MB. The user’s perception is that the link is only running at around 1.67Mbps, even though it may actually be running at 2Mbps.
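
The worked example above can be sketched as a few lines of arithmetic – with loss rate p in each direction, roughly 2p of the packets end up being resent, inflating what goes on the wire (this is the article’s approximation, not a full TCP model):

```python
# Sketch of the worked example: with loss rate p in each direction,
# roughly 2p of the packets must be resent, inflating the bytes on the wire.
def effective_transfer(file_mb, link_mbps, loss_rate):
    sent_mb = file_mb * (1 + 2 * loss_rate)   # original data plus resends
    seconds = sent_mb * 8 / link_mbps         # MB -> megabits, divided by link speed
    perceived_mbps = file_mb * 8 / seconds    # throughput the user actually experiences
    return seconds, perceived_mbps

seconds, perceived = effective_transfer(file_mb=5, link_mbps=2, loss_rate=0.10)
print(seconds)              # -> 24.0
print(round(perceived, 2))  # -> 1.67
```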

Latency is a measure of how long it takes for a packet to reach its destination, usually shown in milliseconds (ms). A device using TCP will self-regulate its transmission speed based on how long it takes to receive an acknowledgement from the other end – in other words, it will ‘slow down’ if it thinks the network cannot handle a higher speed. This means that even if a connection is rated at 2Mbps, a TCP data transfer may not achieve the full speed if latency is too high.
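
One way to see why latency caps TCP throughput: a sender can only have a limited ‘window’ of unacknowledged data in flight, so it moves at most one window per round trip. The 64KB window below is the classic TCP limit without window scaling – an illustrative assumption, not a figure from the text above:

```python
# Throughput ceiling from the acknowledgement round trip:
# at most one window of bytes can be moved per round-trip time (RTT).
# 65,535 bytes is the classic un-scaled TCP window (illustrative assumption).
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# With 300ms of round-trip latency, even a 2Mbps link cannot be filled:
print(round(max_tcp_throughput_mbps(65_535, 300), 2))  # -> 1.75
```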

Many broadband users do not realise that, because TCP is a ‘two-way conversation’, the performance of their download can be impacted by the performance of their upload, and vice-versa. For example, on an ADSL2+ connection, if the upload link is being ‘choked’ by one user sending a large file, acknowledgement packets associated with your download (although small) may be delayed or dropped, causing re-sending of data and a slowing of the download transfer.

UDP
A device using UDP sends data packets one after the other and does not wait for or expect any acknowledgements from the other end. If the data packet does not arrive at the other end, it does not resend it. By removing the acknowledgements, the sending device does not slow down, rather it ‘trusts’ the network to get the packets to the other end. UDP is usually only used for time-critical applications such as VoIP or video conferencing, where there must be no delays, but where a few dropped packets can be tolerated.

If packet loss occurs when UDP is being used, the recipient will be missing blocks of data. If this data is carrying VoIP or video, part of the call or image may be unintelligible. Many VoIP services noticeably degrade above 1% packet loss, and become difficult to use above 2%.

If latency occurs when UDP is being used, the recipient may not receive the packets in time. Many VoIP services noticeably degrade above 100ms of latency, and become difficult to use above 200ms. Further, because the sender does not realise latency exists, it does not slow down its transmission. This may then introduce packet loss if the link is at or near maximum capacity, causing further difficulties for the application in use.
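
The thresholds quoted in the last two paragraphs (1%/2% loss, 100ms/200ms latency) can be turned into a rough rule of thumb; the helper below is hypothetical, using only the figures from the text:

```python
# Hypothetical helper mapping the quoted thresholds to a rough verdict
# on VoIP call quality (1%/2% packet loss, 100ms/200ms latency).
def voip_quality(loss_pct, latency_ms):
    if loss_pct > 2 or latency_ms > 200:
        return 'difficult to use'
    if loss_pct > 1 or latency_ms > 100:
        return 'noticeably degraded'
    return 'acceptable'

print(voip_quality(0.5, 80))   # -> acceptable
print(voip_quality(1.5, 120))  # -> noticeably degraded
print(voip_quality(0.5, 250))  # -> difficult to use
```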


What causes packet loss and latency?

Many think of delays (latency) as being caused by the physical attributes of the connection (for example, the distance the data has to travel, or the fact that it is ‘wireless’), but in reality this plays only a small part in the overall end-to-end delay. Most connections actually carry signals at near the speed of light (including wireless), so in theory data should be able to circumnavigate the globe in around 133ms.
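
That ~133ms figure is simply Earth’s circumference divided by the speed of light:

```python
# Checking the ~133ms figure: Earth's circumference over the speed of light.
EARTH_CIRCUMFERENCE_KM = 40_075
SPEED_OF_LIGHT_KM_PER_S = 299_792

circumnavigation_ms = EARTH_CIRCUMFERENCE_KM / SPEED_OF_LIGHT_KM_PER_S * 1000
print(round(circumnavigation_ms, 1))  # -> 133.7
```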

However, in reality this is not achieved on the Internet because of two primary factors:
(i) the capability of the equipment involved in processing the data packets; and
(ii) the available capacity on the links between those devices.

Consider these examples:

Example 1: ‘Uplink’ speed

If a service provider’s router at the telephone exchange receives data from 100 subscribers at the rate of 2Mbps each, it needs 200Mbps of ‘uplink’ from the exchange to its core network to process 100% of the packets without introducing any delay.

If the uplink speed is only 100Mbps, the router may receive more packets than it can put on its uplink. In this case, most routers are configured to hold on to a packet for a while in the hope that capacity becomes available on the uplink, but once that time expires, the packet is dropped. First latency, and then packet loss, is introduced into the network.
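
A toy model of this example shows both effects in sequence: traffic arrives faster than the uplink can drain, the router queues the excess (latency), and drops once the queue is full (loss). The 50MB queue limit below is illustrative, not from any real router:

```python
# Toy model of Example 1: excess traffic is queued until the buffer fills,
# after which the overflow is dropped. Figures are illustrative.
def run_uplink(arrival_mbps, uplink_mbps, queue_limit_mb, seconds):
    queued_mb = 0.0
    dropped_mb = 0.0
    for _ in range(seconds):
        queued_mb += max(arrival_mbps - uplink_mbps, 0) / 8  # excess megabits -> MB each second
        if queued_mb > queue_limit_mb:
            dropped_mb += queued_mb - queue_limit_mb         # overflow is lost
            queued_mb = queue_limit_mb
    return queued_mb, dropped_mb

# 100 subscribers at 2Mbps feeding a 100Mbps uplink:
queued, dropped = run_uplink(arrival_mbps=200, uplink_mbps=100, queue_limit_mb=50, seconds=10)
print(queued, dropped)  # -> 50.0 75.0
```

After four seconds the 50MB buffer is full (packets are waiting, i.e. latency); every second after that, 12.5MB of traffic is simply dropped.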

Example 2: too many packets
Most networking equipment is designed for a particular purpose, market and budget. For example, carrier-grade equipment is designed to process large numbers of packets with little delay, whereas an SME firewall may have limited CPU and memory but be perfectly adequate for speeds up to 100Mbps. If a device is asked to process more packets than it is designed to handle, perhaps because of CPU, memory or chipset limitations, it will simply drop those packets it can’t process. For a carrier processing thousands of connections and many gigabits of data per second, the capability of the equipment is critical.


It’s the implementation’s fault, not the technology!

It’s a simple fact that most packet loss and latency is caused by a lack of bandwidth within the network (either deliberately, by way of contention, or unintentionally, by way of poor capacity planning) or is introduced by under-powered or overworked equipment.

No technology suffers inherently from latency or packet loss – including wireless! It’s nearly all down to the specific implementation of the service provider.

If your provider claims that:

(i) your service is not contended, and
(ii) they have sufficient capacity within their network, and
(iii) their equipment is ‘carrier-grade’

then they should have no problem in providing you with a packet loss and latency guarantee!



Copyright © 2015 Pacific Wireless