Packet loss

Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is caused either by errors in data transmission, typically across wireless networks,[1][2] or by network congestion.[3] Packet loss is measured as a percentage of packets lost with respect to packets sent.
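As a minimal illustration of that measurement, in Python (the packet counts here are hypothetical):

```python
def packet_loss_percent(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent."""
    if sent <= 0:
        raise ValueError("no packets sent")
    return 100.0 * (sent - received) / sent

# Hypothetical sample: 1000 packets sent, 985 received -> 1.5% loss
loss = packet_loss_percent(1000, 985)
```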

The Transmission Control Protocol (TCP) detects packet loss and performs retransmissions to ensure reliable messaging. Packet loss in a TCP connection is also used to avoid congestion and thus produces an intentionally reduced throughput for the connection.

In streaming media and online game applications, packet loss can affect a user's quality of experience (QoE).

Causes

The Internet Protocol (IP) is designed according to the end-to-end principle as a best-effort delivery service, with the intention of keeping the logic that routers must implement as simple as possible. If the network made reliable delivery guarantees on its own, it would require store-and-forward infrastructure in which each router devoted significant storage space to each packet while it waited to verify that the next node had properly received it. Even then, a reliable network would not be able to maintain its delivery guarantees in the event of a router failure. Nor is reliability needed for all applications: with live streaming media, for example, it is more important to deliver recent packets quickly than to ensure that stale packets are eventually delivered. An application or user may also decide to retry an operation that is taking a long time, in which case another set of packets is added to the burden of delivering the original set. Such a network might also need a command and control protocol for congestion management, adding even more complexity.

To avoid all of these problems, the Internet Protocol allows routers to simply drop packets if the router or a network segment is too busy to deliver the data in a timely fashion. This is not ideal for speedy and efficient transmission of data, and is not expected to happen in an uncongested network.[4] Dropped packets act as an implicit signal that the network is congested, and may cause senders to reduce the amount of bandwidth they consume or to attempt to find another path. For example, using perceived packet loss as feedback to discover congestion, the Transmission Control Protocol (TCP) is designed so that excessive packet loss will cause the sender to throttle back and stop flooding the bottleneck point with data.[5]
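TCP's throttling reaction to loss can be sketched with the additive-increase/multiplicative-decrease (AIMD) control law used during congestion avoidance. The model below is a deliberate simplification: it ignores slow start, timeouts, and real TCP's byte-level segment accounting, and the per-round-trip loss events are illustrative inputs.

```python
def aimd_window(loss_events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Simplified AIMD: grow the congestion window by `increase` each
    round trip; multiply it by `decrease` (halve it) whenever a loss
    is perceived. Returns the window size after each round trip."""
    history = []
    for loss in loss_events:  # one entry per round trip: True if loss seen
        cwnd = cwnd * decrease if loss else cwnd + increase
        history.append(cwnd)
    return history
```

Running it on two loss-free round trips followed by one with a perceived loss shows the characteristic sawtooth: the window builds linearly, then collapses on loss.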

Packets may also be dropped if the IPv4 header checksum or the Ethernet frame check sequence indicates the packet has been corrupted. Packet loss can also be caused by a packet drop attack.
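The corruption check mentioned above can be illustrated with the IPv4 header checksum defined in RFC 791: the ones'-complement of the ones'-complement sum of the header taken as 16-bit words. A receiver that computes a mismatching checksum discards the packet. A sketch in Python (the sample header bytes are the standard textbook example):

```python
def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement of the ones'-complement sum of the header's
    16-bit big-endian words (checksum field zeroed before computing)."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

Computing the checksum over a header whose checksum field already holds the correct value yields zero, which is how a receiver verifies integrity.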

Wireless networks

Wireless networks are susceptible to a number of factors that can corrupt or lose packets in transit, such as radio frequency interference (RFI),[6] radio signals that are too weak due to distance or multi-path fading, faulty networking hardware, or faulty network drivers.

Wi-Fi is inherently unreliable; even when two identical Wi-Fi receivers are placed in close proximity to each other, they do not exhibit the similar patterns of packet loss that one might expect.[7]

Cellular networks can experience packet loss caused by "high bit error rate (BER), unstable channel characteristics, and user mobility."[8] TCP's intentional throttling behavior prevents wireless networks from performing near their theoretical potential transfer rates, because unmodified TCP treats all dropped packets as if they were caused by network congestion and so may throttle wireless networks even when they are not actually congested.[8]

Network congestion

Network congestion is a cause of packet loss that can affect all types of networks. When traffic arrives at a given router or network segment for a sustained period at a rate greater than it can be sent through, there is no option but to drop packets.[3] If a single router or link constrains the capacity of the complete travel path, or of network travel in general, it is known as a bottleneck. In some cases, packets are intentionally dropped by routing routines[9] or through network dissuasion techniques for operational management purposes.[10]

Effects

Packet loss directly reduces throughput for a given sender, as some sent data is never received and cannot be counted as throughput. Packet loss indirectly reduces throughput, as some transport layer protocols interpret loss as an indication of congestion and adjust their transmission rate to avoid congestive collapse.

When reliable delivery is necessary, packet loss increases latency due to additional time needed for retransmission.[note 1] Assuming no retransmission, packets experiencing the worst delays might be preferentially dropped (depending on the queuing discipline used), resulting in lower latency overall.

Measurement

Packet loss may be measured as frame loss rate defined as the percentage of frames that should have been forwarded by a network but were not.[11]

Acceptable packet loss

Packet loss is closely associated with quality of service considerations, and is related to the erlang unit.

The amount of packet loss that is acceptable depends on the type of data being sent. For example, for voice over IP traffic, one commentator reckoned that "[m]issing one or two packets every now and then will not affect the quality of the conversation. Losses between 5% and 10% of the total packet stream will affect the quality significantly."[12] Another described less than 1% packet loss as "good" for streaming audio or video, and 1-2.5% as "acceptable".[13] On the other hand, when transmitting a text document or web page, a single dropped packet could result in losing part of the file, which is why a reliable delivery protocol would be used for this purpose (to retransmit dropped packets).

Diagnosis

Packet loss is detected by reliable transport protocols such as TCP, but because such protocols react to packet loss automatically, when a person such as a network administrator needs to detect and diagnose packet loss, they typically use a purpose-built tool.[14] In addition to special testing tools, many routers have status pages or logs where the owner can find the number or percentage of packets dropped over a particular period.

For remote detection and diagnosis, the Internet Control Message Protocol (ICMP) provides an echo function: a node that receives an echo request packet returns an echo reply. Tools such as ping, traceroute, and MTR use this protocol; traceroute and MTR additionally limit the time to live of successive probes so that each router along the path reveals itself, allowing the path packets take to be mapped and packet loss to be measured at each hop.
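The per-hop loss measurement these tools perform reduces to counting how many probes draw a reply. A schematic sketch follows; the `probe` callable standing in for an actual ICMP echo exchange is an assumption of this example, since real tools send echo requests over raw sockets.

```python
def probe_loss(probe, count=10):
    """Estimate packet loss by sending `count` probes through a
    caller-supplied probe() callable that returns True when a reply
    arrives. Mirrors the loss percentage that ping reports."""
    replies = sum(1 for _ in range(count) if probe())
    return 100.0 * (count - replies) / count
```

For example, a hop that answers 9 of 10 probes would be reported as 10% loss.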

In some cases, these tools may indicate drops for probes that terminate at intermediate hops even though packets to the final destination are getting through. For example, routers may give ICMP echo traffic low priority and drop it preferentially in favor of spending resources on genuine data; this is generally considered an artifact of testing and can be ignored in favor of end-to-end results.[15]

Packet recovery for reliable delivery

The Internet Protocol leaves responsibility for the retransmission of dropped packets, or "packet recovery", to the endpoints: the computers sending and receiving the data. They are in the best position to decide whether retransmission is necessary, because the application sending the data should know whether speed is more important than reliability, whether a message should be re-attempted in whole or in part, whether the need to send the message has passed, and how to vary the amount of bandwidth consumed to account for any congestion.

Some network transport protocols such as TCP provide endpoints with an easy way to ensure reliable delivery of packets, so that individual applications do not need to implement the logic themselves. In the event of packet loss, the receiver asks for retransmission or the sender automatically resends any segments that have not been acknowledged.[16] Although TCP can recover from packet loss, retransmitting missing packets reduces the throughput of the connection. This drop in throughput is due to the sliding window protocols used for acknowledgment of received packets. In certain variants of TCP, if a transmitted packet is lost, it will be re-sent along with every packet that had already been sent after it, which further reduces the connection's overall throughput.
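The behavior of those TCP variants corresponds to Go-Back-N retransmission. A simplified simulation (the segment counts, window size, and single-loss scenario are illustrative, and retransmissions are assumed to succeed) shows how one lost segment inflates the total number of transmissions:

```python
def go_back_n_transmit(n_segments, lost, window=4):
    """Go-Back-N: when a segment is lost, the sender retransmits that
    segment and every segment sent after it. `lost` holds sequence
    numbers dropped on first transmission. Returns the send log."""
    log, base = [], 0
    dropped = set(lost)
    while base < n_segments:
        window_end = min(base + window, n_segments)
        log.extend(range(base, window_end))       # send the current window
        lost_now = sorted(s for s in range(base, window_end) if s in dropped)
        if lost_now:
            dropped.difference_update(lost_now)   # assume retransmits succeed
            base = lost_now[0]                    # go back to the lost segment
        else:
            base = window_end
    return log
```

With six segments and a single loss of segment 1, the sender ends up transmitting nine segments instead of six, which is the throughput penalty the text describes.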

Protocols such as User Datagram Protocol (UDP) provide no recovery for lost packets. Applications that use UDP are expected to define their own mechanisms for handling packet loss.
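One common application-level approach over UDP, sketched here under the assumption that the sender numbers each datagram, is for the receiver to infer loss from gaps in the sequence numbers; whether to ignore a gap, conceal it, or request the data again is left to the application.

```python
def detect_gaps(received_seqs):
    """Return the sequence numbers missing between the lowest and
    highest datagram sequence numbers the receiver has seen."""
    if not received_seqs:
        return []
    seen = set(received_seqs)
    return [s for s in range(min(seen), max(seen) + 1) if s not in seen]
```

For instance, receiving datagrams 0, 1, 3, 4, and 7 implies that 2, 5, and 6 were lost (or are still in flight, which is why real applications also apply a reordering timeout before declaring loss).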

Impact of queuing discipline

There are many queuing disciplines used for determining which packets to drop. Most basic networking equipment uses FIFO queuing for packets waiting to go through a bottleneck, and drops a packet if the queue is full when the packet arrives. This type of packet dropping is called tail drop. Other full-queue mechanisms include random eviction and weighted random eviction. Dropping packets is undesirable, as the packet is either lost or must be retransmitted, which can impact real-time throughput; however, increasing the buffer size can lead to bufferbloat, which has its own impact on latency and jitter during congestion.
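Tail drop can be modeled in a few lines; this sketch simulates only the buffer behavior, not actual forwarding, and the capacity is an arbitrary example value.

```python
from collections import deque

class TailDropQueue:
    """FIFO queue with tail drop: arrivals are discarded when the
    buffer is full, as in most basic networking equipment."""
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.buf) >= self.capacity:
            self.dropped += 1        # tail drop: newest arrival discarded
            return False
        self.buf.append(pkt)
        return True

    def dequeue(self):
        return self.buf.popleft() if self.buf else None
```

Packets already queued are untouched; only new arrivals are sacrificed, which is what gives tail drop its bias against bursty senders.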

In cases where quality of service is rate limiting a connection, packets may be intentionally dropped in order to slow down specific services to ensure available bandwidth for other services marked with higher importance (like those used in the leaky bucket algorithm). For this reason, packet loss is not necessarily an indication of poor connection reliability or signs of a bandwidth bottleneck.
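A rate limiter of the kind described is often modeled as a leaky bucket used as a meter; the following sketch (the capacity and leak-rate values in the test are arbitrary) drops packets deliberately once the bucket would overflow.

```python
class LeakyBucket:
    """Leaky-bucket policer: the bucket drains at a fixed rate, and an
    arrival that would overflow it is dropped. Such loss reflects
    intentional rate limiting, not a faulty link."""
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity
        self.leak_rate = leak_rate   # units drained per unit of time
        self.level = 0.0
        self.last = 0.0

    def allow(self, now, size=1.0):
        # drain the bucket for the time elapsed since the last arrival
        self.level = max(0.0, self.level - (now - self.last) * self.leak_rate)
        self.last = now
        if self.level + size > self.capacity:
            return False             # intentional drop: bucket would overflow
        self.level += size
        return True
```

A burst that fills the bucket is policed immediately, but after the bucket has drained for a while the same sender is admitted again, which is why this kind of loss says nothing about link reliability.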

Notes

  1. During typical network congestion, not all packets in a stream are dropped. This means that undropped packets will arrive with low latency compared to retransmitted packets, which arrive with high latency. Not only do the retransmitted packets have to travel part of the way twice, but the sender will not realize the packet has been dropped until it either fails to receive acknowledgement of receipt in the expected order, or fails to receive acknowledgement for a long enough time that it assumes the packet has been dropped as opposed to merely delayed.

References

  3. Kurose, J.F. & Ross, K.W. (2010). Computer Networking: A Top-Down Approach. New York: Addison-Wesley. p. 36.
  4. Kurose, J.F. & Ross, K.W. (2010). Computer Networking: A Top-Down Approach. New York: Addison-Wesley. pp. 42–43. The fraction of lost packets increases as the traffic intensity increases. Therefore, performance at a node is often measured not only in terms of delay, but also in terms of the probability of packet loss…a lost packet may be retransmitted on an end-to-end basis in order to ensure that all data are eventually transferred from source to destination.
  5. Kurose, J.F. & Ross, K.W. (2010). Computer Networking: A Top-Down Approach. New York: Addison-Wesley. pp. 282–283.
  6. David C. Salyers; Aaron Striegel; Christian Poellabauer. Wireless Reliability: Rethinking 802.11 Packet Loss (PDF). Retrieved 2019-05-12.
  7. David C. Salyers; Aaron Striegel; Christian Poellabauer. Wireless Reliability: Rethinking 802.11 Packet Loss (PDF). Retrieved 2019-03-09.
  8. Ye Tian; Kai Xu; Nirwan Ansari (March 2005). "TCP in Wireless Environments: Problems and Solutions" (PDF). IEEE Radio Communications. IEEE.
  9. Perkins, C.E. (2001). Ad Hoc Networking. Boston: Addison-Wesley. p. 147.
  10. Vahab Pournaghshband; Leonard Kleinrock; Peter Reiher; Alexander Afanasyev. "Controlling Applications by Managing Network Characteristics". ICC 2012.
  11. RFC 1242
  12. Mansfield, K.C. & Antonakos, J.L. (2010). Computer Networking from LANs to WANs: Hardware, Software, and Security. Boston: Course Technology, Cengage Learning. p. 501.
  13. "Archived copy". Archived from the original on 2013-10-10. Retrieved 2013-05-16.
  14. "Packet Loss Test". Retrieved 2019-04-15.
  15. "Packet loss or latency at intermediate hops". Retrieved 2007-02-25.
  16. Kurose, J.F. & Ross, K.W. (2010). Computer Networking: A Top-Down Approach. New York: Addison-Wesley. p. 242.
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.