Paper: Random Early Detection Gateways for Congestion Avoidance
Authors: Sally Floyd and Van Jacobson
This paper presents a new algorithm for network gateways that signals to hosts
that congestion is occurring, with the goal of improving the overall throughput
of the network and reducing delay. The authors refer to this new algorithm as
Random Early Detection, or RED. It accomplishes its goals by controlling the
average queue size of the gateway. Secondary goals were to avoid global
synchronization, to avoid bias against bursty traffic, and to maintain an upper
bound on the average queue size.
The RED algorithm uses several means to accomplish these goals. First, it
maintains an upper and a lower threshold for the queue size and, on each packet
received, calculates an average queue size via a low-pass filter. It then uses a
packet marking mechanism to notify hosts when congestion is occurring or about
to occur. When the average queue size is less than the low threshold, no marking
occurs. When the average is between the low and high thresholds, arriving
packets are randomly marked, with a marking probability that increases as the
average approaches the high threshold. When the average is over the high
threshold, all packets are marked. To fit into the current TCP transport
congestion mechanism, the marking scheme used in the paper is simply to drop
marked packets.
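The mechanism above can be sketched in a few lines of code. This is a simplified illustration, not the paper's implementation: the parameter values (queue weight, thresholds, maximum marking probability) are arbitrary assumptions chosen for readability, and details such as measuring the queue in bytes or handling idle periods are omitted.

```python
import random

# Illustrative parameters (assumptions, not values from the paper's simulations).
W_Q = 0.002      # weight for the low-pass (EWMA) filter on queue size
MIN_TH = 5.0     # lower threshold on the average queue size
MAX_TH = 15.0    # upper threshold on the average queue size
MAX_P = 0.02     # marking probability as the average approaches MAX_TH

class RedGateway:
    def __init__(self):
        self.avg = 0.0    # filtered average queue size
        self.count = 0    # packets since the last marked packet

    def on_packet_arrival(self, current_queue_len):
        """Return True if the arriving packet should be marked (i.e., dropped)."""
        # Low-pass filter: an exponentially weighted moving average of the
        # instantaneous queue length, so short bursts do not trigger marking.
        self.avg = (1 - W_Q) * self.avg + W_Q * current_queue_len

        if self.avg < MIN_TH:
            self.count = 0
            return False          # below the low threshold: never mark
        if self.avg >= MAX_TH:
            self.count = 0
            return True           # above the high threshold: mark every packet

        # Between the thresholds: the base probability p_b grows linearly
        # with the average queue size; the count term spreads marks out
        # roughly evenly instead of letting them cluster.
        self.count += 1
        p_b = MAX_P * (self.avg - MIN_TH) / (MAX_TH - MIN_TH)
        if self.count * p_b >= 1:
            self.count = 0
            return True
        p_a = p_b / (1 - self.count * p_b)
        if random.random() < p_a:
            self.count = 0
            return True
        return False
```

Because the decision depends on the filtered average rather than the instantaneous queue length, a brief burst can fill the queue without causing marks, which is how RED avoids the bias against bursty traffic mentioned above.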
After describing the algorithm, the paper goes on to discuss a simulator that
the authors used to test RED and other gateway queuing algorithms. The results
obtained by simulation were used to discuss effective ways to set the various
RED algorithm parameters and to show RED results relative to the other
algorithms. The primary conclusions I drew from these discussions were that,
when properly configured, RED can meet its goals in a TCP context, but that
choosing the RED parameters may be difficult.
One of the most interesting aspects of this algorithm was its performance in
bursty situations. Relative to other gateway queuing algorithms, RED allows
better handling of disadvantaged nodes when the traffic load is relatively high.
In particular, the paper describes a node that has a much longer round trip time
and smaller window than other nodes in a network. The simulator showed that in
random drop or drop tail gateways, packets from such a disadvantaged node had a
greater probability of being dropped when network traffic was high. This was not
the case when RED was used.
In general, the RED approach seems to be a form of link-layer congestion control
that takes advantage of some knowledge of TCP congestion control mechanisms
to increase the performance of a gateway. The main question I had with this
approach was: how well does it work in a non-homogeneous transport environment,
which is the reality on the Internet? That is, are the superior results achieved
by RED still meaningful when transports other than TCP are in use? For
example, some of the bursty traffic mentioned in the paper is variable bit rate
video and other forms of interactive traffic. On the Internet, however, the
transport used for such traffic may not be TCP, so the congestion control
signals generated by RED may not have the desired effect. Although the paper did
mention that RED is appropriate for a mixed network, I would have liked to see
more supporting evidence that it will really work well.
In addition to the simulation results, it would be interesting to see results
from an actual network whose gateways run the RED algorithm.