Generic cell rate algorithm

The generic cell rate algorithm (GCRA) is a leaky bucket-type scheduling algorithm for the network scheduler that is used in Asynchronous Transfer Mode (ATM) networks.[1][2] It is used to measure the timing of cells on virtual channels (VCs) and/or virtual paths (VPs) against the bandwidth and jitter limits contained in the traffic contract for the VC or VP to which the cells belong. Cells that do not conform to the limits given by the traffic contract may then be re-timed (delayed) in traffic shaping, or may be dropped (discarded) or reduced in priority (demoted) in traffic policing. Nonconforming cells that are reduced in priority may then be dropped, in preference to higher-priority cells, by downstream components in the network that are experiencing congestion. Alternatively, they may reach their destination (the VC or VP termination) if there is enough capacity for them, despite being excess cells as far as the contract is concerned: see priority control.

The GCRA is given as the reference for checking the traffic on connections in the network, i.e. usage/network parameter control (UPC/NPC), at user–network interfaces (UNI) or at inter-network or network-to-network interfaces (INI/NNI).[3] It is also given as the reference for the timing of cells transmitted (ATM PDU Data_Requests) onto an ATM network by a network interface card (NIC) in a host, i.e. on the user side of the UNI.[3] This ensures that cells are not then discarded by UPC/NPC in the network, i.e. on the network side of the UNI. However, as the GCRA is only given as a reference, the network providers and users may use any other algorithm that gives the same result.

Description of the GCRA

Figure 1: Equivalent versions of the generic cell rate algorithm

The GCRA is described by the ATM Forum in its User-Network Interface (UNI)[1] and by the ITU-T in recommendation I.371 Traffic control and congestion control in B-ISDN.[2] Both sources describe the GCRA in two equivalent ways: as a virtual scheduling algorithm and as a continuous-state leaky bucket algorithm (figure 1).

Leaky bucket description

The description in terms of the leaky bucket algorithm may be the easier of the two to understand from a conceptual perspective, as it is based on a simple analogy of a bucket with a leak: see figure 1 on the leaky bucket page. However, there has been confusion in the literature over the application of the leaky bucket analogy to produce an algorithm, which has crossed over to the GCRA. The GCRA should be considered as a version of the leaky bucket as a meter rather than the leaky bucket as a queue.

However, while there are possible advantages in understanding this leaky bucket description, it does not necessarily result in the best (fastest) code if implemented directly. This is evidenced by the relative number of actions to be performed in the flow diagrams for the two descriptions (figure 1).

The description in terms of the continuous-state leaky bucket algorithm is given by the ITU-T as follows: "The continuous-state leaky bucket can be viewed as a finite capacity bucket whose real-valued content drains out at a continuous rate of 1 unit of content per time unit and whose content is increased by the increment T for each conforming cell... If at a cell arrival the content of the bucket is less than or equal to the limit value τ, then the cell is conforming; otherwise, the cell is non-conforming. The capacity of the bucket (the upper bound of the counter) is (T + τ)".[2] It is worth noting that, because the leak is one unit of content per unit time, the increment for each cell, T, and the limit value, τ, are in units of time.

Considering the flow diagram of the continuous-state leaky bucket algorithm, in which T is the emission interval and τ is the limit value: when a cell arrives, the state of the bucket is calculated from its state when the last conforming cell arrived, X, and how much has leaked out in the interval since then, ta − LCT (the current arrival time minus the last conformance time). This current bucket value is then stored in X' and compared with the limit value τ. If the value in X' is not greater than τ, the cell did not arrive too early and so conforms to the contract parameters; if the value in X' is greater than τ, then it does not conform. If the cell conforms because it was late, i.e. the bucket is empty (X' ≤ 0), X is set to T; if it was early but not too early (τ ≥ X' > 0), X is set to X' + T.

Thus the flow diagram mimics the leaky bucket analogy (used as a meter) directly, with X and X' acting as the analogue of the bucket.
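
A short sketch of this continuous-state leaky bucket version in Python may make the bookkeeping explicit. The class name and the handling of the very first cell are illustrative choices rather than part of the ITU-T or ATM Forum text; T and τ are assumed to be in the same time units as the cell arrival times.

    class GcraLeakyBucket:
        """Sketch of the continuous-state leaky bucket form of the GCRA.
        T is the emission interval and tau the limit value; the leak rate
        is one unit of content per unit of time."""

        def __init__(self, T, tau):
            self.T = T        # increment added to the bucket per conforming cell
            self.tau = tau    # limit value (how full the bucket may be on arrival)
            self.X = 0.0      # bucket content recorded at the last conforming arrival
            self.LCT = None   # Last Conformance Time

        def conforms(self, ta):
            """Return True if a cell arriving at time ta conforms."""
            if self.LCT is None:
                # First cell seen: the bucket is empty, so the cell conforms.
                self.X, self.LCT = self.T, ta
                return True
            # Provisional content X': the stored level minus the leak since LCT.
            X_prime = self.X - (ta - self.LCT)
            if X_prime > self.tau:
                # The cell arrived too early: non-conforming, state unchanged.
                return False
            # Conforming: an empty bucket floors at zero, then T is added for this cell.
            self.X = max(X_prime, 0.0) + self.T
            self.LCT = ta
            return True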

Virtual scheduling description

The virtual scheduling algorithm, while not so obviously related to such an easily accessible analogy as the leaky bucket, gives a clearer understanding of what the GCRA does and how it may be best implemented. As a result, direct implementation of this version can result in more compact, and thus faster, code than a direct implementation of the leaky bucket description.

The description in terms of the virtual scheduling algorithm is given by the ITU-T as follows: "The virtual scheduling algorithm updates a Theoretical Arrival Time (TAT), which is the 'nominal' arrival time of the cell assuming cells are sent equally spaced at an emission interval of T corresponding to the cell rate Λ [= 1/T] when the source is active. If the actual arrival time of a cell is not 'too early' relative to the TAT and tolerance τ associated to the cell rate, i.e. if the actual arrival time is after its theoretical arrival time minus the limit value (ta > TAT − τ), then the cell is conforming; otherwise, the cell is nonconforming".[2] If the cell is nonconforming then TAT is left unchanged. If the cell is conforming, and arrived before its TAT (equivalent to the bucket not being empty but its content being no more than the limit value), then the next cell's TAT is simply TAT + T. However, if a cell arrives after its TAT, then the TAT for the next cell is calculated from this cell's arrival time, not its TAT. This prevents credit building up when there is a gap in the transmission (equivalent to the bucket becoming less than empty).

This version of the algorithm works because τ defines how much earlier a cell can arrive than it would if there were no jitter: see leaky bucket: delay variation tolerance. Another way to see it is that TAT represents when the bucket will next empty, so a time τ before that is when the bucket is exactly filled to the limit value. So, in either view, if it arrives more than τ before TAT, it is too early to conform.
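
The same check can be sketched in the virtual scheduling form; again the class name, the first-cell handling and the example arrival times below are illustrative assumptions rather than part of the standards.

    class GcraVirtualScheduling:
        """Sketch of the virtual scheduling form of the GCRA.
        T is the emission interval, tau the tolerance,
        TAT the Theoretical Arrival Time of the next cell."""

        def __init__(self, T, tau):
            self.T = T
            self.tau = tau
            self.TAT = None   # unset until the first cell is seen

        def conforms(self, ta):
            """Return True if a cell arriving at time ta conforms."""
            if self.TAT is None:
                # First cell: conforming; the next cell is expected T later.
                self.TAT = ta + self.T
                return True
            if ta < self.TAT - self.tau:
                # More than tau earlier than its theoretical arrival time: too early.
                return False
            # Conforming. A late cell resets the schedule to its actual arrival time,
            # so no credit builds up; an early-but-tolerated cell keeps the old TAT.
            self.TAT = max(ta, self.TAT) + self.T
            return True

    # Illustrative arrival times: the last cell arrives more than tau before its TAT.
    vs = GcraVirtualScheduling(T=1.0, tau=0.5)
    print([vs.conforms(t) for t in (0.0, 1.0, 1.5, 4.0, 4.1)])
    # -> [True, True, True, True, False]

With the same T, τ and arrival times, the leaky bucket sketch above gives the same verdicts, reflecting the equivalence of the two descriptions (figure 1).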

Comparison with the token bucket

The GCRA, unlike implementations of the token bucket algorithm, does not simulate the process of updating the bucket (the leak or adding tokens regularly). Rather, each time a cell arrives, it calculates the amount by which the bucket will have leaked since its level was last calculated, or when the bucket will next empty (= TAT). This essentially replaces the leak process with a (real-time) clock, which most hardware implementations are likely to have already.

This replacement of the process with an RTC is possible because ATM cells have a fixed length (53 bytes), thus T is always a constant, and the calculation of the new bucket level (or of TAT) does not involve any multiplication or division. As a result, the calculation can be done quickly in software, and while more actions are taken when a cell arrives than are taken by the token bucket, in terms of the load on a processor performing the task, the lack of a separate update process more than compensates for this. Moreover, because there is no simulation of the bucket update, there is no processor load at all when the connection is quiescent.

However, if the GCRA were used to limit a bandwidth, rather than a packet/frame rate, in a protocol with variable-length packets (link-layer PDUs), it would involve multiplication: the value added to the bucket (or to TAT) for each conforming packet would have to be proportional to the packet length. Whereas the content of the bucket has units of time with the GCRA as described, for variable-length packets it would have to have units that are the product of packet length and time. Hence, applying the GCRA to limit the bandwidth of variable-length packets without access to a fast hardware multiplier (as in an FPGA) may not be practical. However, it can always be used to limit the packet or cell rate, as long as their lengths are ignored.
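
As a rough illustration of this point, the virtual scheduling sketch above might be adapted so that the increment per packet is proportional to its length; the per-byte interval, the class name and the parameters are assumptions for illustration only, not part of the ATM standards.

    class ByteRateGcra:
        """Illustrative byte-rate variant of the virtual scheduling GCRA:
        the increment per packet is proportional to its length, which costs
        a multiplication on every packet (unlike the fixed-length ATM case)."""

        def __init__(self, bytes_per_second, tau):
            self.t_per_byte = 1.0 / bytes_per_second   # time added to TAT per byte
            self.tau = tau
            self.TAT = None

        def conforms(self, ta, length_bytes):
            # The multiplication referred to above: the increment grows with packet length.
            increment = length_bytes * self.t_per_byte
            if self.TAT is None:
                self.TAT = ta + increment
                return True
            if ta < self.TAT - self.tau:
                return False
            self.TAT = max(ta, self.TAT) + increment
            return True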

Dual leaky bucket controller

Multiple implementations of the GCRA can be applied concurrently to a VC or a VP, in a dual leaky bucket traffic policing or traffic shaping function, e.g. applied to a Variable Bit Rate (VBR) VC. This can limit ATM cells on this VBR VC to a Sustainable Cell Rate (SCR) and a Maximum Burst Size (MBS). At the same time, the dual leaky bucket traffic policing function can limit the rate of cells in the bursts to a Peak Cell Rate (PCR) and a maximum Cell Delay Variation tolerance (CDVt): see Traffic Contract#Traffic Parameters.

Figure 2: Example cell timings on a VBR connection

This may be best understood where the transmission on a VBR VC is in the form of fixed-length messages (CPCS-PDUs), which are transmitted with some fixed interval, the Inter-Message Time (IMT), and take a number of cells, MBS, to carry them; however, the description of VBR traffic and the use of the dual leaky bucket are not restricted to such situations. In this case, the average cell rate over the interval IMT is the SCR (= MBS/IMT). The individual messages can be transmitted at a PCR, which can be any value between the bandwidth of the physical link (1/δ) and the SCR. This allows the message to be transmitted in a period that is smaller than the message interval IMT, with gaps between instances of the message.
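
With illustrative figures (not taken from the standards), these relationships can be made concrete:

    # Illustrative only: a message of MBS = 100 cells sent every IMT = 0.01 s.
    MBS = 100            # cells per message
    IMT = 0.01           # seconds between message starts
    PCR = 50000.0        # cells/s chosen for the bursts (assumed to lie between the SCR and the link rate)

    SCR = MBS / IMT                    # = 10000.0 cells/s, the average rate
    burst_duration = (MBS - 1) / PCR   # ≈ 0.00198 s, so each burst fits well inside the IMT
    print(SCR, burst_duration)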

Figure 3: Reference algorithm for Sustainable Cell Rate (SCR) and Peak Cell Rate (PCR) for CLP = 0 + 1 cell flow

In the dual leaky bucket, one bucket is applied to the traffic with an emission interval of 1/SCR and a limit value τSCR that gives an MBS that is the number of cells in the message: see leaky bucket#Maximum burst size. The second bucket has an emission interval of 1/PCR and a limit value τPCR that allows for the CDV up to that point in the path of the connection: see leaky bucket#Delay Variation Tolerance. Cells are then allowed through at the PCR, with jitter of τPCR, up to a maximum of MBS cells. The next burst of MBS cells will then be allowed through starting MBS × 1/SCR after the first.

If the cells arrive in a burst at a rate higher than the PCR (MBS cells arrive in less than (MBS − 1)/PCR − τPCR), or more than MBS cells arrive at the PCR, or bursts of MBS cells arrive closer together than the IMT, the dual leaky bucket will detect this and delay (shaping) or drop or de-prioritize (policing) enough cells to make the connection conform.
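
A dual leaky bucket policer can be sketched as two GCRAs checked together: a cell conforms only if it conforms to both buckets, and neither bucket's state is updated for a non-conforming cell. The relation used below between τSCR and the MBS, the initial TAT of zero and the parameter values are assumptions for illustration, not the normative reference algorithm of Annex A.

    class _Gcra:
        # Minimal virtual scheduling GCRA used by the dual bucket sketch below.
        def __init__(self, T, tau):
            self.T, self.tau, self.TAT = T, tau, 0.0

        def check(self, ta):
            # Pure conformance test with no state change.
            return ta >= self.TAT - self.tau

        def update(self, ta):
            # Applied only to cells found conforming by both buckets.
            self.TAT = max(ta, self.TAT) + self.T

    class DualLeakyBucket:
        """Sketch of an SCR/PCR policer built from two GCRAs."""

        def __init__(self, scr, pcr, mbs, cdvt):
            t_scr, t_pcr = 1.0 / scr, 1.0 / pcr
            # tau_SCR chosen so that a burst at the PCR of up to MBS cells conforms.
            tau_scr = (mbs - 1) * (t_scr - t_pcr)
            self.scr_bucket = _Gcra(t_scr, tau_scr)
            self.pcr_bucket = _Gcra(t_pcr, cdvt)

        def conforms(self, ta):
            ok = self.scr_bucket.check(ta) and self.pcr_bucket.check(ta)
            if ok:
                self.scr_bucket.update(ta)
                self.pcr_bucket.update(ta)
            return ok

    # Illustrative use: a short back-to-back burst at the PCR conforms; a burst
    # longer than MBS cells would start to be rejected by the SCR bucket.
    policer = DualLeakyBucket(scr=10000.0, pcr=50000.0, mbs=100, cdvt=1e-6)
    print(all(policer.conforms(n / 50000.0) for n in range(10)))   # -> True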

Figure 3 shows the reference algorithm for SCR and PCR control for both Cell Loss Priority (CLP) values 1 (low) and 0 (high) cell flows, i.e. where the cells with both priority values are treated the same. Similar reference algorithms where the high and low priority cells are treated differently are also given in Annex A to I.371.[2]

References

  1. ATM Forum, The User Network Interface (UNI), v. 3.1, Prentice Hall PTR, 1995, ISBN 0-13-393828-X.
  2. ITU-T, Traffic control and congestion control in B-ISDN, Recommendation I.371, International Telecommunication Union, 2004, Annex A, page 87.
  3. ITU-T, Traffic control and congestion control in B-ISDN, Recommendation I.371, International Telecommunication Union, 2004, page 17.