The shared rate limiter maintains the rate limiter tokens in shared memory. Incoming connections access the tokens atomically, so connections are rate limited with much finer granularity.

The shared rate limiter is created in shared memory, and the rate limiter object is indexed by key and shared among cores.
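As a minimal sketch, the atomic token check might look like the following. The names are illustrative, not the product's actual API; the counter is assumed to live in shared memory so that every core observes the same value:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Atomically take one token from a shared counter; returns false
 * when the connection should be rate limited. */
static bool take_token(atomic_long *tokens)
{
    long cur = atomic_load(tokens);
    while (cur > 0) {
        /* CAS loop: concurrent cores can never over-consume tokens. */
        if (atomic_compare_exchange_weak(tokens, &cur, cur - 1))
            return true;
    }
    return false;
}
```

Because the decrement is a compare-and-swap, no lock is needed in the packet path.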

Example

The following example shows how configured rate limiter counts are distributed among SEs:

  1. Assume there are 64 cores and two SEs (se1 and se2), and the virtual service is placed on both SEs. The configured count per second is 201.

  2. The count is divided among the SEs (se1 and se2), that is, 201/2. Hence, each SE gets a count of 100.

  3. The remaining count after distributing equally among the SEs is 1 (that is, 201 % 2 = 1).

    Note:

    The remainder will always be less than the number of SEs.

  4. SEs are sorted based on the UUID.

    (se1, se2) --------sort--------(se1, se2). Remainder 1 is sent to se1.

    So, se1 and se2 get counts of 101 and 100, respectively. These counters are put in shared memory on a per-SE basis.

  5. An atomic counter is used to access the counts in the packet path. Within an SE, no count is lost. However, a few counts might be missed across SEs; the loss is effectively limited to (number of SEs on which the virtual service is placed) - (counts % number of SEs on which the virtual service is placed), which is 2 - 1 = 1 count in this example.

  6. se1 has a count of 101 and se2 has a count of 100. Hence, after 200 requests, if the 201st request reaches se2 instead of se1, it is rate limited, as the remaining one count is with se1.

    Note:

    This is unavoidable because the way requests are distributed among the SEs is not known in the L3 BGP case.
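The distribution arithmetic in the steps above can be sketched as follows. The function name and signature are illustrative; the real distribution logic is internal to the product:

```c
#include <stddef.h>

/* Distribute `total` counts across `n_se` SEs, assumed pre-sorted by
 * UUID. Each SE gets the base share; the first (total % n_se) SEs in
 * sorted order receive one extra count each. */
static void distribute_counts(long total, size_t n_se, long out[])
{
    long base = total / n_se;
    long rem  = total % n_se;   /* always less than n_se */
    for (size_t i = 0; i < n_se; i++)
        out[i] = base + (i < (size_t)rem ? 1 : 0);
}
```

For 201 counts and two SEs, this yields 101 for se1 and 100 for se2, matching the example.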

Shared Rate Limiter Library

To overcome the challenges of distributing rate limiter tokens across cores, a new shared rate limiter library maintains the rate limiter tokens in shared memory. Incoming connections can access the tokens atomically, so connections are rate limited with much finer granularity.

  1. A shared rate limiter is created in shared memory.

  2. The rate limiter object is indexed by key and shared among cores.

  3. Applications can use the shared rate limiter library to create and access the shared rate limiter object based on a unique key.

  4. The shared rate limiter can be maintained in shared memory in a hash table.

  5. Timers per rate limiter can be avoided by replenishing tokens based on timestamps.
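The points above can be sketched as a hypothetical library: limiter objects are looked up by a unique key, and tokens are replenished from timestamps rather than per-limiter timers. All names are assumptions, and a flat array stands in for the shared-memory hash table:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

#define MAX_RL 16

typedef struct {
    char        key[32];    /* unique key identifying the limiter   */
    long        rate;       /* tokens granted per interval          */
    atomic_long tokens;     /* tokens left in the current interval  */
    atomic_long last_ts;    /* timestamp of the last replenishment  */
} shared_rl_t;

static shared_rl_t table[MAX_RL];  /* stand-in for the shared hash table */
static size_t      table_len;

/* Find or create the limiter object for a key. */
static shared_rl_t *shared_rl_get(const char *key, long rate)
{
    for (size_t i = 0; i < table_len; i++)
        if (strcmp(table[i].key, key) == 0)
            return &table[i];
    shared_rl_t *rl = &table[table_len++];
    strncpy(rl->key, key, sizeof rl->key - 1);
    rl->rate = rate;
    atomic_init(&rl->tokens, rate);
    atomic_init(&rl->last_ts, 0);
    return rl;
}

/* On each packet: replenish based on elapsed time (no timer), then
 * atomically try to take a token. `now` is the caller's clock tick.
 * The refill path is simplified for illustration. */
static bool shared_rl_take(shared_rl_t *rl, long now)
{
    long last = atomic_load(&rl->last_ts);
    if (now > last &&
        atomic_compare_exchange_strong(&rl->last_ts, &last, now))
        atomic_store(&rl->tokens, rl->rate);   /* new interval: refill */

    long cur = atomic_load(&rl->tokens);
    while (cur > 0)
        if (atomic_compare_exchange_weak(&rl->tokens, &cur, cur - 1))
            return true;
    return false;
}
```

Looking the object up by key means every core on the SE shares one token pool, which is exactly what removes the per-core distribution problem.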
