The Dazah API uses Redis to handle rate limiting. The goal is to limit each client_id/user_id pair to no more than 5,000 requests every 5 minutes. We use CodeIgniter 3.x and it looks something like this:

$flood_control = $CI->cache->get("user_limit:{$token_obj->client_id}:{$token_obj->user_id}");

if ($flood_control === false)
{
    // No counter yet for this pair: create it with a 5-minute TTL
    $CI->cache->save("user_limit:{$token_obj->client_id}:{$token_obj->user_id}", 0, 300);
}
else if ($flood_control > 5000)
{
    // Flood control is over the rate limit
    print_json(array(
        'status' => 'token_limit',
        'error' => 'Rate limit exceeded: Please try your request again in a few minutes.',
    ));
}
else
{
    // Record the request for flood control
    $CI->cache->increment("user_limit:{$token_obj->client_id}:{$token_obj->user_id}");
}

From my understanding, when you increment a Redis key, it keeps the time to live that was set when the key was created.
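
For example, here's what I'd expect from raw PHPRedis (a minimal sketch, assuming a Redis server on localhost and not going through the CI cache driver we actually use):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->setex('ttl_demo', 300, 0); // create the key with a 5-minute TTL
$redis->incr('ttl_demo');          // value becomes 1 ...
echo $redis->ttl('ttl_demo');      // ... and the TTL is still roughly 300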

Unfortunately, poor cereal keeps being locked out of his DaniWeb account. Upon investigating, I found that the client_id/user_id pair that couples the DaniWeb app with cereal's user ID had its Redis value at 5001, so he was being cut off. However, a deeper investigation showed that he did not have DaniWeb open on multiple computers, etc., nor was he making an absurd amount of requests. In fact, I had made about 10X the number of requests that he had over the past 24 hours, and I've never suffered from this problem!

So then I started logging and found that a very small handful of other users besides cereal are suffering from the same problem. A Google search pointed me to a bunch of race conditions that can cause issues with API rate limiters, but I can't wrap my mind around whether a race condition could be at play here.
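
For what it's worth, here is one way the classic check-then-increment race could play out in theory. This is purely a sketch against the code above, not something I've confirmed is happening here ($key stands in for the user_limit key):

// Request arrives just as the key's 5-minute window is ending:
$flood_control = $CI->cache->get($key); // returns e.g. 4999; ~1 second of TTL left
// ... the key expires in Redis right here ...
$CI->cache->increment($key); // INCR/INCRBY runs against a key that no longer exists,
                             // so Redis recreates it with value 1 and NO expiry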

I temporarily increased the limit to 10K and have been keeping an eye on the logs I created. It is not the case that cereal's Redis key keeps increasing and increasing. Instead, it's hovering in the 5000-5100 range, going both up and down.

There is one strange thing that I noticed from my recent logging of both the current value of each key and when it's set to expire. I started logging the difference between the expiry time and the current time(). For all normal keys, the expiry time is anywhere from 1-4 minutes away, which makes sense for keys that are set to reset every 5 minutes. However, for the keys having an issue, the expiry time is always 1 second behind the current time (a difference of -1) ... as in, when I look up a key, why is it telling me its value is 5100 and it expired a moment ago, instead of starting over?!

The only thing I can think of doing at this point is adding logic so that if the expiry time is in the past, we reset the value to 0 and begin anew. But why is this happening in the first place?!


Here's what the log looks like right now, captured over the past 30 seconds or so. It's in the format value => TTL

20 => 202
8 => 213
5290 => -1
5431 => -1
30 => 79
21 => 200
54 => 31
5291 => -1
8 => 210
16 => 130
9 => 207
22 => 195
55 => 25
0 => 300
1 => 300
10 => 204
5292 => -1
5432 => -1
31 => 69
23 => 190
56 => 21
9 => 200
17 => 120
24 => 185
57 => 15

And the logging code looks something like this:

// Log the current value and the number of seconds until the key expires
$meta = $CI->cache->get_metadata("user_limit:{$token_obj->client_id}:{$token_obj->user_id}");
log_message('debug', $meta['value'] . ' => ' . (intval($meta['expire']) - time()));
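
One thing that may matter here (an assumption on my part about the driver internals): CI's get_metadata() computes the expire field as time() + ttl($key), and in PHPRedis ttl() returns -1 for a key that exists but has no expiry attached. So a difference of exactly -1, every single time, would mean the key has no TTL at all, rather than a key that expired a moment ago:

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->set('no_expiry_demo', 5100);     // plain SET, no TTL attached
var_dump($redis->ttl('no_expiry_demo')); // int(-1): key exists but never expires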

So now I'm doing this ...

$flood_control = $CI->cache->get_metadata("user_limit:{$token_obj->client_id}:{$token_obj->user_id}");

if ($flood_control === false OR (intval($flood_control['expire']) - time() <= 0))
{
    // Key is missing OR its expiry is somehow already in the past:
    // start a fresh 5-minute window
    $CI->cache->save("user_limit:{$token_obj->client_id}:{$token_obj->user_id}", 0, 300);
}
else if ($flood_control['value'] > 5000)
{
    // Flood control is over the rate limit
    print_json(array(
        'status' => 'token_limit',
        'error' => 'Rate limit exceeded: Please try your request again in a few minutes.',
    ));
}
else
{
    // Record the request for flood control
    $CI->cache->increment("user_limit:{$token_obj->client_id}:{$token_obj->user_id}");
}

Why I need to do it this way, I have no idea. :(
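
If this is the check-then-increment window, a common way to close it (just a sketch, using raw PHPRedis instead of the CI cache driver) is to INCR first and attach the TTL only when the INCR created the key, so the counter can never be recreated without an expiry:

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$key = "user_limit:{$token_obj->client_id}:{$token_obj->user_id}";
$count = $redis->incr($key); // atomic create-or-increment

if ($count === 1)
{
    // First hit in this window: start the 5-minute clock
    $redis->expire($key, 300);
}

if ($count > 5000)
{
    // Over the rate limit for this window
}

Wrapping the incr() and expire() calls in a MULTI/EXEC block would also close the remaining tiny window where the process dies between the two calls.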

I'm going through the Redis and PHPRedis documentation and the CI driver to see if there is any discrepancy that could lead to those results, but as far as I can see, your code seems fine.

Just a few notes on PHPRedis; I don't know if they will help you:

A) set(), which CI's save() uses, switches to setex() if the value is an integer. I don't know if this changes anything for you, or whether you are using an old release of Redis or PHPRedis that could generate a race condition; for example, the behaviour of this method changed between PHPRedis 2.2.7 and 2.2.8.
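
If I'm reading that behaviour right, the equivalence would look like this (an assumption on my part about recent PHPRedis releases; older ones may differ, which is exactly the point):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->set('demo_key', 'value', 300);   // set() with an integer TTL ...
$redis->setex('demo_key', 300, 'value'); // ... should behave like an explicit SETEX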

B) delete() in the latest stable CI driver is doing:

/**
 * Delete from cache
 *
 * @param   string  $key    Cache key
 * @return  bool
 */
public function delete($key)
{
    if ($this->_redis->delete($key) !== 1)
    {
        return FALSE;
    }

    if (isset($this->_serialized[$key]))
    {
        $this->_serialized[$key] = NULL;
        $this->_redis->sRemove('_ci_redis_serialized', $key);
    }

    return TRUE;
}

but in PHPRedis the delete() method can accept multiple keys, either as separate string arguments or as an array:

$r->delete('key_1', 'key_2');
$r->delete(['key_1', 'key_2']);

And in both cases it will return 2, not 1 and not a boolean. So, IMHO, this statement:

if ($this->_redis->delete($key) !== 1)

is not always correct: the condition is TRUE as long as delete() does not return 1, i.e. when:

  • it returns 0 because the key does not exist;
  • or when you submit an array with multiple keys (it returns the number of keys deleted).

Now, I don't know if you are using the delete() method somewhere in your code, but if an array is submitted, that condition makes the method return FALSE (and skip the serialized-keys cleanup) even though the keys were actually deleted.
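
To illustrate the return values I mean (a hypothetical run, assuming both keys exist beforehand):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->mset(['key_1' => 'a', 'key_2' => 'b']);
var_dump($redis->delete('key_1', 'key_2')); // int(2): both removed, so !== 1 is TRUE
var_dump($redis->delete('key_1'));          // int(0): key already gone, so !== 1 is TRUE again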

Not using delete. Flood control is just the code I posted here. So far, so good, it seems, according to the log I've been keeping since yesterday afternoon. How about you? Any issues?

No, zero issues until now, all is fine.
