
Rate Limiter

  1. Method Rate Limiting
  2. Burst
  3. Spread

Method Rate Limiting

Riot Method Limits

As of 2.0.69, kindred-api respects the above method rate limits. Method limiting is enabled by default on initialization, and the defaults are the values in the Riot Method Limits table above.

edit July 21: As of 2.0.69, it does NOT support method rate limiting per region. This was an oversight; it will be fixed in 2.0.70. :)

edit July 21: As of 2.0.70, it does support method rate limiting per region.

const KindredAPI = require('kindred-api')
const { Kindred, REGIONS, METHOD_TYPES } = KindredAPI
const DEV_KEY = 'YOUR_RIOT_API_KEY' // substitute your own key

const k = new Kindred({
  key: DEV_KEY,
  defaultRegion: REGIONS.NORTH_AMERICA,
  limits: [[20, 1], [100, 120]],
  debug: true,
  showKey: true,
  methodLimits: {
    [METHOD_TYPES.GET_SUMMONER_BY_NAME]: 1 // 1 request per region per 10 seconds
  }
})
let count = 4
function counter(err, data) {
  if (err) console.error(err)
  else if (--count === 0) console.timeEnd('done')
}
console.time('done')
k.Summoner.get({ name: 'Contractz' }, counter)
k.Summoner.get({ name: 'Contractz' }, counter)
setTimeout(() => {
  k.Summoner.get({ name: 'sktt1peanut', region: 'kr' }, counter)
  k.Summoner.get({ name: 'sktt1peanut', region: 'kr' }, counter)
}, 9000)
// ~20 seconds total:
// the first na request processes immediately
// and causes the second na request to sleep for 10 seconds.
// A timeout then stalls the batch of kr requests,
// but only for 9 seconds,
// which means the first kr request processes immediately
// and sets the second kr request to wait 10 more seconds.
// One second after that, the second na request finishes,
// and about 10 seconds later the second kr request finishes :)
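
The comments above walk through the per-region timing. Conceptually, per-region method limiting just means tracking an independent window per (method, region) pair. The sketch below is purely illustrative (made-up names, no queuing or retries) and is not kindred-api's actual internals:

// Illustrative only: one fixed window per (method, region) pair,
// showing the idea behind "1 request per region per 10 seconds".
var methodWindows = {}

function allowMethodCall(method, region, limit, windowMs) {
  var key = method + ':' + region
  var now = Date.now()
  var entry = methodWindows[key]
  if (!entry || now - entry.start >= windowMs) {
    methodWindows[key] = { start: now, used: 1 } // fresh window for this pair
    return true
  }
  if (entry.used < limit) {
    entry.used++
    return true
  }
  return false // over the limit; retry after the window resets
}

Because na and kr get separate entries, the kr requests in the example above are never delayed by the na ones; each region only waits on its own 10-second window.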

However, users can set their own method rate limits if they want. This is useful in case Riot updates the rate limits while I'm MIA.

const KindredAPI = require('kindred-api')
const methodTypes = KindredAPI.METHOD_TYPES

const k = new KindredAPI.Kindred({
  key: 'fakeKey',
  defaultRegion: KindredAPI.REGIONS.NORTH_AMERICA,
  limits: [[500, 10], [30000, 600]], // basic prod key
  debug: true,
  // showKey: true,
  // showDebug: true,
  retryOptions: {
    auto: false, // true by default
    numberOfRetriesBeforeBreak: 3 // infinite by default
  },
  timeout: 1000,
  showHeaders: true,
  cache: new KindredAPI.InMemoryCache(),
  // cacheTTL defaults are applied if a cache is passed in without a cacheTTL
  methodLimits: {
    [methodTypes.GET_SUMMONER_BY_NAME]: 3, // limits this endpoint to 3 requests per 10s
    [methodTypes.GET_MATCH]: 1000 // limits this endpoint to 1000 requests per 10s
  }
})

There are currently two forms of rate limiter (both quite primitive).

  1. Burst
  2. Spread

Make sure to check out some simple benchmarks here.

Burst

Burst is the default rate limiter.

For example, with a rate limit of 10 requests per 10 seconds (10r/10s), say you want to send 35 requests at almost the same time:

  1. Request #1 is accepted, and your rate limit starts.

  2. Requests #2-10 are accepted in almost the same second.

  3. You are now rate limited until 10 seconds from Request #1.

  4. Rate limit is now lifted.

  5. Same as above, but with Requests #11-20.

  6. Same as above, but with Requests #21-30.

  7. The rate limit is lifted again, Requests #31-35 are processed within about a second, and we're done.

This should net an execution time of around 30000ms, plus code execution time.
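
To make the walkthrough concrete, here is a minimal sketch of a fixed-window burst limiter in plain JavaScript. It is illustrative only (not kindred-api's actual implementation), and all the names are made up:

// Illustrative fixed-window "burst" limiter: up to `limit` calls run
// immediately in each window; the rest are queued until the window resets.
function createBurstLimiter(limit, windowMs) {
  var used = 0
  var windowStart = null
  var queue = []

  function flush() {
    windowStart = Date.now()
    used = 0
    while (queue.length > 0 && used < limit) {
      used++
      queue.shift()() // run a queued request
    }
    if (queue.length > 0) setTimeout(flush, windowMs) // still backed up
  }

  return function schedule(fn) {
    var now = Date.now()
    if (windowStart === null || now - windowStart >= windowMs) {
      windowStart = now // start a fresh window
      used = 0
    }
    if (used < limit) {
      used++
      fn() // burst: run immediately while the window has room
    } else {
      if (queue.length === 0) setTimeout(flush, windowMs - (now - windowStart))
      queue.push(fn) // rate limited until the window resets
    }
  }
}

var limiter = createBurstLimiter(10, 10000) // 10 requests per 10 seconds
for (var i = 1; i <= 35; ++i) {
  (function (n) {
    limiter(function () { console.log('request #' + n + ' at', Date.now()) })
  })(i)
}

Running this prints four batches of timestamps roughly 10 seconds apart: #1-10 immediately, #11-20 at about 10s, #21-30 at about 20s, and #31-35 at about 30s, matching the ~30000ms estimate above.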

You can test out the rate limiter (and see that it supports simultaneous requests to multiple regions) with the following code:

var num = 45 // # of requests

function count(err, data) {
  if (data) --num
  if (err) console.error(err)
  if (num == 0) console.timeEnd('api')
}

console.time('api')
for (var i = 0; i < 15; ++i) {
  k.Champion.list('na', count)
  k.Champion.list('kr', count)
  k.Champion.list('euw', count)
}

This should output something like api: 11820.972ms. Assuming a dev limit of 10 requests per 10 seconds (as in the walkthrough above), the first 10 requests per region go out immediately and the remaining 5 wait for the next window, so the total is roughly 10 seconds plus request latency.

var num = 300 // # of requests

function count(err, data) {
  if (data) --num
  if (err) console.error(err)
  if (num == 0) console.timeEnd('api')
}

console.time('api')
for (var i = 0; i < 100; ++i) {
  k.Champion.list('na', count)
  k.Champion.list('kr', count)
  k.Champion.list('euw', count)
}

This should output something like api: 100186.515ms.

To test that it works with retry headers, just run the program while sending a few requests from your browser to intentionally rate limit yourself.

Because of the lines if (data) --num and if (num == 0) console.timeEnd('api'), you can tell whether all your requests went through.

Spread

To use the spread rate limiter, initialize Kindred the standard way, but add spread: true to the config object.

var KindredAPI = require('kindred-api')

var RIOT_API_KEY = 'whatever'
var REGIONS = KindredAPI.REGIONS
var LIMITS = KindredAPI.LIMITS
var CACHE_TYPES = KindredAPI.CACHE_TYPES

var k = new KindredAPI.Kindred({
  key: RIOT_API_KEY,
  defaultRegion: REGIONS.NORTH_AMERICA,
  debug: true,
  limits: LIMITS.OLD_DEV,
  spread: true, // this!
  cacheOptions: CACHE_TYPES[0]
})

Since spreading requests out fills the window evenly instead of bursting everything at the start, the execution time should be longer. Right now, requests are spread by essentially adding a rate limiter with roughly one-second granularity (it's not exactly 1s).

So if you are using a DEV key, you'll make about 1 request per second; if you are using a PROD key, you'll make about 50 requests per second.
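
As a rough sketch of the idea (illustrative only, not the library's actual code), a spread limiter can be approximated by spacing calls at windowMs / limit intervals:

// Illustrative "spread" limiter: instead of bursting, calls are spaced
// evenly at (windowMs / limit) intervals, e.g. 10 per 10s becomes ~1 per second.
function createSpreadLimiter(limit, windowMs) {
  var interval = windowMs / limit // e.g. 10000 / 10 = 1000ms between calls
  var nextSlot = Date.now()

  return function schedule(fn) {
    var now = Date.now()
    var wait = Math.max(0, nextSlot - now) // how long until our reserved slot
    nextSlot = Math.max(now, nextSlot) + interval // reserve the next slot
    setTimeout(fn, wait)
  }
}

With a dev limit of 10 per 10 seconds this spaces calls roughly one second apart per region, so 15 requests to one region take about 15 seconds. Re-running the 45-request test from the Burst section: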

var num = 45 // # of requests

function count(err, data) {
  if (data) --num
  if (err) console.error(err)
  if (num == 0) console.timeEnd('api')
}

console.time('api')
for (var i = 0; i < 15; ++i) {
  k.Champion.list('na', count)
  k.Champion.list('kr', count)
  k.Champion.list('euw', count)
}

This should output something like api: 15779.552ms, unlike the Burst example, which took 11820.972ms. The final 5 requests were spread over the extra 3-4 seconds at the end.

Note that if you sent the maximum number of requests (20 instead of 15), you would naturally land at around api: 20000ms, since requests go out at roughly one per second.

Running the second example from the Burst section (300 requests) with spread should output something like api: 109209.904ms. That's an extra 9 seconds over the burst run, but I'm pretty sure this is because of code execution time and faulty math. Nonetheless, the requests are still spread out.