aiohttp: rate limiting parallel requests

If I understand you correctly, you want to limit the number of simultaneous requests?

There is a class inside asyncio named Semaphore; it works like an asynchronous counting lock, allowing at most N holders at once.

import asyncio

semaphore = asyncio.Semaphore(50)
#...
async def limit_wrap(url):
    async with semaphore:
        # at most 50 coroutines can be inside this block at once
        ...  # do what you want
#...
# inside a coroutine; gather takes the awaitables unpacked and must itself be awaited
results = await asyncio.gather(*(limit_wrap(url) for url in urls))
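
For completeness, here is a minimal runnable sketch of that pattern with aiohttp; fetch, main, and the example URL list are made-up names for illustration, not part of your code:

import asyncio
import aiohttp

semaphore = asyncio.Semaphore(50)

async def fetch(session, url):
    async with semaphore:  # at most 50 requests in flight
        async with session.get(url) as resp:
            return await resp.text()

async def main(urls):
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, url) for url in urls))

urls = ["https://example.com"] * 200  # placeholder URLs
results = asyncio.run(main(urls))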

Updated:

Suppose I make 50 concurrent requests, and they all finish in 2 seconds. That doesn't touch the limitation (it is only 25 requests per second).

That means I should make 100 concurrent requests and have them all finish in 2 seconds too (50 requests per second). But before you actually make those requests, how could you determine how long they will take to finish?
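
To make the arithmetic concrete, here is a small self-contained simulation (fake_request and the 2-second sleep are stand-ins for real requests): 100 requests through a Semaphore(50), each taking 2 seconds, finish in about 4 seconds, i.e. roughly 25 per second.

import asyncio
import time

async def fake_request(sem):
    async with sem:
        await asyncio.sleep(2)  # pretend every request takes 2 seconds

async def main():
    sem = asyncio.Semaphore(50)
    start = time.monotonic()
    await asyncio.gather(*(fake_request(sem) for _ in range(100)))
    elapsed = time.monotonic() - start
    print(f"{100 / elapsed:.1f} requests finished per second")  # ~25.0

asyncio.run(main())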

Or, if you don't care about finished requests per second but about requests made per second, you can:

import asyncio

async def loop_wrap(urls):
    for url in urls:
        # schedule download(url) (your own request coroutine) without awaiting it
        asyncio.ensure_future(download(url))
        await asyncio.sleep(1 / 50)

loop = asyncio.get_event_loop()
asyncio.ensure_future(loop_wrap(urls))
loop.run_forever()

The code above will create a Future instance every 1/50 of a second, i.e. it starts at most 50 new requests per second.
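
If you would rather collect the results than run the loop forever, the same throttling idea can be written as one coroutine; download here is a stand-in for whatever your real request code does:

import asyncio
import aiohttp

async def download(session, url):
    async with session.get(url) as resp:
        return await resp.text()

async def main(urls):
    async with aiohttp.ClientSession() as session:
        tasks = []
        for url in urls:
            # start one new request every 1/50 second
            tasks.append(asyncio.ensure_future(download(session, url)))
            await asyncio.sleep(1 / 50)
        return await asyncio.gather(*tasks)

urls = ["https://example.com"] * 100  # placeholder URLs
results = asyncio.run(main(urls))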
