FastAPI UploadFile is slow compared to Flask

You can write the file(s) using synchronous writing, after defining the endpoint with def, as shown in this answer, or using asynchronous writing (utilising aiofiles), after defining the endpoint with async def; since UploadFile methods are async methods, you need to await them. An example is given below. For more details on def vs async def and how they may affect your API's performance (depending on the tasks performed inside the endpoints), please have a look at this answer.

Upload Single File

app.py

from fastapi import FastAPI, File, UploadFile
import aiofiles

app = FastAPI()

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    try:
        contents = await file.read()
        async with aiofiles.open(file.filename, 'wb') as f:
            await f.write(contents)
    except Exception:
        return {"message": "There was an error uploading the file"}
    finally:
        await file.close()

    return {"message": f"Successfully uploaded {file.filename}"}

Read the File in chunks

Or, you can read the file in chunks asynchronously, to avoid loading the entire file into memory. If, for example, you have 8GB of RAM, you can't load a 50GB file (not to mention that the available RAM will always be less than the total amount installed, as the native OS and other applications running on your machine will use some of it). Hence, in that case, you should rather load the file into memory in chunks and process the data one chunk at a time. This method, however, may take longer to complete, depending on the chunk size you choose; below, that is 1024 * 1024 bytes (= 1MB). You can adjust the chunk size as desired.

from fastapi import File, UploadFile
import aiofiles

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    try:
        async with aiofiles.open(file.filename, 'wb') as f:
            while contents := await file.read(1024 * 1024):
                await f.write(contents)
    except Exception:
        return {"message": "There was an error uploading the file"}
    finally:
        await file.close()

    return {"message": f"Successfully uploaded {file.filename}"}

Alternatively, you could use shutil.copyfileobj(), which copies the contents of a file-like object to another file-like object (see this answer as well). By default, the data is read in chunks, with the default buffer (chunk) size being 1MB (i.e., 1024 * 1024 bytes) for Windows and 64KB for other platforms (see the source code here). You can specify the buffer size by passing the optional length parameter. Note: if a negative length value is passed, the entire contents of the file will be read; see the f.read() documentation as well, which .copyfileobj() uses under the hood. The source code of .copyfileobj() can be found here; there isn't really anything different from the previous approach in how the file contents are read and written.

However, .copyfileobj() uses blocking I/O operations behind the scenes, which would block the entire server if it were used inside an async def endpoint. Thus, to avoid that, you could use Starlette's run_in_threadpool() to run all the needed functions in a separate thread (that is then awaited), ensuring that the main thread (where coroutines are run) does not get blocked. The same exact function is used by FastAPI internally when you call the async methods of the UploadFile object, i.e., .write(), .read(), .close(), etc. (see the source code here). Example:

from fastapi import File, UploadFile
from fastapi.concurrency import run_in_threadpool
import shutil
        
@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    try:
        f = await run_in_threadpool(open, file.filename, 'wb')
        await run_in_threadpool(shutil.copyfileobj, file.file, f)
    except Exception:
        return {"message": "There was an error uploading the file"}
    finally:
        if 'f' in locals(): await run_in_threadpool(f.close)
        await file.close()

    return {"message": f"Successfully uploaded {file.filename}"}

test.py

import requests

url="http://127.0.0.1:8000/upload"
file = {'file': open('images/1.png', 'rb')}
resp = requests.post(url=url, files=file) 
print(resp.json())

Upload Multiple Files

app.py

from typing import List

from fastapi import File, UploadFile
import aiofiles

@app.post("/upload")
async def upload(files: List[UploadFile] = File(...)):
    for file in files:
        try:
            contents = await file.read()
            async with aiofiles.open(file.filename, 'wb') as f:
                await f.write(contents)
        except Exception:
            return {"message": "There was an error uploading the file(s)"}
        finally:
            await file.close()

    return {"message": f"Successfully uploaded {[file.filename for file in files]}"}

Read the Files in chunks

To read the file(s) in chunks instead, see the approaches described earlier in this answer.

test.py

import requests

url="http://127.0.0.1:8000/upload"
files = [('files', open('images/1.png', 'rb')), ('files', open('images/2.png', 'rb'))]
resp = requests.post(url=url, files=files) 
print(resp.json())

Update

Digging into the source code, it seems that the latest versions of Starlette (which FastAPI uses underneath) use a SpooledTemporaryFile (for UploadFile data structure) with max_size attribute set to 1MB (1024 * 1024 bytes) – see here – in contrast to older versions where max_size was set to the default value, i.e., 0 bytes, such as the one here.

The above means that, in the past, data used to be fully loaded into memory no matter the size of the file (which could lead to issues when a file couldn't fit into RAM), whereas, in the latest version, data is spooled in memory until the file size exceeds max_size (i.e., 1MB), at which point the contents are written to disk; more specifically, to the OS's temporary directory. (Note: this also means that the maximum size of file you can upload is bound by the storage available to the system's temporary directory. If enough storage for your needs is available on your system, there's nothing to worry about; otherwise, please have a look at this answer on how to change the default temporary directory.) Thus, the process of writing the file multiple times (initially loading the data into RAM; then, if the data exceeds 1MB in size, writing it to the temporary directory; then reading it back from the temporary directory using file.read(); and finally, writing it to a permanent directory) is what makes uploading a file slow compared to using the Flask framework, as the OP noted in their question (though the difference in time is not that big, just a few seconds, depending on the size of the file).
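The spooling behaviour described above can be demonstrated with the standard library alone. A SpooledTemporaryFile keeps data in memory until the written size exceeds max_size, then rolls it over to a real file in the OS's temporary directory (note: the _rolled attribute inspected below is a CPython implementation detail, not public API, used here only to observe the rollover):

```python
from tempfile import SpooledTemporaryFile

with SpooledTemporaryFile(max_size=1024 * 1024) as tmp:
    tmp.write(b'x' * 1024)            # 1KB written: still held in memory
    print(tmp._rolled)                # False (CPython-internal flag)

    tmp.write(b'x' * (1024 * 1024))   # total now exceeds max_size
    print(tmp._rolled)                # True: contents moved to a disk file
```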

Solution

The solution (if one needs to upload files quite a bit larger than 1MB and uploading time is important to them) would be to access the request body as a stream. As per the Starlette documentation, if you use .stream(), the byte chunks are provided without storing the entire body in memory (and later in the temporary directory, if the body contains file data exceeding 1MB). An example is given below, where the time of uploading is recorded on the client side; it ends up being the same as when using the Flask framework with the example given in the OP's question.

app.py

from fastapi import Request
import aiofiles

@app.post('/upload')
async def upload(request: Request):
    try:
        filename = request.headers['filename']
        async with aiofiles.open(filename, 'wb') as f:
            async for chunk in request.stream():
                await f.write(chunk)
    except Exception:
        return {"message": "There was an error uploading the file"}
     
    return {"message": f"Successfully uploaded {filename}"}

In case your application does not require saving the file to disk, and all you need is the file to be loaded directly into memory, you can just use the below (make sure your RAM has enough space available to accommodate the accumulated data):

from fastapi import Request

@app.post('/upload')
async def upload(request: Request):
    body = b''
    try:
        filename = request.headers['filename']
        async for chunk in request.stream():
            body += chunk
    except Exception:
        return {"message": "There was an error uploading the file"}
    
    #print(body.decode())
    return {"message": f"Successfully uploaded {filename}"}

test.py

import requests
import time

with open("images/1.png", "rb") as f:
    data = f.read()
   
url="http://127.0.0.1:8000/upload"
headers = {'filename': '1.png'}

start = time.time()
resp = requests.post(url=url, data=data, headers=headers)
end = time.time() - start

print(f'Elapsed time is {end} seconds.', '\n')
print(resp.json())

For more details and code examples (on uploading multiple Files and Form/JSON data) using the approach above, please have a look at this answer.
