
The Fix

Fixes a memory consumption issue where IOBasePayload and TextIOPayload would read entire files into memory when streaming large files. The fix ensures that both payload classes never read more than READ_SIZE (64KB) at a time, preventing out-of-memory errors.

Based on closed aio-libs/aiohttp issue #11138; fix in PR #11139 (linked below).

Changelog entry from the fix PR:

Fixed ``IOBasePayload`` and ``TextIOPayload`` reading entire files into memory when streaming large files -- by :user:`bdraco`.

When using file-like objects with the aiohttp client, the entire file would be read into memory if the file size was provided in the ``Content-Length`` header. This could cause out-of-memory errors when uploading large files. The payload classes now correctly read data in chunks of ``READ_SIZE`` (64KB) regardless of the total content length.

Why This Fix Works in Production

  • Trigger: After upgrading aiohttp from 3.11.18 to 3.12.7, some use cases began hitting out-of-memory (OOM) errors. Investigation showed that aiohttp started reading files in…
  • Mechanism: The IOBasePayload and TextIOPayload classes read entire files into memory instead of chunking when streaming large files
Production impact:
  • If left unfixed, large uploads can exhaust process memory, leading to OOM kills, failed uploads, and memory pressure on co-located services.

Why This Breaks in Prod

  • Shows up under Python 3.11.11 in real deployments (not just unit tests).
  • The IOBasePayload and TextIOPayload classes read entire files into memory instead of chunking when streaming large files
  • Production symptom (often without a traceback): After upgrading aiohttp from 3.11.18 to 3.12.7, we've started seeing some use cases OoM. After investigation, it looks like aiohttp started reading files in memory when using `data=fd`.

Proof / Evidence

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“TextIOPayload actually has the same issue”
@bdraco · 2025-06-04 · source
“Hi, I was about to report this bug too”
@blesner · 2025-06-04 · source

Failure Signature (Search String)

  • After upgrading aiohttp from 3.11.18 to 3.12.7, we've started seeing some use cases OoM. After investigation, it looks like aiohttp started reading files in memory when using
  • Date: Thu May 22 10:22:46 2025 -0500
Copy-friendly signature
signature.txt
Failure Signature
-----------------
After upgrading aiohttp from 3.11.18 to 3.12.7, we've started seeing some use cases OoM. After investigation, it looks like aiohttp started reading files in memory when using `data=fd`.
Date: Thu May 22 10:22:46 2025 -0500

Error Message

Signature-only (no traceback captured)
error.txt
Error Message
-------------
After upgrading aiohttp from 3.11.18 to 3.12.7, we've started seeing some use cases OoM. After investigation, it looks like aiohttp started reading files in memory when using `data=fd`.
Date: Thu May 22 10:22:46 2025 -0500

Minimal Reproduction

repro.py
from aiohttp import web


async def hello(request):
    data = await request.read()
    print(len(data))
    return web.Response(text="Hello, world")


app = web.Application(client_max_size=2**40)
app.add_routes([web.put('/', hello)])
web.run_app(app)
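The server above only reports how many bytes arrived; the bug itself lives in the aiohttp client. To actually trigger it, pair the server with a client that streams a file via `data=fd`, the code path named in the report (URL and file path here are placeholders), and watch the client process's memory while it runs:

```python
import asyncio

import aiohttp


async def upload(url: str, path: str) -> int:
    """Stream a local file as a PUT body via data=fd and return the
    HTTP status; on unpatched versions the client buffers the file."""
    async with aiohttp.ClientSession() as session:
        with open(path, "rb") as fd:
            async with session.put(url, data=fd) as resp:
                return resp.status

# e.g. against the repro server:
# asyncio.run(upload("http://localhost:8080/", "big.bin"))
```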

Environment

  • Python: 3.11.11

What Broke

Uploading large files caused out-of-memory errors because the payload classes buffered the entire file instead of streaming it.

Why It Broke

The IOBasePayload and TextIOPayload classes read the entire file into memory, rather than chunking reads, when the file size was provided in the Content-Length header.

Fix Options (Details)

Option A — Apply the official fix

Fixes a memory consumption issue where IOBasePayload and TextIOPayload would read entire files into memory when streaming large files. The fix ensures that both payload classes never read more than READ_SIZE (64KB) at a time, preventing out-of-memory errors.

When NOT to use: This fix should not be used if the application relies on reading entire files into memory for processing.

Fix reference: https://github.com/aio-libs/aiohttp/pull/11139

Last verified: 2026-02-09. Validate in your environment.


When NOT to Use This Fix

  • This fix should not be used if the application relies on reading entire files into memory for processing.

Verify Fix

verify
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
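To turn "verify" into a number rather than watching top(1), record the process's peak resident set size around the upload. A minimal helper, assuming a POSIX system (ru_maxrss units differ by OS):

```python
import resource
import sys


def peak_rss_mib() -> float:
    """Peak resident set size of this process, in MiB."""
    ru_maxrss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in KiB on Linux but in bytes on macOS
    divisor = 1024 if sys.platform.startswith("linux") else 1024 * 1024
    return ru_maxrss / divisor

# run the repro upload, then print(peak_rss_mib()); on the broken version
# it tracks the file size, on the fixed version it stays roughly flat
```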


Prevention

  • Track RSS + object counts after deployments; alert on monotonic growth and GC pressure.
  • Add a long-running test that repeats the failing call path and asserts stable memory.
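The second bullet can be sketched as a regression test: repeat the call path and assert that traced allocations do not grow. Here `call_path` is a stand-in for your real failing path, and the growth budget is an arbitrary threshold to tune:

```python
import io
import tracemalloc


def call_path():
    # stand-in for the real failing path, e.g. a chunked file upload
    buf = io.BytesIO(b"x" * 2**16)
    while buf.read(2**16):
        pass


def test_memory_stays_stable(iterations=50, allowed_growth=2**20):
    tracemalloc.start()
    call_path()  # warm-up, so one-time allocations don't count as growth
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        call_path()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    assert current - baseline < allowed_growth, "memory grew across iterations"
```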

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.