The Fix
Fixes two critical issues in aiohttp's HTTP payload handling that prevented connection reuse, including buffer truncation and a timing issue in the write_bytes function.
Based on closed aio-libs/aiohttp issue #10325; the fix PR is linked below.
Production note: the symptom is reduced connection reuse (extra TCP/TLS handshakes, higher latency and connection churn), not a crash, so it often goes unnoticed until traffic grows. The minimal reproduction below triggers it deterministically with a single request.
Why This Breaks in Production
- Trigger: sending a request body as an empty file-like object (e.g. `io.BytesIO()`) with `Content-Length: 0`.
- Mechanism: the connection is closed prematurely when the 0-byte payload is written, so it is never returned to the pool for reuse.
- Symptom in the reproduction: `AssertionError: defaultdict(<class 'collections.deque'>, {})` — the connector's pool is empty when it should hold one connection.
- Impact if unfixed: every affected request pays a fresh connection setup; under load this shows up as higher latency and resource consumption rather than hard errors.
- Reported on Python 3.11.1; the issue thread reproduces it on Python 3.9 through 3.12.
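To check whether this is affecting a running service, client tracing can count created vs. reused connections. The sketch below uses aiohttp's documented trace signals (`on_connection_reuseconn`, `on_connection_create_end`); the shape of the `stats` dict is my own convention, not part of aiohttp.

```python
import aiohttp


def make_trace_config(stats: dict) -> aiohttp.TraceConfig:
    """Count new vs. reused connections; a reuse count stuck at zero
    while creates keep climbing matches the symptom described above."""
    trace = aiohttp.TraceConfig()

    async def on_reuse(session, ctx, params):
        stats["reused"] = stats.get("reused", 0) + 1

    async def on_create_end(session, ctx, params):
        stats["created"] = stats.get("created", 0) + 1

    trace.on_connection_reuseconn.append(on_reuse)
    trace.on_connection_create_end.append(on_create_end)
    return trace


# Usage sketch:
#   stats = {}
#   session = aiohttp.ClientSession(trace_configs=[make_trace_config(stats)])
```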
Proof / Evidence
- GitHub issue: aio-libs/aiohttp#10325
- Fix PR: https://github.com/aio-libs/aiohttp/pull/10915
- Reproduced locally: No (not executed)
- Last verified: 2026-02-11
- Confidence: 0.70
- Did this fix it?: Yes (an upstream fix exists and is merged)
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“I guess if we're already closing, then we should consider the task complete. So, fix is probably just: […] Or similar.”
“Reproduces on py3.9, py3.10, py3.11, and py3.12 on linux, with very minor tweaking to pass linting and to avoid racing the server start uploaded the…”
“This example seems to reproduce it every time”
“I tried to write a failing test for this in https://github.com/aio-libs/aiohttp/pull/10326 ... The behavior is a bit strange as either I've got something wrong, or…”
Failure Signature (Search String)
- AssertionError: defaultdict(<class 'collections.deque'>, {})
Error Message
-------------

```text
======== Running on http://0.0.0.0:3003 ========
(Press CTRL+C to quit)
Traceback (most recent call last):
  File "src/main.py", line 43, in <module>
    asyncio.run(main())
  File "Python311\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "Python311\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "src\main.py", line 35, in main
    len(session._connector._conns) == 1  # fails here
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: defaultdict(<class 'collections.deque'>, {})
```
Minimal Reproduction

```python
import asyncio
import contextlib
import io

import aiohttp
import aiohttp.web


async def hello(_request: aiohttp.web.Request) -> aiohttp.web.Response:
    return aiohttp.web.Response(body=b"")


async def main():
    app = aiohttp.web.Application()
    app.router.add_post("/hello", hello)
    server_task = asyncio.create_task(aiohttp.web._run_app(app, port=3003))

    # Baseline: an empty bytes payload leaves the connection in the pool.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://127.0.0.1:3003/hello",
            data=b"",
            headers={"Content-Length": "0"},
        ) as response:
            response.raise_for_status()
        assert len(session._connector._conns) == 1, session._connector._conns

    # Bug: an empty file-like payload causes the connection to be closed.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://127.0.0.1:3003/hello",
            data=io.BytesIO(),
            headers={"Content-Length": "0"},
        ) as response:
            response.raise_for_status()
        assert (
            len(session._connector._conns) == 1  # fails here
        ), session._connector._conns

    server_task.cancel()
    with contextlib.suppress(asyncio.CancelledError):
        await server_task


if __name__ == "__main__":
    asyncio.run(main())
```
Environment
- Python: 3.11.1
What Broke
Connections are not reused, leading to increased latency and resource consumption.
Why It Broke
When the request body is an empty file-like payload, the payload write path closes the connection before it can be marked reusable. Per the fix description, the underlying causes were buffer truncation and a timing issue in the `write_bytes` function.
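Until an upgrade is possible, one client-side workaround sketch (my own, not from the issue thread) is to materialize empty seekable file-like payloads into `b""`, which takes the unaffected code path shown in the baseline request of the reproduction:

```python
import io


def materialize_if_empty(data):
    """Workaround sketch: if a seekable file-like payload is empty,
    return b"" instead, so aiohttp takes the bytes code path that
    keeps the connection reusable. Non-file-like and non-empty
    payloads are passed through unchanged."""
    if hasattr(data, "seek") and hasattr(data, "read"):
        pos = data.tell()
        data.seek(0, io.SEEK_END)
        empty = data.tell() == 0
        data.seek(pos)  # restore the caller's position
        if empty:
            return b""
    return data
```

Usage: `session.post(url, data=materialize_if_empty(payload), ...)`. This only papers over the symptom for the empty-payload case; Option A remains the real fix.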
Fix Options (Details)
Option A — Apply the official fix
Fixes two critical issues in aiohttp's HTTP payload handling that prevented connection reuse, including buffer truncation and a timing issue in the write_bytes function.
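One way to enforce Option A in CI is a startup version gate. The comparison helper below is self-contained; the first release containing the fix is an assumption on my part — confirm it against the PR's milestone before pinning.

```python
def version_at_least(installed: str, min_fixed: str) -> bool:
    """Compare dotted release versions numerically.
    Pre-release tags (a/b/rc) are not handled by this sketch."""
    def key(version: str) -> tuple:
        return tuple(int(part) for part in version.split(".")[:3])
    return key(installed) >= key(min_fixed)


# ASSUMPTION: verify the first fixed release against
# https://github.com/aio-libs/aiohttp/pull/10915 before relying on it.
ASSUMED_FIRST_FIXED = "3.12.0"

# Gate at startup, e.g.:
#   import aiohttp
#   assert version_at_least(aiohttp.__version__, ASSUMED_FIRST_FIXED)
```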
Option D — Guard side-effects with OnceOnly
Mitigate duplicate external side-effects under retries/timeouts/agent loops by gating the operation before calling external systems.
- Place OnceOnly between your code/agent and real side-effects (Stripe, emails, CRM, APIs).
- Use a stable key per side-effect (e.g., customer_id + action + idempotency_key).
- Fail-safe: configure fail-open vs fail-closed based on blast radius and spend risk.
Example snippet (illustrative; the `OnceOnly` client API shown follows the vendor's example and is unrelated to aiohttp itself):

```python
import os

from onceonly import OnceOnly

once = OnceOnly(api_key=os.environ["ONCEONLY_API_KEY"], fail_open=True)


def process_stripe_event(event_id: str) -> dict:
    # Stable idempotency key per real side-effect.
    # Use a request id / job id / webhook delivery id / Stripe event id, etc.
    key = f"stripe:webhook:{event_id}"
    res = once.check_lock(key=key, ttl=3600)
    if res.duplicate:
        return {"status": "already_processed"}
    # Safe to execute the side-effect exactly once.
    handle_event(event_id)  # your business logic
    return {"status": "processed"}
```
Fix reference: https://github.com/aio-libs/aiohttp/pull/10915
Last verified: 2026-02-11. Validate in your environment.
When NOT to Use This Fix
- Option A: per the fix notes, do not apply it if the payload is expected to be larger than the declared content length.
- Option D: do not use OnceOnly to hide logic bugs or data corruption; use it to block duplicate external side-effects and enforce tool permissions/spend caps.
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Prevention
- Add a regression test that POSTs an empty file-like payload and asserts the connection is returned to the pool afterwards.
- Track connection reuse in production (e.g. via `aiohttp.TraceConfig`'s connection signals) and alert when the reuse ratio drops.
- Make timeouts explicit and test them (unit + integration) to avoid silent behavior changes.
- Instrument retries (attempt count + reason) and alert on spikes to catch dependency slowdowns.
Version Compatibility Table
| Version | Status |
|---|---|
| 3.12 | Broken |
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.