
The Fix

Fixes two critical issues in aiohttp's HTTP payload handling that prevented connection reuse, including buffer truncation and a timing issue in the write_bytes function.

Based on closed aio-libs/aiohttp issue #10325 · PR/commit linked

Production note: the minimal reproduction below fails deterministically, but in production the bug shows up as connection churn: no keep-alive reuse, higher request latency, and extra socket/FD consumption under load.


Why This Fix Works in Production

  • Trigger: AssertionError: defaultdict(<class 'collections.deque'>, {})
  • Mechanism: The connection is closed prematurely when sending a 0-byte payload, preventing reuse
Production impact:
  • If left unfixed, every request with an affected payload type opens a fresh connection instead of reusing a pooled one, which costs latency and sockets/FDs under load.

Why This Breaks in Prod

  • Reported on Python 3.11.1 in a real deployment; reproduced on py3.9, py3.10, py3.11, and py3.12 per the discussion below.
  • The connection is closed prematurely when sending a 0-byte payload, preventing reuse
  • Surfaces as: AssertionError: defaultdict(<class 'collections.deque'>, {})

Proof / Evidence

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“I guess if we're already closing, then we should consider the task complete. So, fix is probably just: […] Or similar.”
@Dreamsorcerer · 2025-01-15 · confirmation · source
“Reproduces on py3.9, py3.10, py3.11, and py3.12 on linux, with very minor tweaking to pass linting and to avoid racing the server start uploaded the…”
@Tjstretchalot · 2025-01-14 · repro detail · source
“This example seems to reproduce it every time”
@Tjstretchalot · 2025-01-14 · source
“I tried to write a failing test for this in https://github.com/aio-libs/aiohttp/pull/10326 ... The behavior is a bit strange as either I've got something wrong, or…”
@bdraco · 2025-01-14 · source

Failure Signature (Search String)

  • AssertionError: defaultdict(<class 'collections.deque'>, {})

Error Message

Stack trace
error.txt
======== Running on http://0.0.0.0:3003 ========
(Press CTRL+C to quit)
Traceback (most recent call last):
  File "src/main.py", line 43, in <module>
    asyncio.run(main())
  File "Python311\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "Python311\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "src\main.py", line 35, in main
    len(session._connector._conns) == 1  # fails here
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: defaultdict(<class 'collections.deque'>, {})

Minimal Reproduction

repro.py
import asyncio
import io

import aiohttp
import aiohttp.web


async def hello(_request: aiohttp.web.Request) -> aiohttp.web.Response:
    return aiohttp.web.Response(body=b"")


async def main():
    app = aiohttp.web.Application()
    app.router.add_post("/hello", hello)
    server_task = asyncio.create_task(aiohttp.web._run_app(app, port=3003))

    # Empty bytes payload: the connection is returned to the pool.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://127.0.0.1:3003/hello",
            data=b"",
            headers={"Content-Length": "0"},
        ) as response:
            response.raise_for_status()
        assert len(session._connector._conns) == 1, session._connector._conns

    # Empty file-like payload: the connection is closed instead of reused.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://127.0.0.1:3003/hello",
            data=io.BytesIO(),
            headers={"Content-Length": "0"},
        ) as response:
            response.raise_for_status()
        assert (
            len(session._connector._conns) == 1  # fails here
        ), session._connector._conns

    server_task.cancel()
    await server_task


if __name__ == "__main__":
    asyncio.run(main())

Environment

  • Python: 3.11.1

What Broke

Connections are not reused, leading to increased latency and resource consumption.

Why It Broke

The connection is closed prematurely when sending a 0-byte payload, preventing reuse
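A heavily simplified model of the race described above (an assumption about the mechanism, not aiohttp's actual internals): the empty response arrives before the body-writer task has run, so the release path sees an unfinished write and refuses to pool the connection.

```python
import asyncio

class Conn:
    """Toy stand-in for a pooled connection (not aiohttp's class)."""
    def __init__(self):
        self.writer_done = False
        self.reused = False

    def release(self):
        # Reuse is only safe once the request body writer has finished;
        # otherwise the pool must close the connection.
        if self.writer_done:
            self.reused = True

async def write_body(conn, body):
    await asyncio.sleep(0)       # the writer task is scheduled, not yet run
    conn.writer_done = True      # even a 0-byte body finishes "late"

async def request(conn, body):
    writer = asyncio.create_task(write_body(conn, body))
    # The empty response arrives immediately -- before the writer task ran --
    # so the connection is released while the write still looks unfinished.
    conn.release()
    await writer

async def main():
    conn = Conn()
    await request(conn, b"")
    return conn.reused

print(asyncio.run(main()))  # False: the connection was not returned to the pool
```

The fix referenced above reorders this timing so that a completed 0-byte write counts as finished before the connection is released.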

Fix Options (Details)

Option A — Apply the official fix

Fixes two critical issues in aiohttp's HTTP payload handling that prevented connection reuse, including buffer truncation and a timing issue in the write_bytes function.

When NOT to use: This fix should not be used if the payload is expected to be larger than the content length.
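If you cannot upgrade immediately, an interim mitigation suggested by the repro (the `data=b""` case keeps its connection, the empty `io.BytesIO()` case does not) is to avoid the file-like code path for empty bodies. A sketch; `normalize_empty_body` is an illustrative helper of our own, not an aiohttp API:

```python
import io

def normalize_empty_body(data):
    # If a seekable file-like body turns out to be empty, send b"" instead,
    # so the plain-bytes path is used and the connection stays reusable.
    if hasattr(data, "read") and hasattr(data, "seek"):
        pos = data.tell()
        if not data.read(1):
            return b""
        data.seek(pos)  # non-empty: rewind and send the original object
    return data

# usage sketch: session.post(url, data=normalize_empty_body(body))
```

This only sidesteps the empty-payload trigger; it does not address the buffer-truncation side of the bug, so treat it as a stopgap until you upgrade.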

Option D — Guard side-effects with an OnceOnly guardrail

Mitigate duplicate external side-effects under retries/timeouts/agent loops by gating the operation before calling external systems.

  • Place OnceOnly between your code/agent and real side-effects (Stripe, emails, CRM, APIs).
  • Use a stable key per side-effect (e.g., customer_id + action + idempotency_key).
  • Fail-safe: configure fail-open vs fail-closed based on blast radius and spend risk.
onceonly.py
import os

from onceonly import OnceOnly

once = OnceOnly(api_key=os.environ["ONCEONLY_API_KEY"], fail_open=True)


def handle_webhook(event_id: str):
    # Stable idempotency key per real side-effect. Use a request id /
    # job id / webhook delivery id / Stripe event id, etc.
    key = f"stripe:webhook:{event_id}"
    res = once.check_lock(key=key, ttl=3600)
    if res.duplicate:
        return {"status": "already_processed"}
    # Safe to execute the side-effect exactly once.
    handle_event(event_id)

See OnceOnly SDK

When NOT to use: Do not use this to hide logic bugs or data corruption. Use it to block duplicate external side-effects and enforce tool permissions/spend caps.

Fix reference: https://github.com/aio-libs/aiohttp/pull/10915

Last verified: 2026-02-11. Validate in your environment.


When NOT to Use This Fix

  • This fix should not be used if the payload is expected to be larger than the content length.
  • Do not use this to hide logic bugs or data corruption. Use it to block duplicate external side-effects and enforce tool permissions/spend caps.

Verify Fix

verify
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.


Prevention

  • Add a regression test that asserts connection reuse (e.g. len(session._connector._conns) == 1) after requests with empty and file-like payloads.
  • Run the minimal reproduction in CI when bumping aiohttp, so a reuse regression fails the build instead of surfacing in production.
  • Make timeouts explicit and test them (unit + integration) to avoid silent behavior changes.
  • Instrument retries (attempt count + reason) and alert on spikes to catch dependency slowdowns.
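The retry-instrumentation bullet can be sketched as follows; the names `with_retries` and `retry_reasons` are illustrative, not from any particular library:

```python
import time
from collections import Counter

retry_reasons = Counter()  # export to your metrics backend; alert on spikes


def with_retries(fn, attempts=3, base_delay=0.01):
    """Run fn with retries, recording each retry's exception type."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            retry_reasons[type(exc).__name__] += 1  # attempt count + reason
            if attempt == attempts:
                raise
            time.sleep(base_delay * attempt)  # linear backoff for the sketch
```

A spike in `retry_reasons` (for example, a sudden run of timeout types) is often the first visible sign of a dependency slowdown or, as in this bug, connection churn masquerading as flakiness.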

Version Compatibility Table

Version | Status
3.12    | Broken

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.