
The Fix

The fix optimizes connection timeout handling: the client no longer creates a timeout context when a pooled connection is immediately available for reuse.

Based on the closed aio-libs/aiohttp issue #9598 (fixed by PR #9600, linked below).

Production note: Most teams hit this during upgrades or environment changes. Roll out with a canary and smoke-test critical endpoints (health, OpenAPI/docs) before going to 100%.

@@ -0,0 +1,3 @@
+Improved performance of the connector when a connection can be reused -- by :user:`bdraco`.
+
+If ``BaseConnector.connect`` has been subclassed and replaced with custom logic, the ``ceil_timeout`` must be added.

Why This Fix Works in Production

  • Trigger: any workload that reuses pooled keep-alive connections (issue title: "Explore moving the connect ceil_timeout in the client inside the Connector connect")
  • Mechanism: a connection timeout context was created on every request, even when a connection was immediately available for reuse
Production impact:
  • If left unfixed, every request served over a reused connection pays avoidable timeout-setup overhead (~13.5% of request time in the issue's benchmark).

Why This Breaks in Prod

  • The connection timeout context was created on every request, even when a connection was immediately available for reuse
  • Production symptom (often without a traceback): elevated per-request latency under load, with no errors or timeouts logged

Proof / Evidence

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“Avoiding the timeout saves ~13.5% of the request time. Simple test was to remove async with ceil_timeout and re-run the client benchmark”
@bdraco · 2024-10-31 · source
“Here is the benchmark script. Its cobbled together and iterated on from many different places so its a bit messy, but good enough for a…”
@bdraco · 2024-10-31 · source
“I think its possible someone might subclass connect and call super() but it seems very unlikely they would reimplement connect given how many internals we…”
@bdraco · 2024-10-31 · source

Failure Signature (Search String)

  • Explore moving the connect ceil_timeout in the client inside the Connector connect
  • Avoiding the timeout saves ~13.5% of the request time.
Copy-friendly signature
signature.txt
Failure Signature
-----------------
Explore moving the connect ceil_timeout in the client inside the Connector connect
Avoiding the timeout saves ~13.5% of the request time.

Error Message

Signature-only (no traceback captured)
error.txt
Error Message
-------------
Explore moving the connect ceil_timeout in the client inside the Connector connect
Avoiding the timeout saves ~13.5% of the request time.

Minimal Reproduction

repro.py

import asyncio
import sys
import time
from typing import Any, Coroutine, Iterator

import matplotlib.pyplot as plt
import uvloop

import aiohttp
from aiohttp import web

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

PORT = 8082
URL = f"http://localhost:{PORT}/req"
RESP = "a" * 2000
REQUESTS = 10000
CONCURRENCY = 20


def run_web_server():
    async def handle(_request):
        return web.Response(text=RESP)

    app = web.Application()
    app.add_routes([web.get("/req", handle)])
    web.run_app(app, host="localhost", port=PORT)


def duration(start: float) -> int:
    return int((time.monotonic() - start) * 1000)


async def run_requests(axis: plt.Axes):
    async def gather_limited_concurrency(coros: Iterator[Coroutine[Any, Any, Any]]):
        sem = asyncio.Semaphore(CONCURRENCY)

        async def coro_with_sem(coro):
            async with sem:
                return await coro

        return await asyncio.gather(*(coro_with_sem(c) for c in coros))

    async def aiohttp_get(session: aiohttp.ClientSession, timings: list[int]):
        start = time.monotonic()
        async with session.request("GET", URL) as res:
            assert len(await res.read()) == len(RESP)
            assert res.status == 200, f"status={res.status}"
        timings.append(duration(start))

    async with aiohttp.ClientSession() as session:
        # warmup
        await asyncio.gather(*(aiohttp_get(session, []) for _ in range(REQUESTS)))

        timings = []
        start = time.monotonic()
        await gather_limited_concurrency(
            aiohttp_get(session, timings) for _ in range(REQUESTS)
        )
        axis.plot(
            [*range(REQUESTS)], timings, label=f"aiohttp (tot={duration(start)}ms)"
        )


def main(mode: str):
    assert mode in {"server", "client"}, f"invalid mode: {mode}"

    if mode == "server":
        run_web_server()
    else:
        fig, ax = plt.subplots()
        asyncio.run(run_requests(ax))
        plt.legend(loc="upper left")
        ax.set_xlabel("# request")
        ax.set_ylabel("[ms]")
        plt.show()

    print("DONE", flush=True)


if __name__ == "__main__":
    assert len(sys.argv) == 2, f"Usage: {sys.argv[0]} server|client"
    main(sys.argv[1])

Run python repro.py server in one terminal, then python repro.py client in another; the client plots per-request latency and the total elapsed time.

What Broke

Increased request time due to unnecessary timeout handling during connection reuse.

Why It Broke

The client wrapped every connection acquisition in a timeout (ceil_timeout), even when a pooled connection was immediately available for reuse, so each request paid for creating and tearing down a timer it never needed.

Fix Options (Details)

Option A — Apply the official fix

Optimizes the connection timeout handling by avoiding unnecessary timeout creation when a connection is immediately available for reuse.

When NOT to use: This fix does not apply if you have replaced ``BaseConnector.connect`` with custom logic that does not handle timeouts; after upgrading, such subclasses must add the ``ceil_timeout`` themselves.

Fix reference: https://github.com/aio-libs/aiohttp/pull/9600

Last verified: 2026-02-09. Validate in your environment.


When NOT to Use This Fix

  • Not applicable if you have subclassed ``BaseConnector.connect`` and replaced it with custom logic that does not handle timeouts; after upgrading, such subclasses must apply the ``ceil_timeout`` on their own.

Verify Fix

verify
Run the minimal reproduction (repro.py above) against your current aiohttp version, upgrade to a release containing the fix, and run it again; per-request timings for reused connections should drop.


Prevention

  • Make timeouts explicit and test them (unit + integration) to avoid silent behavior changes.
  • Instrument retries (attempt count + reason) and alert on spikes to catch dependency slowdowns.

Version Compatibility Table

Version  Status
-------  ------
1.1      Broken

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.