
The Fix

Upgrade to version 0.13.0 or later.

Based on closed encode/httpx issue #720, fixed by PR #721 (first fixed release: 0.13.0).

Production note: Most teams hit this during upgrades or environment changes. Roll out with a canary and smoke critical endpoints (health, OpenAPI/docs) before 100%.

@@ -206,6 +206,7 @@ async def close(self) -> None:
         self.keepalive_connections.clear()
         for connection in connections:
+            self.max_connections.release()
             await connection.close()
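The mechanism behind that one-line diff can be sketched without httpx at all. The toy pool below is illustrative only (the names mirror the diff, not httpx's real internals): each request acquires a slot from a bounded semaphore, and a close() that forgets to release the slots held by keep-alive connections leaks capacity until the hard limit is exhausted.

```python
import asyncio

class ToyPool:
    """Illustrative stand-in for the httpx connection pool, not its real code."""

    def __init__(self, hard_limit: int, fixed: bool):
        self.max_connections = asyncio.BoundedSemaphore(hard_limit)
        self.keepalive_connections: list = []
        self.fixed = fixed

    async def request(self) -> None:
        # Each new connection needs a slot; when none frees up,
        # httpx raises PoolTimeout (modelled here as RuntimeError).
        try:
            await asyncio.wait_for(self.max_connections.acquire(), timeout=0.01)
        except asyncio.TimeoutError:
            raise RuntimeError("PoolTimeout: hard_limit exhausted")
        self.keepalive_connections.append(object())

    async def close(self) -> None:
        connections = list(self.keepalive_connections)
        self.keepalive_connections.clear()
        for _connection in connections:
            if self.fixed:                      # the one-line fix from PR #721
                self.max_connections.release()  # give the slot back on close

async def exhaust(fixed: bool, hard_limit: int = 3) -> int:
    # Open one connection per iteration, then close the pool, as the
    # repro's per-request `async with client` effectively does.
    pool = ToyPool(hard_limit, fixed)
    for n in range(hard_limit * 2):
        try:
            await pool.request()
        except RuntimeError:
            return n  # number of requests served before "PoolTimeout"
        await pool.close()
    return hard_limit * 2

broken = asyncio.run(exhaust(fixed=False))
patched = asyncio.run(exhaust(fixed=True))
print(broken, patched)  # -> 3 6
```

Without the release, the broken pool dies exactly at hard_limit requests; with it, close() returns capacity and the workload runs to completion.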

Why This Fix Works in Production

  • Trigger: repeated requests through a pool whose close() does not release keep-alive connections; once the number of requests reaches the pool's hard_limit, a PoolTimeout is raised.
  • Mechanism: Connections were not being released properly when closing the connection pool
  • Why the fix works: Release max_connections for keepalive connections when closing the connection pool, addressing the issue of connections not being released properly. (first fixed release: 0.13.0).
Production impact:
  • If left unfixed, long-running workers exhaust the pool after roughly hard_limit requests, and every subsequent request fails with PoolTimeout until the process restarts.

Why This Breaks in Prod

  • Connections were not being released properly when closing the connection pool
  • Production symptom (often without a traceback): connections are not closed correctly, and after a number of requests equivalent to the "hard_limit" of pool_limits, the client raises a PoolTimeout exception.

Proof / Evidence

  • GitHub issue: #720
  • Fix PR: https://github.com/encode/httpx/pull/721
  • First fixed release: 0.13.0
  • Reproduced locally: No (not executed)
  • Last verified: 2026-02-09
  • Confidence: 0.85
  • Did this fix it?: Yes (upstream fix exists)
  • Own content ratio: 0.52

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“Oh, I thought instantiating the client opened the connection pool, and you'd need to use "async with" to let the pool know you wanted a…”
@edocod1 · 2020-01-04 · source
“@florimondmanca is this truly fixed? i'm running into something that seems similar, yet i'm using version 0.13.3 which looks to contain the fix”
@zeldrinn · 2020-08-11 · source
“> only one actual connection is opened to the IP address, so pooling seems to work fine I don't think that's actually the behavior you'd…”
@florimondmanca · 2020-01-04 · confirmation · source

Failure Signature (Search String)

  • Hello. I am having an issue where it looks like connections aren't being closed correctly, and after i reach a number of requests equivalent to "hard_limit" of pool_limits, i get
  • for i in range(2500):
Copy-friendly signature
signature.txt
Failure Signature
-----------------
Hello. I am having an issue where it looks like connections aren't being closed correctly, and after i reach a number of requests equivalent to "hard_limit" of pool_limits, i get a PoolTimeout exception.
for i in range(2500):

Error Message

Signature-only (no traceback captured)
error.txt
Error Message
-------------
Hello. I am having an issue where it looks like connections aren't being closed correctly, and after i reach a number of requests equivalent to "hard_limit" of pool_limits, i get a PoolTimeout exception.
for i in range(2500):

Minimal Reproduction

repro.py
import httpx, asyncio, logging
from httpx import PoolLimits
from random import randint

queue = asyncio.Queue()
clients = [
    httpx.AsyncClient(
        http2=True,
        pool_limits=PoolLimits(soft_limit=2, hard_limit=10),
        cookies={'a': '123456789', 'b': '987654321'},
    )
]

async def worker_loop(cid, client, queue):
    while 1:
        sub_id = await queue.get()
        async with client as c:
            r = await c.get(f'https://mywebsite.dummy/submission.php?id={sub_id}')
            if r.status_code != 200:
                print(cid, f'Got status code {r.status_code} while parsing {sub_id}')
                return

async def main():
    for i in range(2500):
        await queue.put(randint(1, 80000000))
    for k, v in enumerate(clients):
        asyncio.create_task(worker_loop(k, v, queue))
    while 1:
        if queue.qsize() == 0:
            await queue.put(randint(1, 80000000))
        await asyncio.sleep(2)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.stop()

What Broke

A PoolTimeout exception is raised once the number of requests reaches the pool's hard_limit.

Why It Broke

Connections were not being released properly when closing the connection pool

Fix Options (Details)

Option A — Upgrade to fixed release (safe default, recommended)

Upgrade to version 0.13.0 or later.

When NOT to use: there is no practical reason to avoid this fix; upgrading is the complete remedy. If you are temporarily pinned below 0.13.0, avoid repeatedly closing and reopening the same client (for example, `async with client` inside a per-request loop), since each close leaks pool capacity on affected versions.

Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.

Fix reference: https://github.com/encode/httpx/pull/721

First fixed release: 0.13.0

Last verified: 2026-02-09. Validate in your environment.
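One way to gate a rollout on the fixed release is a small version check. `is_fixed` is our helper name, not an httpx API; it assumes plain MAJOR.MINOR.PATCH version strings (use packaging.version for pre-release suffixes).

```python
def is_fixed(version: str, first_fixed=(0, 13, 0)) -> bool:
    """True if `version` is at or above the first fixed release."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= first_fixed

# e.g. in CI:
#   import httpx
#   assert is_fixed(httpx.__version__), "upgrade httpx to >= 0.13.0"
print(is_fixed("0.12.1"), is_fixed("0.13.3"))  # -> False True
```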


When NOT to Use This Fix

  • None in practice: upgrading is the complete fix. If you are pinned below 0.13.0, avoid patterns that repeatedly close and reopen the same client's pool.

Verify Fix

verify
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.


Prevention

  • Add a CI check that diffs key outputs after upgrades (OpenAPI schema snapshots, JSON payload shapes, CLI output).
  • Upgrade behind a canary and run integration tests against the canary before 100% rollout.
  • Track PoolTimeout rates and in-flight connection counts after deployments; alert when requests begin stalling at the pool's hard_limit.
  • Add a long-running test that repeats the failing call path, including client close/reopen cycles, and asserts that requests keep succeeding.

Version Compatibility Table

Version             Status
0.12.x and earlier  Affected
0.13.0 and later    Fixed

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.