The Fix
pip install urllib3==1.25
Based on closed urllib3/urllib3 issue #644; the fix PR and first fixed release are linked below.
Production note: Watch p95/p99 latency and retry volume; timeouts can turn into retry storms and duplicate side-effects.
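The retry-storm warning above can be bounded at the client side. A sketch using urllib3's `Retry` with a hard retry cap and exponential backoff, mounted into a requests session; the specific numbers and status codes are illustrative choices, not values taken from this issue:

```python
# Sketch: cap retry amplification with a bounded Retry policy and
# exponential backoff (assumes requests and urllib3 are installed).
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(
    total=3,                          # hard cap on retries per request
    backoff_factor=0.5,               # exponential backoff between attempts
    status_forcelist=(502, 503, 504), # only retry transient server errors
)
session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=retry))
session.mount('https://', HTTPAdapter(max_retries=retry))
```

Bounding `total` and adding backoff keeps a burst of timeouts from multiplying into a storm of duplicate requests against an already struggling upstream.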
@@ -12,6 +12,9 @@ dev (master)
compatible for now, but please migrate). (Issue #640)
+* Fix pools not getting replenished when an error occurs during a
+ request using ``release_conn=False``. (Issue #644)
+
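The `release_conn=False` flag named in that changelog entry keeps the connection checked out of the pool so the caller can stream the body; the caller must then release it explicitly, and before the fix an error on this path leaked the pool slot. A minimal sketch of the pattern against a throwaway local server (the in-process server is purely for illustration):

```python
# Sketch of the release_conn=False pattern: the connection stays checked
# out of the pool until release_conn() is called explicitly.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import urllib3

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

pool = urllib3.HTTPConnectionPool("127.0.0.1", server.server_address[1], maxsize=1)
resp = pool.urlopen("GET", "/", release_conn=False, preload_content=False)
body = resp.read()
resp.release_conn()   # without this, the single pool slot stays checked out
server.shutdown()
```

If an exception fires between `urlopen` and `release_conn()`, the slot is lost on affected versions, which is exactly the leak the fix replenishes.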
# Setting pool_maxsize=1 and max_retries=1 makes this program hang on the
# first retry attempt (failed connection or read timeout).
# To watch the leak instead: set pool_maxsize=4, add trace statements in
# connectionpool.py to print the queue size in _get_conn and _put_conn,
# and watch it slowly decrease as timeouts and retries accumulate.
import requests

session = requests.Session()
adapter = requests.adapters.HTTPAdapter(pool_connections=1, pool_maxsize=1,
                                        pool_block=True, max_retries=1)
session.mount('http://', adapter)
# e.g. session.get('http://10.255.255.1/', timeout=1)  # hypothetical unroutable
# address used here to force a connect timeout
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Option A — Upgrade to fixed release
pip install urllib3==1.25
When NOT to use: Do not use this fix if connections need to be retained after errors for further processing.
Why This Fix Works in Production
- Trigger: an error (connection failure or read timeout) occurs during a request made with ``release_conn=False``
- Mechanism: the checked-out connection is never returned to the pool, so each failure permanently shrinks the pool until it is exhausted
- Why the fix works: on error the pool is replenished with a fresh slot, so requests using ``release_conn=False`` no longer leak pool capacity (first fixed release: 1.25).
- If left unfixed, the pool eventually drains completely: with ``pool_block=True`` requests hang indefinitely, and without blocking the pool raises errors or over-allocates connections beyond ``pool_maxsize``.
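The replenishment idea from the fix discussion (put ``None`` back on error; treat a ``None`` slot as "create a fresh connection") can be sketched with a plain queue. This is a simplified model for explanation, not urllib3's actual implementation:

```python
# Simplified model of pool replenishment: the pool is a fixed-size queue,
# None slots mean "no live connection yet, create one on demand".
import queue

class MiniPool:
    def __init__(self, maxsize):
        self.pool = queue.LifoQueue(maxsize)
        for _ in range(maxsize):
            self.pool.put(None)          # start with empty slots

    def _new_conn(self):
        return object()                  # stand-in for a real socket/connection

    def get_conn(self):
        conn = self.pool.get(block=True, timeout=1)
        return conn if conn is not None else self._new_conn()

    def put_conn(self, conn):
        self.pool.put(conn)              # return a healthy connection

    def discard_conn(self, conn):
        # The fix: on error, drop the broken connection but replenish the
        # slot with None so the pool never shrinks.
        self.pool.put(None)

pool = MiniPool(1)
c = pool.get_conn()
pool.discard_conn(c)         # error path: slot is replenished
c2 = pool.get_conn()         # succeeds; without replenishment this would block
```

The buggy behavior is simply `discard_conn` doing nothing: after one error on a size-1 pool, the next `get_conn` blocks forever.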
Why This Breaks in Prod
- Connections are not returned to the pool after timeout errors, leading to pool exhaustion
- Production symptom (often without a traceback): requests hang or stall as the pool drains; the issue title reads "Connection pool exhausted when connection failures occur, should refill with empty connections"
Proof / Evidence
- GitHub issue: #644
- Fix PR: https://github.com/urllib3/urllib3/pull/647
- First fixed release: 1.25
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.95
- Did this fix it?: Yes (upstream fix exists)
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“Hello again! I think we need to be sure that we close the connection before we return them to the pool. Otherwise we run the…”
“We should be putting a None into the pool if we want to discard the connection. That will be replaced with a fresh connection when…”
“@shazow's idea is even better than mine.”
“@jlatherfold Is this something you'd be interested in working on? Producing a failing test would be the first step. Make a ConnectionPool with a small…”
Failure Signature (Search String)
- Connection pool exhausted when connection failures occur, should refill with empty connections
- I think we need to be sure that we close the connection before we return them to the pool. Otherwise we run the risk of attempting to re-use a live connection that has timed out, which will end extremely poorly for us.
Error Message
-------------
Signature-only (no traceback captured):
Connection pool exhausted when connection failures occur, should refill with empty connections
Minimal Reproduction
# Setting pool_maxsize=1 and max_retries=1 makes this program hang on the
# first retry attempt (failed connection or read timeout).
# To watch the leak instead: set pool_maxsize=4, add trace statements in
# connectionpool.py to print the queue size in _get_conn and _put_conn,
# and watch it slowly decrease as timeouts and retries accumulate.
import requests

session = requests.Session()
adapter = requests.adapters.HTTPAdapter(pool_connections=1, pool_maxsize=1,
                                        pool_block=True, max_retries=1)
session.mount('http://', adapter)
# e.g. session.get('http://10.255.255.1/', timeout=1)  # hypothetical unroutable
# address used here to force a connect timeout
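Instead of patching trace statements into connectionpool.py, an alternative way to observe pool activity is to turn on urllib3's own debug logging (`urllib3` is the library's logger name):

```python
# Enable urllib3 debug logging to watch connections being created, reused,
# and retried without editing library source.
import logging

logging.basicConfig(format="%(asctime)s %(name)s %(message)s")
logging.getLogger("urllib3").setLevel(logging.DEBUG)
```

With this enabled, a healthy run shows connections being reused, while the bug shows a new connection being started after every timeout until the pool is empty.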
What Broke
The application hangs due to exhausted connection pool after multiple timeout errors.
Why It Broke
Connections are not returned to the pool after timeout errors, leading to pool exhaustion
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
pip install urllib3==1.25
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Fix reference: https://github.com/urllib3/urllib3/pull/647
First fixed release: 1.25
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- Do not use this fix if connections need to be retained after errors for further processing.
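If you are pinned below 1.25 and hit exhaustion in the meantime, one illustrative mitigation (my sketch, not a workaround from the upstream thread; `make_session` and the broad exception choice are assumptions) is to rebuild the session, and therefore its pools, when a request fails at the connection level:

```python
# Hypothetical mitigation for versions pinned below 1.25: rebuild the
# session on connection-level failure, trading pool warmth for liveness.
import requests

def make_session():
    session = requests.Session()
    adapter = requests.adapters.HTTPAdapter(pool_connections=1, pool_maxsize=4,
                                            pool_block=True, max_retries=0)
    session.mount('http://', adapter)
    return session

def get_with_session_reset(url, session, **kwargs):
    """Try once; on failure, drop the (possibly drained) pools and retry."""
    try:
        return session.get(url, **kwargs), session
    except requests.exceptions.RequestException:
        fresh = make_session()   # old pools, including leaked slots, are discarded
        return fresh.get(url, **kwargs), fresh
```

This loses connection reuse on every failure, so it is only a stopgap until the upgrade lands.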
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Prevention
- Add a CI test that exercises the retry/timeout path against a deliberately failing endpoint and asserts the client recovers (no hang, pool size restored).
- Upgrade behind a canary and run integration tests against the canary before 100% rollout.
- Add a stress test that runs high-concurrency workloads and fails on thread dumps / blocked locks.
- Enable watchdog dumps in prod (faulthandler, thread dump endpoint) to capture deadlocks quickly.
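The watchdog-dump suggestion above needs only the standard library. A sketch using faulthandler's timed traceback dump (the 30-second interval is an arbitrary choice):

```python
# Periodically dump all thread tracebacks so a wedged process leaves
# evidence in the logs; standard library only.
import faulthandler
import sys

# repeat=True re-dumps every interval; exit=False keeps the process alive.
faulthandler.dump_traceback_later(30, repeat=True, file=sys.stderr)

# At clean shutdown, stop the watchdog:
# faulthandler.cancel_dump_traceback_later()
```

If this bug bites, the dump will show request threads blocked inside the pool's queue `get`, which is a much faster diagnosis than a silent hang.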
Version Compatibility Table
| Version | Status |
|---|---|
| < 1.25 | Affected |
| >= 1.25 | Fixed |
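A quick runtime guard against regressing below the fixed release (naive two-part numeric parse; assumes standard version strings like "1.24.3" or "2.2.1"):

```python
# Guard: fail fast if the installed urllib3 predates the 1.25 fix.
import urllib3

def has_pool_replenish_fix(version):
    parts = version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) >= (1, 25)

assert has_pool_replenish_fix(urllib3.__version__), (
    "urllib3 %s predates the pool-replenishment fix (need >= 1.25)"
    % urllib3.__version__
)
```

Dropping this assert into application startup (or a CI smoke test) catches an accidental downgrade before it reaches production.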
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.