
The Fix

Upgrade to version 0.7.5 or later.

Based on closed encode/httpx issue #1414 (fix PR/commit linked under Proof / Evidence below).

Production note: This tends to surface only under concurrency. Reproduce with load tests and watch for lock contention/cancellation paths.


Why This Fix Works in Production

  • Trigger: [2020-12-05 17:00:10] [WORKER f41f826]: <GET> Error making request to 'https://www.redacted.com/site/redacted.html'. Max outbound streams is 100, 100 open
  • Mechanism: The HTTP/2 client does not correctly track how many outbound streams are open, so once the server's limit (here 100) is reached, new requests error out instead of waiting for a free stream slot
  • Why the fix works: per the issue thread, the fixed release corrects the client's accounting of open outbound streams so that requests wait for a free stream slot instead of erroring once the server's limit is reached (first fixed release: 0.7.5).
Production impact:
  • If left unfixed, failures can be intermittent under concurrency (hard to reproduce; shows up as sporadic 5xx/timeouts).

Why This Breaks in Prod

  • The HTTP/2 client keeps opening outbound streams past the server's advertised limit; once 100 streams are in flight, the next request fails instead of waiting for a slot
  • Surfaces as: [2020-12-05 17:00:10] [WORKER f41f826]: <GET> Error making request to 'https://www.redacted.com/site/redacted.html'. Max outbound streams is 100, 100 open

Proof / Evidence

  • GitHub issue: #1414
  • Fix PR: https://github.com/encode/httpcore/pull/440
  • First fixed release: 0.7.5
  • Reproduced locally: No (not executed)
  • Last verified: 2026-02-09
  • Confidence: 0.70
  • Did this fix it?: Yes (upstream fix exists)
  • Own content ratio: 0.30

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“@valiant1x I pushed https://github.com/encode/httpcore/pull/253 with a naive attempt at resolving this based on my previous answer. You can test it by installing HTTPCore from the…”
@florimondmanca · 2020-12-12 · confirmation
“@valiant1x Thanks for opening this, could you perhaps share the following..”
@florimondmanca · 2020-12-05 · repro detail
“@valiant1x When you write « queue mode », do you mean the server returning 503 Service Unavailable responses, which causes your custom get method to…”
@florimondmanca · 2020-12-05
“@florimondmanca I think the internal value of self.max_streams_semaphore dose not represent the actual value of self.h2_state.open_outbound_streams in AsyncHTTP2Connection”
@kice · 2021-05-26

Failure Signature (Search String)

  • [2020-12-05 17:00:10] [WORKER f41f826]: <GET> Error making request to 'https://www.redacted.com/site/redacted.html'. Max outbound streams is 100, 100 open
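For triage, a fixed-substring match is enough to pull this signature out of mixed logs. A minimal sketch (the file path in the usage comment is a placeholder, not a real location):

```python
SIGNATURE = "Max outbound streams is"

def find_stream_limit_errors(lines):
    """Return log lines that contain the stream-limit failure signature."""
    return [line.rstrip("\n") for line in lines if SIGNATURE in line]

# Usage sketch (hypothetical path):
# with open("/var/log/app/worker.log") as f:
#     for hit in find_stream_limit_errors(f):
#         print(hit)
```

A plain substring avoids regex escaping issues with the brackets and quotes in the log line.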

Error Message

Stack trace
error.txt
Error Message
-------------
[2020-12-05 17:00:10] [WORKER f41f826]: <GET> Error making request to 'https://www.redacted.com/site/redacted.html'. Max outbound streams is 100, 100 open
  File "C:\Users\valiant\Documents\LuckySuite\Client\TaskServer\modules\plugin_LuckyScraper.py", line 149, in request
    res = await super().request(method, url, **kwargs)
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpx\_client.py", line 1371, in request
    response = await self.send(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpx\_client.py", line 1406, in send
    response = await self._send_handling_auth(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpx\_client.py", line 1444, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpx\_client.py", line 1476, in _send_handling_redirects
    response = await self._send_single_request(request, timeout)
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpx\_client.py", line 1502, in _send_single_request
    (status_code, headers, stream, ext,) = await transport.arequest(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpcore\_async\http_proxy.py", line 124, in arequest
    return await self._tunnel_request(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\s ... (truncated) ...
Stack trace
error.txt
Error Message
-------------
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpx\_exceptions.py", line 326, in map_exceptions
    yield
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpx\_client.py", line 1502, in _send_single_request
    (status_code, headers, stream, ext,) = await transport.arequest(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpcore\_async\http_proxy.py", line 124, in arequest
    return await self._tunnel_request(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpcore\_async\http_proxy.py", line 258, in _tunnel_request
    (status_code, headers, stream, ext) = await connection.arequest(
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpcore\_async\connection.py", line 106, in arequest
    return await self.connection.arequest(method, url, headers, stream, ext)
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpcore\_async\http2.py", line 119, in arequest
    return await h2_stream.arequest(method, url, headers, stream, ext)
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpcore\_async\http2.py", line 292, in arequest
    await self.send_headers(method, url, headers, has_body, timeout)
  File "C:\Users\valiant\Documents\LuckySuite\venv\lib\site-packages\httpcore\_async\http2.py", line 330, in send_headers
    await self.connec ... (truncated) ...

Minimal Reproduction

repro.py
import asyncio
from collections import OrderedDict

import httpx

class LuckyScraper(httpx.AsyncClient):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    async def get(self, url: str, **kwargs) -> object:
        return await self.request("GET", url, **kwargs)

    async def request(self, method: str, url: str, **kwargs) -> object:
        attempts = 0
        retries = 10
        while attempts < retries:
            res = await super().request(method, url, **kwargs)
            if res and not res.is_error:
                return res
            attempts += 1
            await asyncio.sleep(5)

sslCtx = getCustomSslCtx()  # user-defined SSL context helper (not shown in the issue)
proxies = { ... }

async with LuckyScraper(verify=sslCtx, proxies=proxies, timeout=20, http2=True) as client:
    headers = OrderedDict([
        ('accept', '*/*'),
        # other headers
    ])
    response = await client.get(url, headers=headers)

What Broke

Requests start failing once 100 concurrent outbound HTTP/2 streams are open on a connection: instead of queueing, the client raises the 'Max outbound streams is 100, 100 open' error.

Why It Broke

The client's accounting of open outbound streams can drift from the number of streams actually open on the HTTP/2 connection (see the `max_streams_semaphore` observation in the Discussion), so under load it tries to open a stream beyond the server's limit and errors out instead of waiting.
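The failure mode can be modelled without httpx at all: when stream accounting hits the server's limit, the buggy path raises instead of waiting. A stdlib-only sketch (all class and error names here are illustrative stand-ins, not httpcore internals):

```python
import asyncio

MAX_OUTBOUND_STREAMS = 100  # server-advertised HTTP/2 concurrent-stream limit

class StreamLimitError(Exception):
    pass

class BrokenConnection:
    """Models the buggy behaviour: error out when the limit is hit, never wait."""

    def __init__(self):
        self.open_streams = 0

    async def request(self):
        if self.open_streams >= MAX_OUTBOUND_STREAMS:
            raise StreamLimitError(
                f"Max outbound streams is {MAX_OUTBOUND_STREAMS}, "
                f"{self.open_streams} open"
            )
        self.open_streams += 1
        try:
            await asyncio.sleep(0.01)  # pretend the request is in flight
        finally:
            self.open_streams -= 1

async def main():
    conn = BrokenConnection()
    results = await asyncio.gather(
        *(conn.request() for _ in range(150)), return_exceptions=True
    )
    errors = [r for r in results if isinstance(r, StreamLimitError)]
    print(f"{len(errors)} of 150 requests failed")

asyncio.run(main())
```

Firing 150 concurrent tasks against a 100-stream cap makes the burst past the limit fail, mirroring the sporadic-under-load behaviour described above.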

Fix Options (Details)

Option A — Upgrade to fixed release (safe default, recommended)

Upgrade to version 0.7.5 or later (the fix PR linked below is against encode/httpcore).

When NOT to use: This fix is not applicable if the server's stream limits cannot be adjusted.

Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
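To keep the requirement from silently regressing, a startup version gate can fail fast on affected releases. This sketch assumes the fix shipped in the `httpcore` package (as the PR link in this document suggests); `parse_version` is a deliberately naive dotted-version parser, not a packaging-grade one:

```python
from importlib.metadata import PackageNotFoundError, version

FIRST_FIXED = (0, 7, 5)  # first fixed release, per this document

def parse_version(v: str) -> tuple:
    """Naive version parse; handles plain 'X.Y.Z' strings only."""
    return tuple(int(part) for part in v.split(".")[:3])

def assert_fixed_release(package: str = "httpcore") -> None:
    """Raise at startup if the installed release predates the stream-limit fix."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        raise RuntimeError(f"{package} is not installed")
    if parse_version(installed) < FIRST_FIXED:
        raise RuntimeError(
            f"{package} {installed} predates the stream-limit fix; upgrade to 0.7.5+"
        )
```

In production, `packaging.version.Version` is the robust way to compare versions; the tuple comparison here only covers plain numeric `X.Y.Z` strings.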

Option C — Workaround (temporary)

From the issue thread (the excerpt is truncated at the start): "…to this issue, but I have found cookie importing and exporting to be flaky so I want to avoid this. If you instantiate httpx with a cookie that the server then updates, it seems difficult to avoid duplicate cookies. Duplicate cookies are then sent in subsequent requests and also crash `dict()` casting of the client cookie jar. I have encountered this a few times. Setting path and domain helps but does not cover every situation."

When NOT to use: This fix is not applicable if the server's stream limits cannot be adjusted.

Use only if you cannot change versions today. Treat this as a stopgap and remove once upgraded.
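If neither upgrading nor the cookie juggling above is viable, another stopgap (an assumption on my part, not from the thread) is to cap client-side concurrency below the server's 100-stream limit so the buggy code path is never reached. A stdlib-only sketch; `send` stands in for the real `client.request` call, and the default cap of 90 is an arbitrary safety margin:

```python
import asyncio

class ConcurrencyCap:
    """Gate an awaitable-returning callable so at most `limit` calls run at once."""

    def __init__(self, limit: int = 90):  # arbitrary margin under the 100-stream cap
        self._sem = asyncio.Semaphore(limit)

    async def run(self, send, *args, **kwargs):
        # Wait for a free slot before issuing the request, then release it after.
        async with self._sem:
            return await send(*args, **kwargs)

# Usage sketch (hypothetical):
# cap = ConcurrencyCap(90)
# response = await cap.run(client.request, "GET", url)
```

Treat this as a stopgap like the option above: remove it once the upgrade lands, since it also throttles legitimate parallelism.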

Fix reference: https://github.com/encode/httpcore/pull/440

First fixed release: 0.7.5

Last verified: 2026-02-09. Validate in your environment.


When NOT to Use This Fix

  • This fix is not applicable if the server's stream limits cannot be adjusted.

Verify Fix

verify
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
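The before/after behaviour to expect from that re-run can be modelled in isolation: on the fixed version, requests beyond the stream limit queue for a free slot instead of erroring. All names below are illustrative stand-ins, not httpcore internals:

```python
import asyncio

MAX_OUTBOUND_STREAMS = 100  # server-advertised HTTP/2 concurrent-stream limit

class FixedConnection:
    """Models the fixed behaviour: wait for a free stream slot instead of raising."""

    def __init__(self):
        self._slots = asyncio.Semaphore(MAX_OUTBOUND_STREAMS)

    async def request(self):
        async with self._slots:         # blocks once 100 streams are in flight
            await asyncio.sleep(0.001)  # pretend the request is in flight
            return "ok"

async def main():
    conn = FixedConnection()
    results = await asyncio.gather(*(conn.request() for _ in range(150)))
    print(f"{len(results)} requests completed, 0 errors")

asyncio.run(main())
```

On a broken install the equivalent 150-request burst produces 'Max outbound streams' failures; after the fix all requests should complete.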


Prevention

  • Add a CI check that diffs key outputs after upgrades (OpenAPI schema snapshots, JSON payload shapes, CLI output).
  • Upgrade behind a canary and run integration tests against the canary before 100% rollout.
  • Load-test new client versions with more parallel requests than the server's HTTP/2 stream limit (here, more than 100) so stream-exhaustion bugs surface before production.

Version Compatibility Table

| Version | Status |
| ------- | ------ |
| < 0.7.5 | Affected |
| ≥ 0.7.5 | Fixed |

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.