The Fix
Upgrade to version 0.13.0 or later.
Based on the closed encode/httpx issue #527; the fixing PR and first fixed release are linked under Proof / Evidence below.
Production note: this bug tends to surface only under concurrency. Reproduce it with load tests and watch the lock-contention and cancellation paths.
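One way to enforce that version floor at startup is a small guard. A hedged sketch — the three-part version parse is an assumption (it ignores pre-release tags), and a literal string stands in for `httpx.__version__` so the snippet runs without httpx installed:

```python
def version_tuple(v: str) -> tuple:
    """Parse 'MAJOR.MINOR.PATCH' into a comparable tuple (pre-release tags not handled)."""
    return tuple(int(part) for part in v.split(".")[:3])

# In a real application you would compare httpx.__version__ instead.
installed = "0.13.0"
assert version_tuple(installed) >= version_tuple("0.13.0"), \
    "httpx < 0.13.0 has an HTTP/2 concurrent-read race (issue #527); upgrade"
```

For anything beyond a sketch, prefer `packaging.version.Version`, which handles pre-release and post-release segments correctly.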
The upstream fix adds a read lock to the shared HTTP/2 stream — diff excerpt:
@@ -51,6 +51,7 @@ def __init__(
self.stream_writer = stream_writer
self.timeout = timeout
+ self.read_lock = asyncio.Lock()
self._inner: typing.Optional[SocketStream] = None
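The one-line change above serializes reads on the shared socket. A minimal sketch of the same pattern — a toy stand-in for the stream class, not httpx's actual code — showing that with the lock held, one task's read always completes before another's begins:

```python
import asyncio

class GuardedStream:
    """Toy stand-in for a shared HTTP/2 socket stream (hypothetical class)."""
    def __init__(self):
        self.read_lock = asyncio.Lock()  # the field the upstream diff adds
        self.log = []

    async def read_frame(self, reader_id):
        # The lock ensures one task's read finishes before another starts,
        # even though the await below is a point where tasks can switch.
        async with self.read_lock:
            self.log.append(f"{reader_id}:start")
            await asyncio.sleep(0)  # stands in for a real socket read
            self.log.append(f"{reader_id}:end")

async def demo():
    stream = GuardedStream()
    await asyncio.gather(*(stream.read_frame(i) for i in range(5)))
    return stream.log

log = asyncio.run(demo())
# Every "N:start" is immediately followed by its matching "N:end":
# the five concurrent readers never interleave.
```

Remove the `async with self.read_lock:` line and the start/end pairs interleave, which is the shape of the original bug.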
Why This Fix Works in Production
- Trigger: driving many requests (the repro issues 1000) over a single AsyncClient via asyncio.gather.
- Mechanism: a race condition when multiple tasks read the same HTTP/2 stream concurrently raised a RuntimeError.
- Why the fix works: the fix serializes those reads with an asyncio.Lock, eliminating the race (first fixed release: 0.13.0).
- If left unfixed, failures are intermittent under concurrency (hard to reproduce; they show up as sporadic 5xx errors and timeouts).
Why This Breaks in Prod
- Shows up under Python 3.7 in real deployments, not just unit tests.
- A race condition when reading HTTP/2 streams concurrently raises a RuntimeError (plus KeyError noise during shutdown).
- Surfaces as the intermittent tracebacks captured under Error Message below.
Proof / Evidence
- GitHub issue: #527
- Fix PR: https://github.com/encode/httpx/pull/535
- First fixed release: 0.13.0
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.75
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.22
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“@iluxonchik posted another example in #382, which I'm reposting here in a runnable form: This program runs fine on my side, on 3.7.3 and 3.8.0.…”
“@PrimozGodec I was able to run your code and reproduce the error you listed (on 3.8.0)”
“@iluxonchik Thanks for the detailed debugging material”
“The read() error seems to be solved. I already go more responses when I run code from my example above. With the same script (code…”
Failure Signature (Search String)
- asyncio.get_event_loop().run_until_complete(cl.embedd_batch())
Error Message (stack trace 1)
-----------------------------
Traceback (most recent call last):
File "/Users/primoz/PycharmProjects/orange3-imageanalytics/example.py", line 63, in <module>
asyncio.get_event_loop().run_until_complete(cl.embedd_batch())
File "/Users/primoz/miniconda3/lib/python3.7/asyncio/base_events.py", line 584, in run_until_complete
return future.result()
File "/Users/primoz/PycharmProjects/orange3-imageanalytics/example.py", line 18, in embedd_batch
embeddings = await asyncio.gather(*requests)
File "/Users/primoz/PycharmProjects/orange3-imageanalytics/example.py", line 41, in _send_to_server
emb = await self._send_request(client, im, url)
File "/Users/primoz/PycharmProjects/orange3-imageanalytics/example.py", line 56, in _send_request
data=image
File "/Users/primoz/venv/orange/lib/python3.7/site-packages/httpx/client.py", line 484, in post
trust_env=trust_env,
File "/Users/primoz/venv/orange/lib/python3.7/site-packages/httpx/client.py", line 626, in request
trust_env=trust_env,
File "/Users/primoz/venv/orange/lib/python3.7/site-packages/httpx/client.py", line 650, in send
trust_env=trust_env,
File "/Users/primoz/venv/orange/lib/python3.7/site-packages/httpx/client.py", line 265, in _get_response
return await get_response(request)
File "/Users/primoz/venv/orange/lib/python3.7/site-packages/httpx/middleware/redirect.py", line 31, in __call__
response = await ge
... (truncated) ...
Error Message (stack trace 2)
-----------------------------
unhandled exception during asyncio.run() shutdown
task: <Task finished name='Task-115' coro=<print_content_from_url() done, defined at demo.py:4> exception=KeyError(HTTPConnection(origin=Origin(scheme='https' host='www.google.com' port=443)))>
Traceback (most recent call last):
File "/Users/iluxonchik/.local/share/virtualenvs/tmp-agwWamBd/lib/python3.8/site-packages/httpx/dispatch/connection_pool.py", line 120, in send
response = await connection.send(
File "/Users/iluxonchik/.local/share/virtualenvs/tmp-agwWamBd/lib/python3.8/site-packages/httpx/dispatch/connection.py", line 62, in send
response = await self.h2_connection.send(request, timeout=timeout)
File "/Users/iluxonchik/.local/share/virtualenvs/tmp-agwWamBd/lib/python3.8/site-packages/httpx/dispatch/http2.py", line 57, in send
status_code, headers = await self.receive_response(stream_id, timeout)
File "/Users/iluxonchik/.local/share/virtualenvs/tmp-agwWamBd/lib/python3.8/site-packages/httpx/dispatch/http2.py", line 176, in receive_response
event = await self.receive_event(stream_id, timeout)
File "/Users/iluxonchik/.local/share/virtualenvs/tmp-agwWamBd/lib/python3.8/site-packages/httpx/dispatch/http2.py", line 211, in receive_event
data = await self.stream.read(self.READ_NUM_BYTES, timeout, flag=flag)
File "/Users/iluxonchik/.local/share/virtualenvs/tmp-agwWamBd/lib/python3.8/site-packages
... (truncated) ...
Minimal Reproduction
import asyncio
import pickle

import httpx


class SendRequests:
    num_parallel_requests = 0
    MAX_PARALLEL = 100

    async def embedd_batch(self):
        requests = []
        async with httpx.AsyncClient(
                timeout=httpx.TimeoutConfig(timeout=60),  # pre-0.13 API
                base_url="https://api.garaza.io/") as client:
            for i in range(1000):
                requests.append(self._send_to_server(client))
            embeddings = await asyncio.gather(*requests)
        return embeddings

    async def __wait_until_released(self):
        while self.num_parallel_requests >= self.MAX_PARALLEL:
            await asyncio.sleep(0.1)

    async def _send_to_server(self, client):
        await self.__wait_until_released()
        self.num_parallel_requests += 1
        # simplified image loading
        with open("image.pkl", "rb") as f:
            im = pickle.load(f)
        url = "/image/inception-v3?machine=1&session=1&retry=0"
        emb = await self._send_request(client, im, url)
        self.num_parallel_requests -= 1
        return emb

    async def _send_request(self, client, image, url):
        headers = {'Content-Type': 'image/jpeg',
                   'Content-Length': str(len(image))}
        response = await client.post(url, headers=headers, data=image)
        print(response.content)
        return response


if __name__ == "__main__":
    cl = SendRequests()
    asyncio.get_event_loop().run_until_complete(cl.embedd_batch())
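As an aside, the repro throttles concurrency with a counter and a polling sleep loop; asyncio.Semaphore expresses the same cap without polling. A sketch with the network call stubbed out so it runs offline (the stub and the peak-tracking dict are illustration-only assumptions, not part of the original repro):

```python
import asyncio

MAX_PARALLEL = 100

async def fake_send(i, sem, counter):
    async with sem:                      # at most MAX_PARALLEL tasks enter here
        counter["now"] += 1
        counter["peak"] = max(counter["peak"], counter["now"])
        await asyncio.sleep(0)           # stands in for client.post(...)
        counter["now"] -= 1
        return i

async def main():
    sem = asyncio.Semaphore(MAX_PARALLEL)
    counter = {"now": 0, "peak": 0}
    results = await asyncio.gather(
        *(fake_send(i, sem, counter) for i in range(1000)))
    return results, counter["peak"]

results, peak = asyncio.run(main())
# peak never exceeds MAX_PARALLEL, with no busy-wait loop needed
```

This does not fix the HTTP/2 race itself, but it is the idiomatic shape for the throttling the repro does by hand.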
Environment
- Python: 3.7
What Broke
RuntimeError occurs when sending multiple images to the server concurrently.
Why It Broke
A race condition caused a RuntimeError when reading streams concurrently in HTTP/2
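To see why a shared HTTP/2 connection is sensitive to this, consider an unguarded read path. A toy model (not httpx code): two tasks read from the same object, and the await inside the read lets them interleave.

```python
import asyncio

class UnguardedStream:
    """Toy model of a shared connection with NO read lock (illustrative only)."""
    def __init__(self):
        self.log = []

    async def read_frame(self, reader_id):
        self.log.append(f"{reader_id}:start")
        await asyncio.sleep(0)  # an await point where another task can cut in
        self.log.append(f"{reader_id}:end")

async def demo():
    stream = UnguardedStream()
    await asyncio.gather(stream.read_frame("A"), stream.read_frame("B"))
    return stream.log

log = asyncio.run(demo())
# Reader B starts before reader A finishes: the reads interleave. In httpx,
# this kind of interleaving corrupted per-stream state on the shared
# connection, surfacing as the RuntimeError/KeyError in the traces above.
assert log.index("B:start") < log.index("A:end")
```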
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
Upgrade to version 0.13.0 or later.
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Fix reference: https://github.com/encode/httpx/pull/535
First fixed release: 0.13.0
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- The bug only affects concurrent reads over HTTP/2, so if your application makes no concurrent requests or does not use HTTP/2, upgrading for this issue is optional rather than required.
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Prevention
- Add a CI check that diffs key outputs after upgrades (OpenAPI schema snapshots, JSON payload shapes, CLI output).
- Upgrade behind a canary and run integration tests against the canary before 100% rollout.
- Add a stress test that runs high-concurrency workloads and fails on any raised exception or task that never completes.
- Enable watchdog dumps in prod (faulthandler, a thread/task dump endpoint) to capture deadlocks quickly.
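A minimal shape for such a stress test (the request coroutine is a placeholder; in CI you would replace it with a real call, e.g. an httpx request against a canary or local test server):

```python
import asyncio

async def one_request(i):
    # Placeholder for a real call, e.g. `await client.get(url)` with httpx.
    await asyncio.sleep(0)
    return 200

async def stress(n=500):
    # return_exceptions=True so one failure does not cancel or mask the rest.
    results = await asyncio.gather(*(one_request(i) for i in range(n)),
                                   return_exceptions=True)
    errors = [r for r in results if isinstance(r, Exception)]
    assert not errors, f"{len(errors)}/{n} concurrent requests failed: {errors[:3]}"
    return results

results = asyncio.run(stress())
```

Race conditions like this one are probabilistic, so run the test with a high task count and repeat it several times rather than trusting a single pass.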
Version Compatibility Table
| Version | Status |
|---|---|
| < 0.13.0 | Affected (HTTP/2 concurrent-read race) |
| >= 0.13.0 | Fixed |
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.