
The Fix

Upgrade to version 0.12.3 or later.

Based on closed Kludex/uvicorn issue #748; the fix PR/commit is linked below.

Production note: This tends to surface only under concurrency. Reproduce with load tests and watch for lock contention/cancellation paths.
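A minimal sketch of that kind of load probe (the coroutine under test, the counts, and the timeout are illustrative stand-ins, not part of the upstream fix): fire n concurrent calls and treat any call that exceeds a timeout as hung, which is how this race typically shows up.

```python
import asyncio

async def hammer(coro_factory, n: int, timeout: float) -> tuple[int, int]:
    """Run n concurrent calls of coro_factory; return (completed, hung).

    A call that exceeds `timeout` is counted as hung, which is how this
    race usually surfaces under concurrency.
    """
    async def one() -> bool:
        try:
            await asyncio.wait_for(coro_factory(), timeout)
            return True
        except asyncio.TimeoutError:
            return False

    results = await asyncio.gather(*(one() for _ in range(n)))
    completed = sum(results)
    return completed, n - completed

# Stand-in coroutines: a fast call always completes,
# a stalled call is reported as hung.
print(asyncio.run(hammer(lambda: asyncio.sleep(0), 5, 0.5)))    # → (5, 0)
print(asyncio.run(hammer(lambda: asyncio.sleep(10), 3, 0.05)))  # → (0, 3)
```

In a real load test, `coro_factory` would issue an HTTP request against the running service.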

@@ -113,7 +113,6 @@ def __init__(self, config, server_state, _loop=None):
         self.headers = None
         self.cycle = None
-        self.message_event = asyncio.Event()

     # Protocol interface

Why This Fix Works in Production

  • Trigger: HttpToolsProtocol: receive() hangs due to signalling mixup between request cycles
  • Mechanism: A race condition in uvicorn's protocol handling leads to hanging requests
  • Why the fix works: the linked commit removes the connection-level message_event shared across request cycles (see the diff above), so signalling is scoped to each RequestResponseCycle and a new request on the same connection can no longer swallow the previous cycle's wakeup (first fixed release: 0.12.3).
Production impact:
  • If left unfixed, failures can be intermittent under concurrency (hard to reproduce; shows up as sporadic 5xx/timeouts).
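The mechanism can be sketched in miniature (toy code, not uvicorn's actual implementation): an asyncio.Event shared across request cycles can lose a wakeup when one cycle clears a signal meant for another, while a per-cycle event cannot.

```python
import asyncio

async def wait_with(ev: asyncio.Event, timeout: float) -> str:
    try:
        await asyncio.wait_for(ev.wait(), timeout)
        return "woke"
    except asyncio.TimeoutError:
        return "hung"

async def demo():
    # Shared event: data arrives for cycle 1, but cycle 2 (a new request
    # on the same connection) resets the event before cycle 1 waits.
    shared = asyncio.Event()
    shared.set()
    shared.clear()
    lost = await wait_with(shared, 0.05)

    # Per-cycle event: only this cycle touches its own event,
    # so the wakeup cannot be swallowed.
    per_cycle = asyncio.Event()
    per_cycle.set()
    kept = await wait_with(per_cycle, 0.05)
    return lost, kept

print(asyncio.run(demo()))  # → ('hung', 'woke')
```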

Why This Breaks in Prod

  • A race condition in uvicorn's protocol handling leads to hanging requests
  • Production symptom (often without a traceback): HttpToolsProtocol: receive() hangs due to signalling mixup between request cycles

Proof / Evidence

  • GitHub issue: #748
  • Fix PR: https://github.com/kludex/uvicorn/pull/848
  • First fixed release: 0.12.3
  • Reproduced locally: No (not executed)
  • Last verified: 2026-02-09
  • Confidence: 0.85
  • Did this fix it?: Yes (upstream fix exists)
  • Own content ratio: 0.50

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“@euri10, I'm sorry if my description was a bit unclear, but this is a bug in uvicorn that should be fixed regardless of Starlette”
@itayperl · 2020-10-12 · source
“hi @itayperl afaik FastAPI currently pins Starlette on 0.13.6, 1st thing to check would be if that what you described still happens respecting the pin.”
@euri10 · 2020-10-12 · source
“could you either confirm you experience this on 0.13.8 too (because can't remember exactly but I think 0.13.8 reverted something in middleware that makes me…”
@euri10 · 2020-10-12 · source
“I was able to reproduce the bug on both 0.13.6 and 0.13.8 by adding the 100ms sleep in starlette as described above”
@itayperl · 2020-10-12 · source

Failure Signature (Search String)

  • HttpToolsProtocol: receive() hangs due to signalling mixup between request cycles
  • The race condition is causing starlette to keep receive()ing on a RequestResponseCycle for a bit after the response is fully sent, while the client sends a new request on the same connection
Copy-friendly signature
signature.txt
Failure Signature
-----------------
HttpToolsProtocol: receive() hangs due to signalling mixup between request cycles

The race condition is causing starlette to keep receive()ing on a RequestResponseCycle for a bit after the response is fully sent, while the client sends a new request on the same connection:

Error Message

Signature-only (no traceback captured)
error.txt
Error Message
-------------
HttpToolsProtocol: receive() hangs due to signalling mixup between request cycles

The race condition is causing starlette to keep receive()ing on a RequestResponseCycle for a bit after the response is fully sent, while the client sends a new request on the same connection:

Minimal Reproduction

repro.py
import asyncio

async def wait_for_disconnect(receive):
    while True:
        p = await receive()
        if p['type'] == 'http.disconnect':
            print('Disconnected!')
            break

async def app(scope, receive, send):
    await asyncio.sleep(0.2)
    m = await receive()
    if m['type'] == 'lifespan.startup':
        await send({'type': 'lifespan.startup.complete'})
    elif m['type'] == 'http.request':
        if scope['path'] == '/foo':
            asyncio.create_task(wait_for_disconnect(receive))
        await asyncio.sleep(0.2)
        await send({'type': 'http.response.start', 'status': 404})
        await send({'type': 'http.response.body', 'body': b'Not found!\n'})

What Broke

Requests hang indefinitely when multiple RequestResponseCycle objects are processed concurrently.

Why It Broke

A race condition in uvicorn's HTTP protocol handling (a message event shared across request cycles on the same connection) causes receive() to hang indefinitely.

Fix Options (Details)

Option A — Upgrade to fixed release (safe default, recommended)

Upgrade to version 0.12.3 or later.

When NOT to use: if the application never handles multiple concurrent requests, this race cannot trigger, so the upgrade is not urgent (though it remains safe).

Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
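One way to enforce Option A is a startup guard. `is_fixed` below is a hypothetical helper (plain tuple comparison, no pre-release or local-version handling), not part of uvicorn:

```python
from importlib.metadata import version

FIRST_FIXED = (0, 12, 3)

def is_fixed(ver: str) -> bool:
    """True if `ver` is at or past the first fixed uvicorn release."""
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return parts >= FIRST_FIXED

# At application startup, fail fast on an affected uvicorn:
# assert is_fixed(version("uvicorn")), "uvicorn < 0.12.3 is affected by issue #748"

print(is_fixed("0.12.2"), is_fixed("0.12.3"), is_fixed("0.13.0"))  # → False True True
```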

Option D — Guard side-effects with OnceOnly (guardrail for side-effects)

Mitigate duplicate external side-effects under retries/timeouts/agent loops by gating the operation before calling external systems.

  • Place OnceOnly between your code/agent and real side-effects (Stripe, emails, CRM, APIs).
  • Use a stable key per side-effect (e.g., customer_id + action + idempotency_key).
  • Fail-safe: configure fail-open vs fail-closed based on blast radius and spend risk.
onceonly.py
from onceonly import OnceOnly
import os

once = OnceOnly(api_key=os.environ["ONCEONLY_API_KEY"], fail_open=True)

def process_webhook(event_id: str):
    # Stable idempotency key per real side-effect.
    # Use a request id / job id / webhook delivery id / Stripe event id, etc.
    # e.g. event_id = "evt_..."
    key = f"stripe:webhook:{event_id}"
    res = once.check_lock(key=key, ttl=3600)
    if res.duplicate:
        return {"status": "already_processed"}
    # Safe to execute the side-effect exactly once.
    handle_event(event_id)

See OnceOnly SDK

When NOT to use: Do not use this to hide logic bugs or data corruption. Use it to block duplicate external side-effects and enforce tool permissions/spend caps.

Fix reference: https://github.com/kludex/uvicorn/pull/848

First fixed release: 0.12.3

Last verified: 2026-02-09. Validate in your environment.


When NOT to Use This Fix

  • If the application never handles multiple concurrent requests, this race cannot trigger, so the upgrade is not urgent (though it remains safe).
  • Do not use this to hide logic bugs or data corruption. Use it to block duplicate external side-effects and enforce tool permissions/spend caps.

Verify Fix

verify
Re-run the minimal reproduction on the broken uvicorn version and confirm that requests to /foo hang; then apply the fix (upgrade to 0.12.3 or later) and re-run to confirm every request completes.
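A client-side verification sketch (the probe, request bytes, and port are illustrative; point it at your locally running app, e.g. the repro served by uvicorn): it sends two back-to-back requests on one connection, the pattern that triggered the hang, and reports whether both responses arrive.

```python
import asyncio

REQUEST = b"GET /foo HTTP/1.1\r\nHost: localhost\r\n\r\n"

async def probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Send two pipelined requests on one connection; True iff both
    responses arrive before the timeout (i.e. no hang)."""
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(REQUEST + REQUEST)
    await writer.drain()
    data = b""
    try:
        while data.count(b"HTTP/1.1") < 2:
            chunk = await asyncio.wait_for(reader.read(4096), timeout)
            if not chunk:  # server closed the connection early
                break
            data += chunk
    except asyncio.TimeoutError:
        pass  # at least one response never arrived: the hang
    writer.close()
    return data.count(b"HTTP/1.1") >= 2

# Usage against a local server (hypothetical port):
# print(asyncio.run(probe("127.0.0.1", 8000)))
```

On an affected version the probe times out waiting for the second response; on 0.12.3+ it returns True.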


Prevention

  • Add a stress test that runs high-concurrency workloads and fails on thread dumps / blocked locks.
  • Enable watchdog dumps in prod (faulthandler, thread dump endpoint) to capture deadlocks quickly.
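For the watchdog bullet, Python's stdlib faulthandler can dump every thread's stack on a signal or after a stall (the signal choice and the 30 s interval are illustrative; signal registration is unavailable on Windows):

```python
import faulthandler
import signal
import sys

# Dump every thread's stack to stderr when the process receives SIGUSR1:
#   kill -USR1 <pid>
if hasattr(signal, "SIGUSR1"):
    faulthandler.register(signal.SIGUSR1, all_threads=True)

# Or dump automatically if the process appears wedged for 30 seconds;
# repeat=True keeps dumping every interval until cancelled.
faulthandler.dump_traceback_later(30, repeat=True, file=sys.stderr)

# Call this during clean shutdown (done here so the sketch exits quietly).
faulthandler.cancel_dump_traceback_later()
```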

Version Compatibility Table

Version   Status
0.12.3    Fixed

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.