The Fix
Upgrade to version 0.27.1 or later.
Based on closed encode/uvicorn issue #1637 (fix PR linked below).
Production note: this tends to surface only under concurrency with keep-alive connection reuse. Reproduce by pipelining requests on a single connection with a short timeout_keep_alive, and watch for timeout_keep_alive_handler tracebacks in the server logs.
Why This Fix Works in Production
- Symptom: Exception in callback H11Protocol.timeout_keep_alive_handler()
- Mechanism: the keep-alive timer fired while a pipelined request was still being processed, so uvicorn asked h11 to close the connection from the SEND_RESPONSE state, which h11 rejects with LocalProtocolError
- Why the fix works: uvicorn 0.27.1 keeps the keep-alive timer from firing while a request/response cycle is in flight, eliminating the spurious LocalProtocolError (first fixed release: 0.27.1)
- If left unfixed, failures are intermittent under concurrency (hard to reproduce; they show up as sporadic 5xx responses, timeouts, and dropped keep-alive connections)
Why This Breaks in Production
- Shows up in real deployments, not just unit tests (the traces below are from Python 3.7 and 3.12).
- The keep-alive timer was firing prematurely when processing pipelined requests
- Surfaces as: Exception in callback H11Protocol.timeout_keep_alive_handler()
Proof / Evidence
- GitHub issue: #1637
- Fix PR: https://github.com/encode/uvicorn/pull/2243
- First fixed release: 0.27.1
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.85
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.20
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“I've reopened. I'll check this in some days. Thanks.”
“- Closed by https://github.com/encode/uvicorn/pull/2243”
“It seems unregister keepalive before RequestResponseCycle not fix the root cause”
Failure Signature (Search String)
- Exception in callback H11Protocol.timeout_keep_alive_handler()
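To alert on this signature in aggregated logs, a minimal scan helper can be used (the log path in the usage comment is a placeholder, not from the original report):

```python
SIGNATURE = "Exception in callback H11Protocol.timeout_keep_alive_handler"

def count_signature_hits(lines) -> int:
    """Count log lines containing the known failure signature."""
    return sum(1 for line in lines if SIGNATURE in line)

# Example usage (hypothetical log path):
# with open("/var/log/app/uvicorn.log") as fh:
#     print(count_signature_hits(fh))
```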
Error Message (Python 3.7)
--------------------------
Exception in callback H11Protocol.timeout_keep_alive_handler()
handle: <TimerHandle when=3544453.523044797 H11Protocol.timeout_keep_alive_handler()>
Traceback (most recent call last):
File "/home/r.yang/miniconda3/envs/py37/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/home/r.yang/miniconda3/envs/py37/lib/python3.7/site-packages/uvicorn/protocols/http/h11_impl.py", line 360, in timeout_keep_alive_handler
self.conn.send(event)
File "/home/r.yang/miniconda3/envs/py37/lib/python3.7/site-packages/h11/_connection.py", line 510, in send
data_list = self.send_with_data_passthrough(event)
File "/home/r.yang/miniconda3/envs/py37/lib/python3.7/site-packages/h11/_connection.py", line 535, in send_with_data_passthrough
self._process_event(self.our_role, event)
File "/home/r.yang/miniconda3/envs/py37/lib/python3.7/site-packages/h11/_connection.py", line 272, in _process_event
self._cstate.process_event(role, type(event), server_switch_event)
File "/home/r.yang/miniconda3/envs/py37/lib/python3.7/site-packages/h11/_state.py", line 289, in process_event
self._fire_event_triggered_transitions(role, _event_type)
File "/home/r.yang/miniconda3/envs/py37/lib/python3.7/site-packages/h11/_state.py", line 311, in _fire_event_triggered_transitions
) from None
h11._util.LocalProtocolError: can't handle e
... (truncated) ...
Error Message (Python 3.12)
---------------------------
Exception in callback H11Protocol.timeout_keep_alive_handler()
handle: <TimerHandle when=335245.867740771 H11Protocol.timeout_keep_alive_handler()>
Traceback (most recent call last):
File "/usr/lib/python3.12/asyncio/events.py", line 84, in _run
self._context.run(self._callback, *self._args)
File "venv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 363, in timeout_keep_alive_handler
self.conn.send(event)
File "venv/lib/python3.12/site-packages/h11/_connection.py", line 512, in send
data_list = self.send_with_data_passthrough(event)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "venv/lib/python3.12/site-packages/h11/_connection.py", line 537, in send_with_data_passthrough
self._process_event(self.our_role, event)
File "venv/lib/python3.12/site-packages/h11/_connection.py", line 272, in _process_event
self._cstate.process_event(role, type(event), server_switch_event)
File "venv/lib/python3.12/site-packages/h11/_state.py", line 293, in process_event
self._fire_event_triggered_transitions(role, _event_type)
File "venv/lib/python3.12/site-packages/h11/_state.py", line 311, in _fire_event_triggered_transitions
raise LocalProtocolError(
h11._util.LocalProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
Error Message (ASGI application, Python 3.12)
---------------------------------------------
INFO: 127.0.0.1:47416 - "GET / HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "venv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "venv/lib/python3.12/site-packages/fastapi/applications.py", line 1106, in __call__
await super().__call__(scope, receive, send)
File "venv/lib/python3.12/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "venv/lib/python3.12/site-packages/fastapi/middleware/a
... (truncated) ...
Minimal Reproduction
```python
import asyncio

import uvicorn
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    await asyncio.sleep(3)
    return {"msg": "Hello World"}


async def main():
    # Start uvicorn in a background task.
    config = uvicorn.Config(app, port=8000, timeout_keep_alive=1)
    server = uvicorn.Server(config)
    uvicorn_task = asyncio.create_task(server.serve())

    # After it starts, send two pipelined HTTP requests on one connection.
    await asyncio.sleep(1)
    print("Sending requests")
    reader, writer = await asyncio.open_connection("localhost", 8000)
    writer.write(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n")
    writer.write(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n")
    await writer.drain()
    while data := await reader.read(100):
        print(data.decode("utf-8"))
    print("Server closed the connection")

    server.should_exit = True
    await uvicorn_task


if __name__ == "__main__":
    asyncio.run(main())
```
Environment
- Python: 3.7 (original report; the traces above show it also reproduces on 3.12)
What Broke
Clients saw spurious LocalProtocolError failures when pipelined requests arrived on keep-alive connections, typically under high request rates.
Why It Broke
The keep-alive timer was firing prematurely when processing pipelined requests
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
Upgrade to version 0.27.1 or later.
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Fix reference: https://github.com/encode/uvicorn/pull/2243
First fixed release: 0.27.1
Last verified: 2026-02-09. Validate in your environment.
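A startup gate can confirm the deployed uvicorn includes the fix. This is a sketch: the version helper handles plain `X.Y.Z` strings only, not full PEP 440 versions.

```python
from importlib.metadata import PackageNotFoundError, version

FIXED = "0.27.1"

def parse(v: str) -> tuple:
    """Parse a plain 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def has_keepalive_fix(installed: str, fixed: str = FIXED) -> bool:
    """True if the installed version includes the keep-alive timer fix."""
    return parse(installed) >= parse(fixed)

def check_uvicorn() -> None:
    try:
        installed = version("uvicorn")
    except PackageNotFoundError:
        print("uvicorn is not installed")
        return
    if has_keepalive_fix(installed):
        print(f"uvicorn {installed}: keep-alive fix present")
    else:
        print(f"uvicorn {installed}: upgrade to >= {FIXED}")

if __name__ == "__main__":
    check_uvicorn()
```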
When NOT to Use This Fix
- Rarely. Upgrading is low-risk; defer only if other constraints pin you to an older uvicorn, and treat the deferral as temporary.
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Prevention
- Add a stress test that sends pipelined and keep-alive requests under load and fails on any LocalProtocolError in the server logs.
- Alert on the failure signature (timeout_keep_alive_handler tracebacks) in production logs to catch regressions quickly.
- Make timeouts explicit (timeout_keep_alive in particular) and cover them with unit and integration tests to avoid silent behavior changes.
- Instrument client retries (attempt count + reason) and alert on spikes to catch server-side connection drops.
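As a building block for such a stress test, a response counter over the raw keep-alive byte stream can assert that no pipelined response went missing (a sketch; a real test should also check status codes and bodies):

```python
def count_http_responses(raw: bytes) -> int:
    """Count HTTP/1.1 status lines in a raw keep-alive response stream.

    Deliberately naive: assumes response bodies never contain a line
    beginning with 'HTTP/1.1 ', which holds for the JSON bodies in the
    minimal reproduction above.
    """
    return sum(1 for line in raw.split(b"\r\n") if line.startswith(b"HTTP/1.1 "))

def assert_pipelined_ok(raw: bytes, expected: int) -> None:
    """Fail the stress test if any pipelined response went missing."""
    got = count_http_responses(raw)
    assert got == expected, f"expected {expected} responses, got {got}"
```

In the reproduction above, the bytes read from the socket before the connection closes can be fed to `assert_pipelined_ok(data, 2)`.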
Version Compatibility Table
| Version | Status |
|---|---|
| < 0.27.1 | Affected |
| 0.27.1 and later | Fixed |
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.