The Fix
Upgrade to version 0.21.0 or later.
Based on the closed Kludex/uvicorn issue #371; the fix PR and first fixed release are linked below.
The fix PR also deprecates Uvicorn's native WSGI implementation in the documentation:

!!! warning
    Uvicorn's native WSGI implementation is deprecated; you should switch
    to [a2wsgi](https://github.com/abersheeran/a2wsgi) (`pip install a2wsgi`).
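As a sketch of the migration, here is a hypothetical stand-in WSGI app that reads its upload in fixed-size chunks, with the a2wsgi wiring shown in comments (the app name and chunk size are illustrative, not from the original report):

```python
# Hypothetical WSGI app: reads the upload in 64 KiB chunks instead of
# calling read() once on the whole body. Simplified: a production app
# should also honor CONTENT_LENGTH from the environ.
def wsgi_app(environ, start_response):
    received = 0
    stream = environ["wsgi.input"]
    while True:
        chunk = stream.read(64 * 1024)
        if not chunk:
            break
        received += len(chunk)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"received {received} bytes".encode()]

# To serve it with Uvicorn via the recommended bridge (assumed setup):
#   pip install a2wsgi
#   from a2wsgi import WSGIMiddleware
#   app = WSGIMiddleware(wsgi_app)
#   uvicorn your_module:app
```

The key property is that the app never holds more than one chunk of the body at a time; the bridge in front of it must preserve that for the memory fix to matter.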
Why This Fix Works in Production
- Trigger: large file uploads to a WSGI app served through Uvicorn's WSGI middleware (the original report: a Django app on Kubernetes whose containers were being OOM-killed).
- Mechanism: Uvicorn's WSGI middleware loads the entire request body into memory instead of streaming it.
- Why the fix works: release 0.21.0 replaces Uvicorn's built-in WSGIMiddleware with a2wsgi.WSGIMiddleware, which avoids buffering the whole request body in memory, so memory stays bounded during large uploads (first fixed release: 0.21.0).
- If left unfixed, memory usage climbs with each large upload, the worker slows to a crawl, and the process is eventually OOM-killed, dropping in-flight requests.
Why This Breaks in Prod
- Uvicorn's WSGI middleware loads the entire request body into memory instead of streaming it
- Production symptom (often without a traceback): memory climbs during large file uploads and the server slows to a crawl until the container is OOM-killed. From the original report: "when users upload large files, uvicorn increases memory usage, and slows to a crawl. Eventually causing an OOM."
Proof / Evidence
- GitHub issue: #371
- Fix PR: https://github.com/kludex/uvicorn/pull/1825
- First fixed release: 0.21.0
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.75
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.55
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“So both channels, and uvicorn's WSGI middleware consume the entire request body into memory rather than streaming it (due to some complexities of bridging across…”
“Maybe using tempfile.SpooledTemporaryFile instead is a simple and easy good idea.”
“@peterlandry @tomchristie What do you think of my newly submitted pr? Use bytearray to solve it.”
“Simply replacing our previous naive byte concatenation with https://github.com/encode/uvicorn/pull/1329 makes a massive difference here”
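The SpooledTemporaryFile idea floated in the thread was not the merged fix, but it illustrates the bounded-memory buffering pattern; a minimal standard-library sketch (the function name and threshold are illustrative):

```python
import tempfile

def buffer_body(chunks, max_in_memory=1024 * 1024):
    """Accumulate a request body, spilling to disk past max_in_memory bytes."""
    spool = tempfile.SpooledTemporaryFile(max_size=max_in_memory)
    for chunk in chunks:
        spool.write(chunk)
    spool.seek(0)  # rewind so the consumer can read from the start
    return spool
```

Below the threshold everything stays in RAM; past it, the spool transparently rolls over to a real temporary file, capping resident memory regardless of upload size.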
Failure Signature (Search String)
- Hi! I'm deploying a Django app with uvicorn, running on k8s. Our containers were being killed, and I've found that when users upload large files, uvicorn increases memory usage,
- Upload timings for asgi: [1.972560167312622, 2.0667288303375244, 2.0667288303375244, 2.0784835815429688, 2.5489165782928467, 2.6735644340515137, 2.724545955657959,
Copy-friendly signature
Failure Signature
-----------------
Hi! I'm deploying a Django app with uvicorn, running on k8s. Our containers were being killed, and I've found that when users upload large files, uvicorn increases memory usage, and slows to a crawl. Eventually causing an OOM.
Upload timings for asgi: [1.972560167312622, 2.0667288303375244, 2.0667288303375244, 2.0784835815429688, 2.5489165782928467, 2.6735644340515137, 2.724545955657959, 2.724545955657959, 2.735142707824707, 2.735142707824707].
Error Message
Signature-only (no traceback captured)
Minimal Reproduction
Only the timing output from the thread was captured, not the reproduction script itself. Upload timings for wsgi/a2wsgi (seconds): [
1.3421823978424072, 1.4076666831970215, 1.4279963970184326, 1.448310375213623,
2.0521488189697266, 2.1042470932006836, 2.168593168258667, 2.179323434829712,
2.662322998046875, 2.7450664043426514].
Highest latency: 0.025380373001098633.
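Since the original script was not captured, the mechanism can still be demonstrated offline. This sketch (not the reporter's script; sizes are illustrative) compares peak traced allocations when a simulated upload is buffered whole versus consumed chunk by chunk:

```python
import tracemalloc

def consume_buffered(chunks):
    # What the old middleware effectively did: hold the whole body at once.
    body = bytearray()
    for chunk in chunks:
        body.extend(chunk)
    return len(body)

def consume_streaming(chunks):
    # Bounded-memory alternative: process each chunk and discard it.
    total = 0
    for chunk in chunks:
        total += len(chunk)
    return total

def peak_bytes(consumer, total_mb=4, chunk_kb=64):
    chunk = b"x" * (chunk_kb * 1024)  # allocated before tracing starts
    n = (total_mb * 1024 * 1024) // len(chunk)
    tracemalloc.start()
    consumer(chunk for _ in range(n))
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak
```

The buffered peak scales with the upload (megabytes here, gigabytes in the original report), while the streaming peak stays near the chunk size.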
What Broke
Large file uploads cause OOM errors and slow performance in production environments.
Why It Broke
Uvicorn's WSGI middleware loads the entire request body into memory instead of streaming it
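A contributing factor discussed in the thread was naive byte concatenation, later replaced with a bytearray. A sketch of the difference (an assumed simplification, not the middleware's actual code): immutable `bytes` concatenation copies everything accumulated so far on each append, so total work grows quadratically, while `bytearray.extend` appends in amortized constant time.

```python
def accumulate_naive(chunks):
    # bytes is immutable: each concatenation builds a brand-new object
    # by copying everything accumulated so far -> O(n^2) total work.
    body = b""
    for chunk in chunks:
        body = body + chunk
    return body

def accumulate_bytearray(chunks):
    # bytearray is mutable and over-allocates, so extend() is
    # amortized O(1) per chunk -> O(n) total work.
    body = bytearray()
    for chunk in chunks:
        body.extend(chunk)
    return bytes(body)
```

Both produce the same bytes; only the cost profile differs, which is why the swap alone made "a massive difference" per the quote above.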
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
Upgrade to version 0.21.0 or later.
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Fix reference: https://github.com/kludex/uvicorn/pull/1825
First fixed release: 0.21.0
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- Not suitable if your application depends on the original WSGI middleware's behavior of buffering the entire request body before handing it to the app.
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
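Alongside re-running the reproduction, a hypothetical helper can gate deployments on the first fixed release (simplified: assumes plain X.Y.Z version strings with no pre-release tags):

```python
def has_wsgi_upload_fix(version: str) -> bool:
    """True if this Uvicorn version includes the a2wsgi-based fix (>= 0.21.0)."""
    parts = [int(part) for part in version.split(".")[:3]]
    while len(parts) < 3:
        parts.append(0)  # pad short versions like "0.21" to (0, 21, 0)
    return tuple(parts) >= (0, 21, 0)

# Example wiring (assumed):
#   import uvicorn
#   assert has_wsgi_upload_fix(uvicorn.__version__), "upgrade uvicorn"
```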
Prevention
- Track RSS + object counts after deployments; alert on monotonic growth and GC pressure.
- Add a long-running test that repeats the failing call path and asserts stable memory.
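The second prevention item can be automated. A minimal sketch (names and thresholds are illustrative) that repeats a call path under tracemalloc and fails if traced allocations keep accumulating:

```python
import tracemalloc

def assert_stable_memory(call, warmup=3, iterations=30, max_growth=2.0):
    """Fail if repeating `call` keeps accumulating traced allocations."""
    for _ in range(warmup):      # let caches and pools settle first
        call()
    tracemalloc.start()
    call()
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        call()
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    floor = max(baseline, 1024)  # tolerate near-zero baselines
    assert current <= floor * max_growth, (
        f"memory grew from {baseline} to {current} bytes over {iterations} calls"
    )
```

Pointing `call` at the failing path (here, one simulated upload) turns the OOM into a fast, deterministic test failure instead of a production incident.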
Version Compatibility Table
| Version | Status |
|---|---|
| < 0.21.0 | Affected (native WSGI middleware buffers entire body) |
| >= 0.21.0 | Fixed (uses a2wsgi.WSGIMiddleware) |
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.