The Fix
pip install celery==5.0.3
Based on closed celery/celery issue #6414; the fix PR and commit are linked below.
Production note: This usually shows up under retries/timeouts. Treat it as a side-effect risk until you can verify behavior with a canary + real traffic.
Upstream patch excerpt (thread-local storage added to the app class):
@@ -206,6 +206,8 @@
     registry_cls = 'celery.app.registry:TaskRegistry'
+    #: Thread local storage.
+    _local = None
     _fixups = None
Option A — Upgrade to fixed release
pip install celery==5.0.3
When NOT to use: this fix is unnecessary if the application does not use multithreading.
Why This Fix Works in Production
- Trigger: multiple threads concurrently using the same Celery app/backend (e.g. thread-pool workers, or application threads fetching results).
- Mechanism: Race conditions occur due to shared resources between threads in Celery backends
- Why the fix works: Addresses thread safety issues in Celery backends by storing backend and unique identifiers in thread-local storage, preventing race conditions. (first fixed release: 5.0.3).
- If left unfixed, failures can be intermittent under concurrency (hard to reproduce; shows up as sporadic 5xx/timeouts).
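The mechanism can be illustrated with a minimal sketch of the thread-local pattern the fix introduces. The `BackendClient` class and `get_backend` helper below are hypothetical illustrations of the pattern, not Celery's actual internals:

```python
import threading

class BackendClient:
    """Hypothetical stand-in for a backend whose connection is not thread-safe."""
    def __init__(self):
        self.connection = object()  # placeholder for a real socket/channel

_local = threading.local()  # each thread sees its own attribute namespace

def get_backend():
    # Lazily create one client per thread instead of sharing a single
    # instance across all threads -- the essence of the thread-safety fix.
    if not hasattr(_local, "backend"):
        _local.backend = BackendClient()
    return _local.backend
```

Two threads calling `get_backend()` receive distinct instances, so neither can corrupt the other's in-flight protocol state.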
Why This Breaks in Prod
- Race conditions occur due to shared resources between threads in Celery backends
- Production symptom (often without a traceback): stalled result delivery, or backend-dependent decode errors such as "0x01 while expecting 0xce".
Proof / Evidence
- GitHub issue: #6414
- Fix PR: https://github.com/celery/celery/pull/6416
- First fixed release: 5.0.3
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.95
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.70
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“Due race conditions inside backends, Celery serving data is stalled or it returns "0x01 while expecting 0xce" errors, or others (depends on backend)”
Failure Signature (Search String)
- "0x01 while expecting 0xce" (backend-dependent)
- Due race conditions inside backends, Celery serving data is stalled or it returns "0x01 while expecting 0xce" errors, or others (depends on backend) - see https://github.com/celery/py-amqp/issues/330, #1779, #2066
Copy-friendly signature
Failure Signature
-----------------
Due race conditions inside backends, Celery serving data is stalled or it returns "0x01 while expecting 0xce" errors, or others (depends on backend) - see https://github.com/celery/py-amqp/issues/330, #1779, #2066
Error Message
-------------
Signature-only (no traceback captured):
Due race conditions inside backends, Celery serving data is stalled or it returns "0x01 while expecting 0xce" errors, or others (depends on backend) - see https://github.com/celery/py-amqp/issues/330, #1779, #2066
What Broke
Celery backend operations stall, or return unexpected errors (e.g. "0x01 while expecting 0xce"), due to thread-safety issues in shared backend state.
Why It Broke
Race conditions occur due to resources shared between threads in Celery backends: before the fix, backend state was not isolated per thread.
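A deterministic sketch of the hazard: two threads writing header/body frames through one shared connection object, with events forcing the bad interleaving. All names here are hypothetical and Celery's real framing differs, but the corruption pattern matches the "0x01 while expecting 0xce" symptom:

```python
import threading

class SharedConnection:
    """One unsynchronized buffer shared by every thread."""
    def __init__(self):
        self.wire = []

conn = SharedConnection()
wrote_header = threading.Event()
resume = threading.Event()

def slow_sender():
    conn.wire.append(0xCE)   # thread A writes its frame header...
    wrote_header.set()
    resume.wait()            # ...and is paused before writing the body
    conn.wire.append(b"result-A")

t = threading.Thread(target=slow_sender)
t.start()
wrote_header.wait()
conn.wire.append(0x01)       # thread B's frame lands in the gap
conn.wire.append(b"result-B")
resume.set()
t.join()

# A reader expecting (header, body) pairs now sees 0x01 where 0xCE's
# body should be: interleaved frames, i.e. corrupted protocol state.
print(conn.wire)  # -> [206, 1, b'result-B', b'result-A']
```

With per-thread connections (as in the fix), each thread's frames stay contiguous and the reader never observes a foreign header mid-frame.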
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
pip install celery==5.0.3
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Fix reference: https://github.com/celery/celery/pull/6416
First fixed release: 5.0.3
Last verified: 2026-02-09. Validate in your environment.
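One way to validate is a small runtime guard that checks the installed version against the first fixed release. The helper names below are illustrative, and the parser assumes a plain numeric `major.minor.patch` version string:

```python
from importlib.metadata import PackageNotFoundError, version

def parse3(v):
    """Parse the first three numeric components of a version string."""
    return tuple(int(p) for p in v.split(".")[:3])

def has_threadsafe_backend(pkg="celery", fixed=(5, 0, 3)):
    """True if the installed package is at or past the first fixed release."""
    try:
        return parse3(version(pkg)) >= fixed
    except (PackageNotFoundError, ValueError):
        return False  # missing, or non-numeric (pre-release) version: not verified
```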
When NOT to Use This Fix
- This fix is unnecessary if the application does not use multithreading; the race only occurs when threads share backend state.
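Note that multithreading is easy to have without noticing: Celery's own thread-based worker pool is one common source (standard Celery CLI; `proj` is a placeholder app name):

```shell
# Workers started with the thread pool execute tasks on multiple threads
# and are therefore exposed to this race on affected versions.
celery -A proj worker --pool=threads --concurrency=8
```

Gevent/eventlet pools and application threads that poll results are also concurrent callers for this purpose.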
Prevention
- Add a stress test that runs high-concurrency workloads and fails on thread dumps / blocked locks.
- Enable watchdog dumps in prod (faulthandler, thread dump endpoint) to capture deadlocks quickly.
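The second prevention point can be as small as arming the standard library's `faulthandler` watchdog around suspect workloads (the 30-second interval is illustrative):

```python
import faulthandler
import sys

# If the process is still running when the timer fires, dump every thread's
# stack to stderr; with repeat=True this recurs, catching stalls/deadlocks.
faulthandler.dump_traceback_later(30, repeat=True, file=sys.stderr)

try:
    pass  # ... run the high-concurrency workload here ...
finally:
    faulthandler.cancel_dump_traceback_later()  # disarm once work completes
```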
Version Compatibility Table
| Version | Status |
|---|---|
| < 5.0.3 | Affected (first fixed release is 5.0.3) |
| 5.0.3 and later | Fixed |
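In a requirements file the same constraint is usually expressed as a lower bound so later patch releases stay allowed (plain pip syntax; the exact pin policy is your call):

```shell
# Require the first release containing the thread-safety fix
pip install "celery>=5.0.3"
```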
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.