The Fix
pip install celery==5.6.0
Based on the closed celery/celery issue #4857; the fix PR/commit is linked below.
Production note: Watch p95/p99 latency and retry volume; timeouts can turn into retry storms and duplicate side effects.
Why This Fix Works in Production
- Trigger: Redis results backend: apply_async().get() hangs forever after disconnection from redis-server
- Mechanism: The greenlet drainer stops retrieving task results due to errors in the spawned greenlet
- Why the fix works: the upstream change (PR #9371, first shipped in 5.6.0) addresses the drainer failure described above.
- If left unfixed, callers block indefinitely; timeouts layered on top can trigger retries and duplicate side effects (see the production note above).
Why This Breaks in Prod
- The greenlet drainer stops retrieving task results due to errors in the spawned greenlet
- Surfaces as: Redis results backend: apply_async().get() hangs forever after disconnection from redis-server
Proof / Evidence
- GitHub issue: #4857
- Fix PR: https://github.com/celery/celery/pull/9371
- First fixed release: 5.6.0
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.70
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.68
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“lets continue the discussion there instead? or this can be open as a separate related issue?”
“@auvipy if by "there" you mean in https://github.com/celery/celery/issues/4556 - while the two look similar, they're not exactly the same issue.”
“@amitlicht did you try configuring a socket timeout ?”
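The socket-timeout suggestion in the last excerpt maps onto Celery's documented redis_* backend settings. A minimal sketch, assuming a Redis result backend; the app name, URLs, and values are illustrative, and whether this avoids the hang (versus merely surfacing it sooner) is not confirmed by the thread:

```python
from celery import Celery

# Sketch: socket-level timeouts for the Redis result backend, as the excerpt
# suggests. Setting names are Celery's documented redis_* options; the values
# and broker/backend URLs are illustrative assumptions.
app = Celery(
    "myapp",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)
app.conf.update(
    redis_socket_timeout=30,          # bound blocking reads/writes
    redis_socket_connect_timeout=5,   # bound connection attempts
    redis_retry_on_timeout=True,      # retry the operation on socket timeout
)
```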
Failure Signature (Search String)
- Redis results backend: apply_async().get() hangs forever after disconnection from redis-server
Error Message
Redis results backend: apply_async().get() hangs forever after disconnection from redis-server
No stack trace is produced; the call simply blocks.
Minimal Reproduction
- start redis server
- start a celery worker
- call task.apply_async() once (task executes and output is returned)
- restart redis server
- call task.apply_async() again (caller hangs forever; see the script sketch below)
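A script version of these steps, as a sketch: the module, task, and Redis URLs are placeholders, and a worker (`celery -A tasks worker`) must already be running against the same broker.

```python
# tasks.py -- sketch of the reproduction steps above; names are placeholders.
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
)

@app.task
def add(x, y):
    return x + y

if __name__ == "__main__":
    # Step 3: succeeds while the original Redis connection is healthy.
    print(add.apply_async((2, 2)).get(timeout=10))
    input("Restart redis-server now, then press Enter...")  # step 4
    # Step 5: on affected versions this never returns without a timeout;
    # timeout=10 converts the silent hang into a visible TimeoutError.
    print(add.apply_async((2, 2)).get(timeout=10))
```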
What Broke
Clients may wait indefinitely for task results that will never be fetched.
Why It Broke
The greenlet drainer stops retrieving task results due to errors in the spawned greenlet
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
pip install celery==5.6.0
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Fix reference: https://github.com/celery/celery/pull/9371
First fixed release: 5.6.0
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- This fix is not applicable if the underlying issue is unrelated to greenlet errors.
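If Option A cannot ship immediately (or the caveat above applies), a client-side timeout at least turns the silent hang into an explicit error. A sketch using AsyncResult.get(timeout=...), which is part of Celery's public API; the wrapper name and the re-raise policy are assumptions, not part of the upstream fix:

```python
from celery.exceptions import TimeoutError as CeleryTimeoutError

def get_result_or_raise(async_result, timeout=30):
    """Bound the wait on a task result (hypothetical helper).

    On affected Celery versions a stalled drainer makes .get() block
    forever; a timeout surfaces that as an exception the caller can
    handle deliberately instead of retrying blindly (retry storms).
    """
    try:
        return async_result.get(timeout=timeout)
    except CeleryTimeoutError:
        # Placeholder policy: re-raise and let the caller alert/decide.
        raise
```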
Verify Fix
Follow the reproduction steps, confirm the failure, apply the fix, and repeat the same steps to verify the behavior changes.
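That check can be scripted; this sketch assumes the tasks.py reproduction above and a running worker, and judges the fix by whether the second .get() returns within a bounded wait:

```python
# verify_fix.py -- sketch; run once before and once after
# `pip install celery==5.6.0` and compare the output.
from celery.exceptions import TimeoutError as CeleryTimeoutError
from tasks import add  # the reproduction module sketched above

def second_call_returns(timeout=15):
    add.apply_async((1, 1)).get(timeout=timeout)   # healthy connection: should pass
    input("Restart redis-server, then press Enter...")
    try:
        add.apply_async((1, 1)).get(timeout=timeout)
        return True        # result retrieved after reconnect: behavior changed
    except CeleryTimeoutError:
        return False       # drainer still stalled: behavior unchanged

print("fixed" if second_call_returns() else "still affected")
```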
Prevention
- Make timeouts explicit and test them (unit + integration) to avoid silent behavior changes.
- Instrument retries (attempt count + reason) and alert on spikes to catch dependency slowdowns; a sketch follows below.
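A sketch of the second point; the logger stands in for whatever metrics client you actually use, and the task body is a placeholder:

```python
import logging

from celery import Celery

log = logging.getLogger("task.retries")
app = Celery("myapp", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3, default_retry_delay=5)
def fetch(self, url):
    try:
        ...  # placeholder for real work that may raise
    except IOError as exc:
        # Emit attempt count + reason so a spike shows up on a dashboard
        # before it becomes a retry storm.
        log.warning(
            "retry task=%s attempt=%d reason=%r",
            self.name, self.request.retries + 1, exc,
        )
        raise self.retry(exc=exc)
```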
Version Compatibility Table
| Version | Status |
|---|---|
| < 5.6.0 | Not fixed (first fixed release is 5.6.0) |
| 5.6.0 | Fixed |
Related Issues
No related fixes found. The thread points to celery/celery#4556 as similar but distinct (see Discussion).
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.