The Fix
pip install celery==5.6.0
Based on closed celery/celery issue #9773; the fix PR and first fixed release are linked under Proof / Evidence.
Production note: This usually shows up under retries/timeouts. Treat it as a side-effect risk until you can verify behavior with a canary + real traffic.
@@ -2234,7 +2234,8 @@ def run(self, header, body, partial_args, app=None, interval=None,
body.options.update(options)
- bodyres = body.freeze(task_id, root_id=root_id)
+ body_task_id = task_id or uuid()
+ bodyres = body.freeze(body_task_id, group_id=group_id, root_id=root_id)
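The patch reduces to one guard: when the caller passes no explicit task_id, fall back to a freshly generated UUID so freeze() never receives an empty id. A minimal pure-Python sketch of the pattern (freeze and run here are simplified stand-ins, not Celery's real implementations):

```python
import uuid

def freeze(task_id):
    # Simplified stand-in for Signature.freeze(): reject empty ids,
    # mirroring Celery's "task_id must not be empty" check.
    if not task_id:
        raise ValueError("task_id must not be empty")
    return task_id

def run(task_id=None):
    # The fix pattern from the diff above: fall back to a fresh UUID
    # when the caller supplied no id (task_id or uuid()).
    body_task_id = task_id or str(uuid.uuid4())
    return freeze(body_task_id)
```

Before the fix, the None default flowed straight into freeze() and raised; with the fallback, an id is always present.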
Why This Fix Works in Production
- Trigger: a chord is invoked without an explicit task_id, for example when a group containing a failing task runs inside a chain.
- Mechanism: body.freeze() receives task_id=None and raises ValueError: task_id must not be empty.
- Why the fix works: chord.run now falls back to a freshly generated UUID when no task_id is supplied, so a chain used as the body of a chord can always be frozen (first fixed release: 5.6.0).
- If left unfixed, the same configuration can pass in one environment and fail only in production (env differences), causing startup failures or partial feature outages.
Why This Breaks in Prod
- Shows up under Python 3.11 in real deployments (not just unit tests).
- The task_id can be None when a chord is called without an explicit ID; freezing the chord body then fails.
- Surfaces as: ValueError: task_id must not be empty
Proof / Evidence
- GitHub issue: #9773
- Fix PR: https://github.com/celery/celery/pull/9774
- First fixed release: 5.6.0
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.85
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.35
Discussion
See celery/celery#9773 for the symptoms, reproduction steps, and edge cases discussed in the thread; the excerpt captured here contained only issue-template boilerplate and is omitted.
Failure Signature (Search String)
- ValueError: task_id must not be empty
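To check whether a deployment is already hitting this, search worker logs for the error string rather than the startup banner (a sketch; ./logs is an example path, adjust to wherever your workers write logs):

```shell
# Search worker logs for the failure signature; the path is an example.
grep -rn "task_id must not be empty" ./logs 2>/dev/null || true
```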
Error Message
-------------
-------------- [email protected] v5.5.3 (immunity)
--- ***** -----
-- ******* ---- macOS-15.5-arm64-arm-64bit 2025-06-19 23:39:11
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: celery_failing:0x104befdd0
- ** ---------- .> transport: redis://localhost:6379//
- ** ---------- .> results: redis://localhost:6379/
- *** --- * --- .> concurrency: 10 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. tasks.add
. tasks.failc
[2025-06-19 23:39:11,730: INFO/MainProcess] Connected to redis://localhost:6379//
[2025-06-19 23:39:11,750: INFO/MainProcess] mingle: searching for neighbors
[2025-06-19 23:39:12,789: INFO/MainProcess] mingle: all alone
[2025-06-19 23:39:12,824: INFO/MainProcess] [email protected] ready.
[2025-06-19 23:39:16,731: INFO/MainProcess] Task tasks.add[2069eb9f-b57e-48e4-98d8-595efd5f5922] received
[2025-06-19 23:39:16,735: INFO/MainProcess] Task tasks.failc[f9032a00-4ee6-4430-bf27-eb04f3d66cd7] received
[2025-06-19 23:39:16,748: ERROR/ForkPoolWorker-1] Task tasks.failc[f9032a00-4ee6-4430-bf27-eb04f3d66cd7] raised unexpected: Exception('failc')
Traceback (most recent call last):
File "/Users/diego/Development/my_celery_project/
... (truncated) ...
Minimal Reproduction
# bug_task.py
"""
Bug reproduction tasks for the pytest-celery bug report.
These tasks are used to reproduce the ValueError: task_id must not be empty
that occurs when a group with a failing task is part of a chain.
"""
import time
from celery import shared_task
@shared_task
def add(x, y):
"""Simple addition task with sleep to simulate work"""
print(f"Adding {x} + {y}")
time.sleep(1)
result = x + y
print(f"Result: {result}")
return result
@shared_task
def failc():
"""Task that always fails to trigger the bug"""
print("About to fail...")
raise ValueError("failc - intentional failure")
Environment
- Python: 3.11
What Broke
Error handling fails with 'task_id must not be empty' when using a chord with a failing group task.
Why It Broke
The task_id can be None when a chord is called without an explicit ID; body.freeze() then rejects the empty id with ValueError: task_id must not be empty.
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
pip install celery==5.6.0
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Fix reference: https://github.com/celery/celery/pull/9774
First fixed release: 5.6.0
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- This fix is not applicable if task IDs are managed externally and must remain None.
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Prevention
- Capture the exact failing error string in logs and tests so you can reproduce via a minimal script.
- Pin production dependencies and upgrade only with a reproducible test that hits the failing path.
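The pinning advice above can be enforced with a small CI check that the deployed Celery is at or past the first fixed release (has_chord_body_fix is a hypothetical helper; it assumes plain X.Y.Z version strings, so use packaging.version for anything fancier):

```python
def has_chord_body_fix(installed, fixed="5.6.0"):
    # Compare dotted versions numerically, not lexically, so that
    # e.g. "5.10.0" sorts after "5.6.0".
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(installed) >= parse(fixed)

# Example CI assertion (uncomment where celery is installed):
# import celery
# assert has_chord_body_fix(celery.__version__), "upgrade celery to >= 5.6.0"
```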
Version Compatibility Table
| Version | Status |
|---|---|
| 5.6.0 | Fixed |
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.