The Fix
pip install celery==5.5.0
Based on closed celery/celery issue #9125 (fix PR linked below).
Production note: This usually shows up under retries/timeouts. Treat it as a side-effect risk until you can verify behavior with a canary + real traffic.
Why This Fix Works in Production
- Trigger: a task configured with a `soft_time_limit` greater than its `time_limit`.
- Mechanism: the `soft_time_limit` was allowed to exceed the `time_limit`; because the hard limit then fires first, the soft limit never triggers and the task behaves unexpectedly.
- Why the fix works: checks were added to ensure that `soft_time_limit` does not exceed `time_limit` (first fixed release: 5.5.0).
- If left unfixed, affected tasks may not terminate as expected, leading to prolonged execution and resource exhaustion that often surfaces only in production.
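The kind of check the fix adds can be sketched as follows. This is a minimal illustration, not the upstream implementation; the function name and error message are hypothetical.

```python
def validate_time_limits(soft_time_limit, time_limit):
    """Reject configurations where the soft limit exceeds the hard one.

    The soft limit is meant to fire first (raising SoftTimeLimitExceeded
    inside the task so it can clean up); if it exceeds the hard limit,
    the hard kill always wins and cleanup never runs.
    """
    if (soft_time_limit is not None and time_limit is not None
            and soft_time_limit > time_limit):
        raise ValueError(
            f"soft_time_limit ({soft_time_limit}s) must not exceed "
            f"time_limit ({time_limit}s)"
        )
```

A guard like this turns a silent misconfiguration into an immediate, searchable error at task-definition time.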
Why This Breaks in Prod
- Shows up under Python 3.11 in real deployments (not just unit tests).
- The `soft_time_limit` was allowed to exceed the `time_limit`, causing unexpected task behavior.
- Production symptom (often without a traceback): tasks fail to terminate as expected, leading to prolonged execution and resource exhaustion.
Proof / Evidence
- GitHub issue: #9125
- Fix PR: https://github.com/celery/celery/pull/9173
- First fixed release: 5.5.0
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.85
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.70
Failure Signature (Search String)
- No distinct error string was captured for this issue (signature-only; no traceback).
- Suggested search terms: `soft_time_limit`, `time_limit`, celery issue #9125.
Error Message
Signature-only (no traceback captured). The failure manifests as tasks that do not terminate as expected, not as a specific exception.
Minimal Reproduction
# Requires the pytest-celery plugin, which provides the `celery_setup`
# and `default_worker_tasks` fixtures, plus a local `tasks` module
# defining `long_task` with soft_time_limit > time_limit.
import pytest

@pytest.fixture
def default_worker_tasks(default_worker_tasks: set) -> set:
    # Register the local tasks module with the test worker.
    import tasks
    default_worker_tasks.add(tasks)
    return default_worker_tasks

def test_hello_world(celery_setup):
    from tasks import long_task
    # On affected versions the task overruns its limits and never
    # returns "OK".
    assert long_task.s().apply_async().get() != "OK"
Environment
- Python: 3.11
What Broke
Tasks may not terminate as expected, leading to prolonged execution and resource exhaustion.
Why It Broke
The `soft_time_limit` was allowed to exceed the `time_limit`, causing unexpected task behavior.
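Why an inverted configuration misbehaves can be illustrated with a small helper. This is a conceptual sketch of the two limits' semantics, not Celery code:

```python
def limit_that_fires_first(soft_time_limit, time_limit):
    """Return which limit interrupts a long-running task first.

    In Celery, the soft limit raises SoftTimeLimitExceeded inside the
    task (allowing cleanup), while the hard limit kills the worker
    process outright. If soft >= hard, the hard kill always happens
    first and the task never gets a chance to clean up.
    """
    if time_limit is None:
        return "soft" if soft_time_limit is not None else "none"
    if soft_time_limit is None or soft_time_limit >= time_limit:
        return "hard"
    return "soft"
```

With `soft_time_limit=120, time_limit=60`, the hard kill fires at 60s and the soft limit is dead code, which matches the "tasks may not terminate as expected" symptom above.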
Fix Options (Details)
Option A — Upgrade to fixed release (safe default, recommended)
pip install celery==5.5.0
Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.
Option D — Guard side-effects with OnceOnly (guardrail for external side-effects)
Mitigate duplicate external side-effects under retries/timeouts/agent loops by gating the operation before calling external systems.
- Place OnceOnly between your code/agent and real side-effects (Stripe, emails, CRM, APIs).
- Use a stable key per side-effect (e.g., customer_id + action + idempotency_key).
- Fail-safe: configure fail-open vs fail-closed based on blast radius and spend risk.
Example snippet:
import os
from onceonly import OnceOnly

once = OnceOnly(api_key=os.environ["ONCEONLY_API_KEY"], fail_open=True)

def process_webhook(event_id):
    # Stable idempotency key per real side-effect: a request id, job id,
    # webhook delivery id, Stripe event id, etc.
    key = f"stripe:webhook:{event_id}"
    res = once.check_lock(key=key, ttl=3600)
    if res.duplicate:
        return {"status": "already_processed"}
    # Safe to execute the side-effect exactly once.
    handle_event(event_id)  # your handler for the event
    return {"status": "processed"}
Fix reference: https://github.com/celery/celery/pull/9173
First fixed release: 5.5.0
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- Do not apply this fix if your application relies on `soft_time_limit` exceeding `time_limit` for specific behavior.
- Do not use this to hide logic bugs or data corruption. Use it to block duplicate external side-effects and enforce tool permissions/spend caps.
Verify Fix
Re-run the minimal reproduction on your broken version to confirm the failure, then apply the fix and re-run to confirm it no longer reproduces.
Prevention
- Capture the exact failing error string in logs and tests so you can reproduce via a minimal script.
- Pin production dependencies and upgrade only with a reproducible test that hits the failing path.
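The prevention advice above can also be enforced as a startup or CI guard. A sketch, assuming you can build a mapping of task names to their options (wiring it to a real Celery app's registered tasks is left to the reader):

```python
def find_inverted_limits(task_options):
    """Return names of tasks whose soft_time_limit exceeds time_limit.

    `task_options` maps task name -> option dict. In a real app you
    would build this mapping from your registered tasks and fail fast
    at startup (or in a unit test) if the result is non-empty.
    """
    return sorted(
        name
        for name, opts in task_options.items()
        if opts.get("soft_time_limit") is not None
        and opts.get("time_limit") is not None
        and opts["soft_time_limit"] > opts["time_limit"]
    )
```

Running this in CI catches the misconfiguration before it reaches a worker, independently of which Celery version is deployed.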
Version Compatibility Table
| Version | Status |
|---|---|
| < 5.5.0 | Affected (see issue #9125) |
| 5.5.0 | Fixed |
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.