The Fix
Fixes an infinite loop that can occur when using aiohttp in combination with async-solipsism by changing the condition from '<=' to '<'.
Based on closed aio-libs/aiohttp issue #10149; the fix PR is linked below.
Production note: This tends to surface only under concurrency. Reproduce with load tests and watch for lock contention/cancellation paths.
@@ -0,0 +1,4 @@
+Fixed an infinite loop that can occur when using aiohttp in combination
+with `async-solipsism`_ -- by :user:`bmerry`.
+
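The one-character change is easiest to see with a toy model. The sketch below is not aiohttp's actual code; it only illustrates how, under a virtual clock that advances solely while the loop is idle, a `<=` deadline check can keep a timer "due" forever, while `<` lets the loop go idle so the simulated clock can move on:

```python
import operator

def spins_forever(is_due, max_ticks=1000):
    """Simulate a scheduler whose virtual clock advances only when no
    timer is due. Returns True if the tick loop never goes idle."""
    now = 0.0
    deadline = 0.0
    for _ in range(max_ticks):  # cap iterations instead of hanging
        if is_due(deadline, now):
            deadline = now      # handler re-arms at the current instant
        else:
            return False        # loop went idle; the clock may now advance
    return True

print(spins_forever(operator.le))  # '<=': True  (never goes idle)
print(spins_forever(operator.lt))  # '<' : False (idle immediately)
```

Against a real clock the two comparisons are indistinguishable at microsecond resolution, which is why the bug only manifests under a simulated clock.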
Why This Fix Works in Production
- Trigger: running the minimal reproduction below under async-solipsism; the test hangs instead of completing.
- Mechanism: Fixes an infinite loop that can occur when using aiohttp in combination with async-solipsism by changing the condition from '<=' to '<'.
- If left unfixed, tests that drive aiohttp under a simulated clock hang indefinitely, surfacing as stuck pytest runs and CI timeouts.
Why This Breaks in Prod
- Reported under Python 3.12.3. Per the maintainer comment quoted below, the `<=`/`<` difference is in microseconds and unlikely to matter against a real clock; the deterministic failure is in tests using a virtual clock.
- Symptom (often without a traceback): the test run hangs.
Proof / Evidence
- GitHub issue: #10149
- Fix PR: https://github.com/aio-libs/aiohttp/pull/10151
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.70
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.56
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“Changing it from <= to < is unlikely to have a material effect on production since its in microseconds and its likely below the resolution…”
Failure Signature (Search String)
- assert resp.status == 404
- Run pytest. The test hangs.
Error Message
Signature-only (no traceback captured)
Minimal Reproduction
import asyncio
import async_solipsism
import pytest
from aiohttp import web, test_utils


@pytest.fixture
def event_loop_policy():
    return async_solipsism.EventLoopPolicy()


@pytest.fixture(autouse=True)
def mock_start_connection(monkeypatch):
    monkeypatch.setattr(
        "aiohappyeyeballs.start_connection",
        async_solipsism.aiohappyeyeballs_start_connection,
    )


def socket_factory(host, port, family):
    return async_solipsism.ListenSocket((host, port))


async def test_integration():
    app = web.Application()
    async with test_utils.TestServer(app, socket_factory=socket_factory) as server:
        async with test_utils.TestClient(server) as client:
            resp = await client.post("/hey", json={})
            assert resp.status == 404
            await asyncio.sleep(10000)
            resp = await client.post("/hey", json={})
            assert resp.status == 404
Environment
- Python: 3.12.3
What Broke
Tests using async_solipsism can get stuck in an infinite loop, causing timeouts.
Fix Options (Details)
Option A — Apply the official fix
Fixes an infinite loop that can occur when using aiohttp in combination with async-solipsism by changing the condition from '<=' to '<'.
Option D — Guard side-effects with a OnceOnly guardrail
Mitigate duplicate external side-effects under retries/timeouts/agent loops by gating the operation before calling external systems.
- Place OnceOnly between your code/agent and real side-effects (Stripe, emails, CRM, APIs).
- Use a stable key per side-effect (e.g., customer_id + action + idempotency_key).
- Fail-safe: configure fail-open vs fail-closed based on blast radius and spend risk.
Example snippet
from onceonly import OnceOnly
import os

once = OnceOnly(api_key=os.environ["ONCEONLY_API_KEY"], fail_open=True)


def process_webhook(event_id):
    # Stable idempotency key per real side-effect.
    # Use a request id / job id / webhook delivery id / Stripe event id, etc.
    key = f"stripe:webhook:{event_id}"
    res = once.check_lock(key=key, ttl=3600)
    if res.duplicate:
        return {"status": "already_processed"}
    # Safe to execute the side-effect exactly once.
    return handle_event(event_id)  # handle_event is your own handler
Fix reference: https://github.com/aio-libs/aiohttp/pull/10151
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- This fix is not suitable for non-testing environments where real time is used.
- Do not use this to hide logic bugs or data corruption. Use it to block duplicate external side-effects and enforce tool permissions/spend caps.
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
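Because the broken version hangs rather than erroring, bound the re-run with a hard deadline so a regression fails fast instead of wedging the run. A minimal watchdog sketch (generic asyncio, not aiohttp-specific; `hangs()` is a stand-in for the repro test body):

```python
import asyncio

async def hangs():
    # Stand-in for the repro test body that wedges on the broken version.
    await asyncio.sleep(10**6)

async def guarded(coro, limit):
    # Hard deadline: raises TimeoutError instead of hanging the run.
    return await asyncio.wait_for(coro, timeout=limit)

try:
    asyncio.run(guarded(hangs(), limit=0.05))
except asyncio.TimeoutError:
    print("hung: regression still present")
```

The same effect at the pytest level comes from any per-test timeout plugin configured for the repro.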
Prevention
- Add a stress test that runs high-concurrency workloads and fails on thread dumps / blocked locks.
- Enable watchdog dumps in prod (faulthandler, thread dump endpoint) to capture deadlocks quickly.
- Make timeouts explicit and test them (unit + integration) to avoid silent behavior changes.
- Instrument retries (attempt count + reason) and alert on spikes to catch dependency slowdowns.
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.