
The Fix

Fixes excessive DNSResolver object churn when using multiple sessions by implementing a shared resolver management system.

Based on closed aio-libs/aiohttp issue #10847; fixed via PR aio-libs/aiohttp#10897.

Production note: Most teams hit this during upgrades or environment changes. Roll out with a canary and smoke critical endpoints (health, OpenAPI/docs) before 100%.

Changelog excerpt from the fix (truncated in the source):

Implemented shared DNS resolver management to fix excessive resolver object creation when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures only one ``DNSResolver`` object is created for default configurations, significantly […]

Why This Fix Works in Production

  • Trigger: running multiple client sessions with default settings, each of which constructs its own DNSResolver
  • Mechanism: excessive DNSResolver object creation occurs when sessions are used without a shared resolver; the fix routes all default-configured sessions through one shared resolver
Production impact:
  • If left unfixed, the same configuration can degrade only under production load: resolver objects pile up, raising memory usage and, per the linked PR, blocking the event loop with file reads.

Why This Breaks in Prod

  • Excessive DNSResolver object creation occurs when multiple client sessions are used without a shared resolver
  • Production symptom (often without a traceback): rising memory usage and event-loop blocking as each new session constructs another DNSResolver

Proof / Evidence

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“Also we don't benchmark requests without a session so we likely have a blindspot here”
@bdraco · 2025-05-09 · source
“https://github.com/aio-libs/aiohttp/pull/10848 As suspected [screenshot] And its reading files in the event loop 🙈”
@bdraco · 2025-05-09 · source
“Too many DNSResolver instances is indeed a problem for Home Assistant Supervisor too”
@agners · 2025-05-13 · source
“Some failure modes appear to be quite severe See related discussion: https://github.com/home-assistant/core/issues/144802”
@bdraco · 2025-05-15 · source

Failure Signature (Search String)

  • Some failure modes appear to be quite severe (no traceback was captured for this issue; this maintainer phrase is the closest search string)
Copy-friendly signature
signature.txt
Failure Signature ----------------- Some failure modes appear to be quite severe

Error Message

Signature-only (no traceback captured)
error.txt
Error Message ------------- Signature-only (no traceback captured): Some failure modes appear to be quite severe

Minimal Reproduction

repro.py (note: the captured snippet is C initialization code from c-ares, the resolver library underlying aiodns, rather than a Python reproduction)

/* Initialize channel to run queries; a single channel can accept
 * unlimited queries. */
if (ares_init_options(&channel, &options, optmask) != ARES_SUCCESS) {
    printf("c-ares initialization issue\n");
    return 1;
}
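Since the snippet above is C rather than Python, here is a hedged sketch of how the churn could be measured. ``FakeResolver`` and ``open_session`` are stand-ins invented for illustration; against a real environment you would count constructions of ``aiodns.DNSResolver`` instead:

```python
class FakeResolver:
    """Stand-in for aiodns.DNSResolver; counts how often it is built."""
    created = 0

    def __init__(self):
        type(self).created += 1

def open_session():
    # Models pre-fix behavior: every new client session constructs
    # its own resolver object instead of sharing one.
    return {"resolver": FakeResolver()}

sessions = [open_session() for _ in range(5)]
print(FakeResolver.created)  # 5: one resolver per session, the reported churn
```

After the fix, the same count for default-configured sessions should stay at one.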

What Broke

Multiple DNSResolver instances lead to increased memory usage and potential performance degradation.

Why It Broke

Excessive DNSResolver object creation occurs when multiple client sessions are used without a shared resolver

Fix Options (Details)

Option A — Apply the official fix

Fixes excessive DNSResolver object churn when using multiple sessions by implementing a shared resolver management system.

When NOT to use: This fix is not applicable if custom resolver configurations are required for each session.

Fix reference: https://github.com/aio-libs/aiohttp/pull/10897
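As a hedged sketch, a startup guard can flag interpreters still running a pre-fix aiohttp. The ``3.12.0`` threshold below is an assumption drawn from the PR's timeline, not a confirmed release number; verify which release shipped PR #10897 in the aiohttp changelog before relying on it:

```python
# Hypothetical startup guard. THRESHOLD is an assumption; confirm the
# release that shipped aio-libs/aiohttp#10897 against the changelog.
THRESHOLD = (3, 12, 0)

def has_shared_resolver_fix(version: str) -> bool:
    """Return True if `version` is at or past the assumed fix release."""
    parts = []
    for token in version.split(".")[:3]:
        digits = "".join(ch for ch in token if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    parts += [0] * (3 - len(parts))  # pad e.g. "3.12" to (3, 12, 0)
    return tuple(parts) >= THRESHOLD

# In a real service, feed in importlib.metadata.version("aiohttp"):
print(has_shared_resolver_fix("3.11.18"))  # False
print(has_shared_resolver_fix("3.12.4"))   # True
```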

Last verified: 2026-02-09. Validate in your environment.


When NOT to Use This Fix

  • This fix is not applicable if custom resolver configurations are required for each session.

Verify Fix

verify
Re-run the minimal reproduction on your broken version, then apply the fix and re-run; with the fix, default-configured sessions should share a single DNSResolver instance.


Prevention

  • Capture the exact failing error string in logs and tests so you can reproduce via a minimal script.
  • Pin production dependencies and upgrade only with a reproducible test that hits the failing path.
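For example, a pinned requirements file (the version numbers below are illustrative placeholders, not recommendations for specific releases):

```text
# requirements.txt — exact pins so upgrades happen only through review
aiohttp==3.12.4
aiodns==3.2.0
```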

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.