The Fix
Fixes excessive DNSResolver object churn when using multiple sessions by implementing a shared resolver management system.
Based on closed aio-libs/aiohttp issue #10847 · PR/commit linked
Production note: Most teams hit this during upgrades or environment changes. Roll out with a canary and smoke critical endpoints (health, OpenAPI/docs) before 100%.
@@ -0,0 +1,5 @@
+Implemented shared DNS resolver management to fix excessive resolver object creation
+when using multiple client sessions. The new ``_DNSResolverManager`` singleton ensures
+only one ``DNSResolver`` object is created for default configurations, significantly
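The shared-manager idea can be sketched in plain Python. This is an illustrative singleton, not aiohttp's actual ``_DNSResolverManager`` implementation: every client with a default configuration acquires the same resolver object, and the object is dropped once the last client releases it.

```python
import threading


class _SharedResolverManager:
    """Illustrative singleton sketch (not aiohttp's real code): hand every
    default-configured client the same resolver instead of building one
    per session."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance._resolver = None
                cls._instance._client_count = 0
        return cls._instance

    def acquire(self):
        # Lazily create the single shared resolver on first use.
        if self._resolver is None:
            self._resolver = object()  # stand-in for a DNSResolver channel
        self._client_count += 1
        return self._resolver

    def release(self):
        self._client_count -= 1
        if self._client_count == 0:
            # Drop the channel once no session needs it.
            self._resolver = None


m1, m2 = _SharedResolverManager(), _SharedResolverManager()
r1, r2 = m1.acquire(), m2.acquire()
print(m1 is m2, r1 is r2)  # both "sessions" see one manager and one resolver
```

Reference counting in ``release`` mirrors the changelog's intent: the shared object lives only as long as at least one default-configured client exists.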
Why This Fix Works in Production
- Trigger: creating many client sessions, each with the default resolver configuration
- Mechanism: excessive DNSResolver object creation occurs when multiple client sessions are used without a shared resolver
- Impact: left unfixed, the churn inflates memory usage and can degrade performance, often surfacing only under production load where session counts are high
Why This Breaks in Prod
- Excessive DNSResolver object creation occurs when multiple client sessions are used without a shared resolver
- Production symptom (often without a traceback): rising memory usage and redundant resolver setup work on the event loop (the issue thread notes resolver configuration files being read in the event loop)
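A dependency-free sketch of the churn pattern described above, using stand-in classes (``FakeResolver`` and ``FakeSession`` are illustrative names, not aiohttp APIs):

```python
class FakeResolver:
    """Stand-in for aiohttp's DNSResolver; counts instantiations."""
    created = 0

    def __init__(self):
        FakeResolver.created += 1


class FakeSession:
    """Pre-fix behaviour: every session builds a private resolver
    unless one is passed in explicitly."""

    def __init__(self, resolver=None):
        self.resolver = resolver if resolver is not None else FakeResolver()


# Pre-fix: 100 sessions -> 100 resolver objects.
churn = [FakeSession() for _ in range(100)]
print(FakeResolver.created)  # 100

# Shared-resolver idea: one resolver, zero additional objects.
shared = FakeResolver()
before = FakeResolver.created
pooled = [FakeSession(resolver=shared) for _ in range(100)]
print(FakeResolver.created - before)  # 0
```

The second run shows why the fix works: object creation scales with resolver configurations, not with session count.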
Proof / Evidence
- GitHub issue: #10847
- Fix PR: https://github.com/aio-libs/aiohttp/pull/10897
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.70
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.68
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“Also we don't benchmark requests without a session so we likely have a blindspot here”
“https://github.com/aio-libs/aiohttp/pull/10848 As suspected [profiling screenshot] And its reading files in the event loop 🙈”
“Too many DNSResolver instances is indeed a problem for Home Assistant Supervisor too”
“Some failure modes appear to be quite severe See related discussion: https://github.com/home-assistant/core/issues/144802”
Failure Signature (Search String)
- Some failure modes appear to be quite severe
Copy-friendly signature
Failure Signature
-----------------
Some failure modes appear to be quite severe
Error Message
-------------
Signature-only (no traceback captured)
Minimal Reproduction
No self-contained Python reproduction was captured for this report. The c-ares fragment below illustrates the rationale for sharing: a single initialized channel can accept unlimited queries, so one resolver can serve every default-configured session.
/* Initialize channel to run queries, a single channel can accept unlimited
* queries */
if (ares_init_options(&channel, &options, optmask) != ARES_SUCCESS) {
printf("c-ares initialization issue\n");
return 1;
}
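For a Python-level probe, the hedged snippet below counts distinct resolver objects across sessions. It inspects ``connector._resolver``, a private attribute of ``TCPConnector`` that may change across aiohttp releases, so treat it as a diagnostic rather than an API; it skips cleanly when aiohttp is not installed.

```python
import asyncio
import importlib.util

# Hedged probe: ``_resolver`` is private and release-dependent.
if importlib.util.find_spec("aiohttp") is None:
    distinct = None
    print("aiohttp not installed; probe skipped")
else:
    import aiohttp

    async def count_resolvers(n: int) -> int:
        sessions = [aiohttp.ClientSession() for _ in range(n)]
        try:
            return len({id(s.connector._resolver) for s in sessions})
        finally:
            for s in sessions:
                await s.close()

    distinct = asyncio.run(count_resolvers(5))
    # Affected versions tend to report 5 here; patched versions, 1.
    print(f"distinct resolver objects across 5 sessions: {distinct}")
```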
What Broke
Multiple DNSResolver instances lead to increased memory usage and potential performance degradation.
Why It Broke
Excessive DNSResolver object creation occurs when multiple client sessions are used without a shared resolver
Fix Options (Details)
Option A — Apply the official fix
Fixes excessive DNSResolver object churn when using multiple sessions by implementing a shared resolver management system.
Fix reference: https://github.com/aio-libs/aiohttp/pull/10897
Last verified: 2026-02-09. Validate in your environment.
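On versions predating the fix, a possible workaround is to construct one resolver yourself and pass it to every connector via ``TCPConnector``'s public ``resolver`` parameter. The sketch below uses ``aiohttp.resolver.ThreadedResolver`` as the shared instance and verifies the sharing; it skips cleanly when aiohttp is not installed.

```python
import asyncio
import importlib.util

if importlib.util.find_spec("aiohttp") is None:
    shared_ok = None
    print("aiohttp not installed; example skipped")
else:
    import aiohttp
    from aiohttp.resolver import ThreadedResolver

    async def main() -> bool:
        # One resolver, handed to every connector explicitly.
        resolver = ThreadedResolver()
        sessions = [
            aiohttp.ClientSession(
                connector=aiohttp.TCPConnector(resolver=resolver)
            )
            for _ in range(3)
        ]
        try:
            return len({id(s.connector._resolver) for s in sessions}) == 1
        finally:
            for s in sessions:
                await s.close()

    shared_ok = asyncio.run(main())
    print(f"all sessions share one resolver: {shared_ok}")
```

Note this is exactly the configuration the "When NOT to Use" caveat carves out: if each session genuinely needs its own resolver settings, do not share an instance.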
When NOT to Use This Fix
- This fix is not applicable if custom resolver configurations are required for each session.
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Did This Fix Work in Your Case?
Quick signal helps us prioritize which fixes to verify and improve.
Prevention
- Capture the exact failing error string in logs and tests so you can reproduce via a minimal script.
- Pin production dependencies and upgrade only with a reproducible test that hits the failing path.
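The first prevention bullet can be made concrete with a small, dependency-free check that scans captured log lines for this report's failure signature (the ``SIGNATURE`` constant and ``matching_lines`` helper are illustrative, not part of any library):

```python
# Sketch: turn the failure signature into a cheap regression check
# over captured log lines (signature string taken from this report).
SIGNATURE = "Some failure modes appear to be quite severe"


def matching_lines(log_lines):
    """Return 1-based indices of log lines containing the signature."""
    return [i for i, line in enumerate(log_lines, 1) if SIGNATURE in line]


log = [
    "supervisor starting",
    "Some failure modes appear to be quite severe",
    "shutdown complete",
]
print(matching_lines(log))  # [2]
```

Wiring a check like this into CI gives a reproducible trip-wire before dependency upgrades reach production.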
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.