The Fix
Expands the benchmark suite by adding schema generation benchmarks for models with custom field and model validators.
Based on the closed pydantic/pydantic issue #9711; the fix PR is linked under Proof / Evidence below.
Excerpt from the linked fix (import change only):

@@ -1,9 +1,19 @@
-from typing import Dict, Generic, List, Literal, Optional, TypeVar, Union, get_origin
+from typing import Any, Dict, Generic, List, Literal, Optional, TypeVar, Union, get_origin
Why This Fix Works in Production
- Trigger: a maintainer comment on the issue: "one thing for more involved contributors to look at would be benchmarking specific parts of our core schema generation process in `pydantic`".
- Mechanism: core schema generation had no benchmark coverage, so regressions in that path went unmeasured; the fix adds benchmarks that exercise models with custom field and model validators, making such regressions visible.
Why This Breaks in Prod
- The benchmark suite lacked coverage for core schema generation in Pydantic, so regressions there could ship unnoticed.
- Production symptom (often without a traceback): model class creation, and therefore application import/startup, gets slower after a pydantic upgrade with no error pointing at the cause. A quick local check is sketched below.
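A minimal way to check this locally, using only the standard library; the model below is illustrative, not taken from the issue:

```python
# Quick local check: time how long it takes to define a pydantic model
# class, since core schema generation runs at class-definition time.
import timeit

stmt = """
from pydantic import BaseModel, field_validator

class M(BaseModel):
    x: int

    @field_validator('x')
    @classmethod
    def check_x(cls, v: int) -> int:
        return v
"""

# Re-executes the class definition (i.e. schema generation) 100 times;
# compare the total across pydantic versions to spot regressions.
print(timeit.timeit(stmt=stmt, number=100))
```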
Proof / Evidence
- GitHub issue: #9711
- Fix PR: https://github.com/pydantic/pydantic/pull/10271
- Reproduced locally: No (not executed)
- Last verified: 2026-02-12
- Confidence: 0.70
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.72
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“Alright, I've designed some more helpful criteria here re how we can close this issue”
“Do you have a plan on what general areas we want to focus on increasing benchmark coverage? (i.e. serialization, deserialization, import, validation, etc)”
“Any of these would be great”
“This probably needs more specific acceptance criteria - when are we done with this?”
Failure Signature (Search String)
I wouldn't say these are necessary to close this issue, but one thing for more involved contributors to look at would be benchmarking specific parts of our core schema generation process in `pydantic`, such as:
- [ ] schema cleaning (including defs / refs simplification)
Error Message
Signature-only (no traceback captured). The signature above is a maintainer comment from the issue thread, not a runtime error.
What Broke
Without benchmarks covering schema generation, performance regressions in that path can land and ship undetected.
Why It Broke
The benchmark suite lacked coverage for core schema generation in Pydantic, in particular for models with custom field and model validators.
Fix Options (Details)
Option A — Apply the official fix
Expands the benchmark suite by adding schema generation benchmarks for models with custom field and model validators.
Fix reference: https://github.com/pydantic/pydantic/pull/10271
Last verified: 2026-02-12. Validate in your environment.
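For orientation, a minimal sketch of the kind of benchmark the fix adds, assuming the pytest-benchmark plugin provides the `benchmark` fixture; the model and test names are illustrative, not the PR's exact code:

```python
# A minimal sketch of a schema-generation benchmark, assuming the
# pytest-benchmark plugin supplies the `benchmark` fixture. This is
# illustrative, not the exact code from PR #10271.
from pydantic import BaseModel, field_validator, model_validator


def test_schema_gen_with_validators(benchmark) -> None:
    def build_model() -> type:
        # Defining the class triggers core schema generation, so the
        # model body lives inside the timed callable.
        class ModelWithValidators(BaseModel):
            x: int
            y: str = 'default'

            @field_validator('x')
            @classmethod
            def check_x(cls, v: int) -> int:
                return v

            @model_validator(mode='after')
            def check_model(self):
                return self

        return ModelWithValidators

    benchmark(build_model)
```

Run under pytest with pytest-benchmark installed; the fixture repeatedly calls `build_model` and reports timing statistics.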
When NOT to Use This Fix
- Skip this fix if your existing benchmarks already cover schema generation for models with custom field and model validators.
Prevention
- Add a CI check that diffs key outputs after upgrades (OpenAPI schema snapshots, JSON payload shapes, CLI output); a snapshot-check sketch follows this list.
- Upgrade behind a canary and run integration tests against the canary before 100% rollout.
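A minimal sketch of the schema-snapshot idea; the `User` model and the snapshot path are illustrative assumptions, not from the issue:

```python
# A minimal schema-snapshot check for CI. The User model and the
# snapshot path are illustrative, not from the issue.
import json
from pathlib import Path

from pydantic import BaseModel


class User(BaseModel):
    id: int
    name: str


def test_json_schema_unchanged() -> None:
    snapshot_path = Path('tests/snapshots/user_schema.json')
    current = User.model_json_schema()
    # Fails if a dependency upgrade silently changes the schema shape;
    # regenerate the snapshot deliberately when a change is intended.
    assert current == json.loads(snapshot_path.read_text())
```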
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.