The Fix
Fixes an issue with using `poll` when checking for both read and write at the same time, which was causing performance degradation in certain scenarios.
Based on the closed psycopg/psycopg issue #1155; the fixing PR/commit is linked below.
@@ -46,6 +46,8 @@ Psycopg 3.3.0 (unreleased)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
+- Fix spurious readiness flags in some of the wait functions (:ticket:`#1141`).
+- Fix high CPU usage using the ``wait_c`` function on Windows (:ticket:`#645`).
- Fix bad data on error in binary copy (:ticket:`#1147`).
Why This Fix Works in Production
- Trigger: moving large rows through the connection (the repro times `insert` and `fetch` around the `assert [row[0] for row in rows] == [row[0] for row in DATA]` check).
- Mechanism: spurious readiness flags from `poll` when waiting on both read and write make the wait loop call send/receive even when no I/O is possible yet, degrading performance.
- If left unfixed, tail latency can spike under load and surface as timeouts/retries (amplifying incident impact).
Why This Breaks in Prod
- Shows up under Python 3.11.6 in real deployments (not just unit tests).
- Spurious readiness flags from `poll` when waiting on both read and write make the wait loop do extra work on every iteration, degrading throughput.
- Production symptom (often without a traceback): large-row inserts and fetches run far slower than expected; the repro's `assert [row[0] for row in rows] == [row[0] for row in DATA]` line is the searchable marker, not an error.
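The mechanism can be illustrated outside psycopg: a socket with free send-buffer space is almost always write-ready, so a `poll` registration for both read and write interest returns immediately even when there is nothing to read. A minimal, self-contained sketch on POSIX (a `socketpair` stands in for the libpq connection socket; this is not psycopg's code):

```python
import select
import socket

# A connected socket pair stands in for the libpq connection socket.
a, b = socket.socketpair()
try:
    p = select.poll()
    # Registering for both read and write interest: the socket's send
    # buffer is empty, so POLLOUT is reported immediately even though
    # nothing has arrived to read.
    p.register(a, select.POLLIN | select.POLLOUT)

    events = p.poll(1000)  # returns right away instead of waiting
    _fd, flags = events[0]
    print(bool(flags & select.POLLOUT))  # True: "ready" wakeup with no data
    print(bool(flags & select.POLLIN))   # False: nothing to read
finally:
    a.close()
    b.close()
```

A wait loop that treats any returned flag as "go do I/O" therefore spins through send/receive attempts instead of sleeping until real readiness.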
Proof / Evidence
- GitHub issue: #1155
- Fix PR: https://github.com/psycopg/psycopg/pull/1141
- Reproduced locally: No (not executed)
- Last verified: 2026-02-09
- Confidence: 0.70
- Did this fix it?: Yes (upstream fix exists)
- Own content ratio: 0.49
Discussion
High-signal excerpts from the issue thread (symptoms, repros, edge-cases).
“@dvarrazzo Tested the snippet from @adamsol on linux, where I cannot repro it. Here are my runtimes: (On linux psycopg3 is a tiny bit slower…”
“@adamsol Thx for your second example - can also repro it here with ubuntu 22, python 3.10, libpq 14 and psycopg master with postgres 17.5…”
“@adamsol Seems I was able to find the culprit for the fetch issue, but yeah - the insert issue is not addressed yet. Sadly I…”
“@adamsol thank you for the repro showing the regression on the Unix socket”
Failure Signature (Search String)
- assert [row[0] for row in rows] == [row[0] for row in DATA]
- the READY_NONE condition also calls send everytime, although nothing can be read/written yet, maybe short circuit it with a continue (caveat: to avoid busy looping a timeout must be set on poll)
Copy-friendly signature
Failure Signature
-----------------
assert [row[0] for row in rows] == [row[0] for row in DATA]
- the READY_NONE condition also calls send everytime, although nothing can be read/written yet, maybe short circuit it with a continue (caveat: to avoid busy looping a timeout must be set on poll)
Error Message
Signature-only (no traceback captured)
Error Message
-------------
assert [row[0] for row in rows] == [row[0] for row in DATA]
- the READY_NONE condition also calls send everytime, although nothing can be read/written yet, maybe short circuit it with a continue (caveat: to avoid busy looping a timeout must be set on poll)
Minimal Reproduction
import time

import psycopg  # installed with psycopg[binary]
import psycopg2

ROW_COUNT = 1
TEXT_LENGTH = 100_000_000
DATA = [(i, 'a' * TEXT_LENGTH) for i in range(ROW_COUNT)]

def measure(func):
    t = time.time()
    func()
    print(f'{(time.time() - t):.3f}s')

def insert():
    print("inserting data...")
    cursor.execute("CREATE TEMPORARY TABLE tmp_table (id int, t text)")
    cursor.execute(' '.join([
        "INSERT INTO tmp_table VALUES",
        ', '.join(str(row) for row in DATA),
        "RETURNING id",
    ]))

def fetch():
    print("fetching data...")
    rows = cursor.fetchall()
    assert [row[0] for row in rows] == [row[0] for row in DATA]

for module in [psycopg2, psycopg]:
    print('=== testing', module.__name__)
    with module.connect(dbname='postgres', user='postgres', password='', host='localhost', port=5432) as conn:
        with conn.cursor() as cursor:
            measure(insert)
            measure(fetch)
Environment
- Python: 3.11.6
What Broke
Inserting and fetching large rows through psycopg 3 is dramatically slower than through psycopg2, causing significant application delays.
Why It Broke
The `poll`-based wait functions reported spurious readiness when checking for both read and write at the same time, so the wait loop performed needless send/receive work on each iteration and throughput degraded.
Fix Options (Details)
Option A — Apply the official fix
Fixes an issue with using `poll` when checking for both read and write at the same time, which was causing performance degradation in certain scenarios.
Fix reference: https://github.com/psycopg/psycopg/pull/1141
Last verified: 2026-02-09. Validate in your environment.
When NOT to Use This Fix
- This fix should not be applied if the application relies on the previous behavior of the poll function.
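The thread's other suggestion (short-circuit the READY_NONE case with a `continue`, with a timeout on `poll` to avoid busy looping) can be sketched as a pure readiness-classification step. The flag names and the `classify` helper below are illustrative, not psycopg's internals:

```python
import select

# Illustrative readiness flags (not psycopg's actual constants).
READY_NONE, READY_R, READY_W = 0, 0b01, 0b10

def classify(events):
    """Map poll() events to readiness flags; READY_NONE means the loop
    should `continue` instead of calling send()/receive() again."""
    ready = READY_NONE
    for _fd, flags in events:
        if flags & select.POLLIN:
            ready |= READY_R
        if flags & select.POLLOUT:
            ready |= READY_W
    return ready

# A timed-out poll() returns no events: skip the I/O calls entirely.
print(classify([]) == READY_NONE)                  # True
print(classify([(3, select.POLLOUT)]) == READY_W)  # True
```

The timeout on `poll` is what makes the `continue` safe: without it, a loop that skips I/O on READY_NONE would spin on an instantly-returning `poll`.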
Verify Fix
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.
Prevention
- Make timeouts explicit and test them (unit + integration) to avoid silent behavior changes.
- Instrument retries (attempt count + reason) and alert on spikes to catch dependency slowdowns.
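The first prevention point can be made concrete: a wait helper that takes an explicit timeout is easy to test for "actually slept" behavior, catching busy loops before they ship. A minimal sketch for POSIX (the `wait_readable` helper is hypothetical):

```python
import select
import socket
import time

def wait_readable(sock, timeout_ms):
    """Wait for readability only, with an explicit timeout."""
    p = select.poll()
    p.register(sock, select.POLLIN)  # read interest only: no spurious POLLOUT
    return p.poll(timeout_ms)

a, b = socket.socketpair()
try:
    t0 = time.monotonic()
    events = wait_readable(a, 100)   # nothing to read yet
    elapsed = time.monotonic() - t0
    # The call returned no events and really waited ~100 ms,
    # i.e. it blocked instead of busy looping.
    print(events == [] and elapsed >= 0.09)  # True
finally:
    a.close()
    b.close()
```

A unit test asserting both the empty result and the elapsed time pins down the timeout contract, so a regression that starts returning early (or spinning) fails loudly.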
Related Issues
No related fixes found.
Sources
We don’t republish the full GitHub discussion text. Use the links above for context.