
The Fix

Improves performance of the COPY operation on macOS by adjusting the flushing behavior.

Based on closed psycopg/psycopg issue #1047; the fix PR/commit is linked below.

Production note: Watch p95/p99 latency and retry volume; timeouts can turn into retry storms and duplicate side-effects.

@@ -53,6 +53,7 @@ Psycopg 3.1.19 (unreleased)
 - Allow to specify the ``connect_timeout`` connection parameter as float
   (:ticket:`#796`).
+- Improve COPY performance on macOS (:ticket:`#745`).

Why This Fix Works in Production

  • Trigger: Background jobs experience excessive memory consumption when writing large datasets to PostgreSQL.
  • Mechanism: High memory usage occurs due to inefficient buffering during COPY operations.
Production impact:
  • If left unfixed, tail latency can spike under load and surface as timeouts/retries (amplifying incident impact).

Why This Breaks in Prod

  • High memory usage occurs due to inefficient buffering during COPY operations.
  • Production symptom (often without a traceback): Background jobs experience excessive memory consumption when writing large datasets to PostgreSQL.
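The buffering mechanism described above can be illustrated with a toy, psycopg-free sketch. `StreamBuffer` and the 8 KiB threshold below are illustrative assumptions, not the library's actual internals: the point is only that a buffer flushed solely at the end grows with the dataset, while a threshold-flushed buffer stays bounded.

```python
# Toy illustration (NOT psycopg internals): unbounded vs. threshold-based
# buffering during a COPY-style row stream.

class StreamBuffer:
    def __init__(self, flush_at=None):
        self.flush_at = flush_at  # None = buffer everything until the end
        self.buf = bytearray()
        self.peak = 0             # peak buffered bytes observed
        self.flushed = 0          # total bytes handed downstream

    def write(self, chunk: bytes):
        self.buf += chunk
        self.peak = max(self.peak, len(self.buf))
        if self.flush_at is not None and len(self.buf) >= self.flush_at:
            self.flush()

    def flush(self):
        self.flushed += len(self.buf)
        self.buf.clear()

row = b"x" * 100
unbounded = StreamBuffer()
bounded = StreamBuffer(flush_at=8192)  # flush once ~8 KiB is buffered
for _ in range(10_000):
    unbounded.write(row)
    bounded.write(row)
unbounded.flush()
bounded.flush()
print(unbounded.peak, bounded.peak)  # 1000000 8200
```

Both variants deliver the same bytes; only the peak resident buffer differs, which is why the symptom scales with dataset size.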

Proof / Evidence

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“Hello @martinlehoux We are definitely open to improvements to allow to tweak the adapter's behaviour in special cases”
@dvarrazzo · 2025-04-18
“Thanks for the detailed answer! I am trying to reproduce my use case in a much simpler setup (no Django) to make it easy to…”
@martinlehoux · 2025-04-19
“Unfortunately I was unable to reproduce, I'll close for now! Thanks for your initial help however”
@martinlehoux · 2025-08-27

Failure Signature (Search String)

  • Background jobs experience excessive memory consumption when writing large datasets to PostgreSQL.
Copy-friendly signature
signature.txt
Failure Signature
-----------------
Background jobs experience excessive memory consumption when writing large datasets to PostgreSQL.

Error Message

Signature-only (no traceback captured)
error.txt
Error Message
-------------
Background jobs experience excessive memory consumption when writing large datasets to PostgreSQL.

Minimal Reproduction

repro.py
copy_statement = "COPY table (x, y, z) FROM STDIN"
# separate connection: otherwise conflicts with the query
copy_connection = connections.create_connection("default")
with copy_connection.cursor() as cursor, cursor.copy(copy_statement) as copy:
    for data in progress:
        copy.write_row((data.x, data.y, data.z))
copy_connection.commit()
copy_connection.close()
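To quantify the symptom instead of eyeballing it, you can wrap the reproduction in the standard-library `tracemalloc` and compare peak memory before and after applying the fix. The list-building `run_copy_job` below is a stand-in so this sketch runs without a database; substitute the real COPY loop from the repro.

```python
# Measure peak Python-allocated memory around the COPY job with tracemalloc.
import tracemalloc

def run_copy_job():
    # Stand-in for the COPY loop above; replace with the real reproduction.
    rows = [(i, i * 2, i * 3) for i in range(100_000)]
    return len(rows)

tracemalloc.start()
run_copy_job()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"peak traced memory: {peak / 1024 / 1024:.1f} MiB")
```

Run it once on the broken version and once on the fixed version; a real fix should show a markedly lower peak for the same dataset.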

What Broke

Background jobs experience excessive memory consumption when writing large datasets to PostgreSQL.

Why It Broke

High memory usage occurs due to inefficient buffering during COPY operations.

Fix Options (Details)

Option A — Apply the official fix

Improves performance of the COPY operation on macOS by adjusting the flushing behavior.

When NOT to use: This fix may not be suitable for systems where performance is prioritized over memory usage.

Fix reference: https://github.com/psycopg/psycopg/pull/746

Last verified: 2026-02-09. Validate in your environment.



Verify Fix

verify
Re-run the minimal reproduction on your broken version to confirm the symptom, then apply the fix and re-run to confirm it no longer reproduces.
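Assuming the fix ships in psycopg 3.1.19, as the changelog diff above suggests, a minimal version gate can fail the verify script fast on older installs. `has_copy_fix` is a hypothetical helper, and the naive tuple comparison ignores pre-release suffixes like `3.2.0b1`.

```python
# Minimal version gate, assuming the fix lands in psycopg 3.1.19 per the
# changelog entry. has_copy_fix is a hypothetical helper; the naive tuple
# comparison does not handle pre-release suffixes.
def version_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def has_copy_fix(installed: str, fixed_in: str = "3.1.19") -> bool:
    return version_tuple(installed) >= version_tuple(fixed_in)

# In the verify script, pass the installed version, e.g.:
#   import importlib.metadata
#   has_copy_fix(importlib.metadata.version("psycopg"))
print(has_copy_fix("3.1.18"), has_copy_fix("3.1.19"))
```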


Prevention

  • Capture the exact failing error string in logs and tests so you can reproduce via a minimal script.
  • Pin production dependencies and upgrade only with a reproducible test that hits the failing path.
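The first bullet can be sketched as a regression test: pin the exact failure signature as a constant and assert on captured log text, so a recurrence fails a test instead of paging on-call. `log_contains_signature` and the sample log line are illustrative assumptions; the signature string itself is the one documented above.

```python
# Sketch: assert the known failure signature against captured logs.
SIGNATURE = ("Background jobs experience excessive memory consumption "
             "when writing large datasets to PostgreSQL.")

def log_contains_signature(log_text: str) -> bool:
    return SIGNATURE in log_text

# Hypothetical captured log lines for illustration.
sample_log = "2025-08-27 worker=3 level=WARN " + SIGNATURE
assert log_contains_signature(sample_log)
assert not log_contains_signature("2025-08-27 worker=3 level=INFO ok")
```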

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.