
The Fix

Upgrade to version 0.13.0 or later.

Based on the closed encode/httpx issue #413; the fix PR and commit are linked below.

Production note: Watch p95/p99 latency and retry volume; timeouts can turn into retry storms and duplicate side-effects.
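A common client-side mitigation for retry storms is capped exponential backoff with full jitter. A minimal sketch with illustrative names (this is not an httpx API):

```python
import random


def backoff_delay(attempt, base=0.1, cap=10.0):
    """Capped exponential backoff with full jitter.

    attempt is 1-based; the delay is drawn uniformly from
    [0, min(cap, base * 2 ** attempt)], so synchronized clients
    spread their retries out instead of hammering in lockstep.
    """
    return random.uniform(0.0, min(cap, base * 2 ** attempt))


# Example: seven successive retry delays, each bounded by the cap.
delays = [backoff_delay(n) for n in range(1, 8)]
assert all(0.0 <= d <= 10.0 for d in delays)
```

Pairing jittered backoff with a retry budget (a hard cap on total attempts per window) is what keeps a dependency slowdown from amplifying into duplicate side effects downstream.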

Upstream diff (excerpt):

@@ -211,6 +211,67 @@ def _detector_result(self) -> str:
+class LineDecoder:
+    """
+    Handles incrementally reading lines from text.

Why This Fix Works in Production

  • Trigger: consuming a streamed response line by line (e.g. a Kubernetes watch endpoint) with no built-in line iteration API
  • Mechanism: httpx exposed no way to iterate a stream line by line, so every caller had to hand-roll a buffering wrapper
  • Why the fix works: PR #575 added built-in line-by-line iteration of streamed responses (a LineDecoder backing Response.iter_lines()), addressing the feature request in issue #413 (first fixed release: 0.13.0).
Production impact:
  • If left unfixed, tail latency can spike under load and surface as timeouts/retries (amplifying incident impact).

Why This Breaks in Prod

  • The existing API did not support iterating streams line by line, requiring a wrapper implementation
  • Production symptom (often without a traceback): hand-rolled line-buffering wrappers around streamed responses scattered through the codebase, as in the repro below

Proof / Evidence

  • GitHub issue: #413
  • Fix PR: https://github.com/encode/httpx/pull/575
  • First fixed release: 0.13.0
  • Reproduced locally: No (not executed)
  • Last verified: 2026-02-09
  • Confidence: 0.85
  • Did this fix it?: Yes (upstream fix exists)
  • Own content ratio: 0.55

Discussion

High-signal excerpts from the issue thread (symptoms, repros, edge-cases).

“Related to #24, #183”
@lovelydinosaur · 2019-09-30 · source
“Thanks for sharing this snippet @Hanaasagi. I think you’ll understand though that this particular piece of functionality is out of scope for HTTPX, as supported…”
@florimondmanca · 2019-09-30 · source
“Woops, clicked the wrong button”
@florimondmanca · 2019-09-30 · source
“(For your immediate need, is it an option to use the official Kubernetes Python client?) Note that your wrapping algorithm is accidentally quadratic”
@pquentin · 2019-09-30 · source
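The "accidentally quadratic" remark refers to re-concatenating and re-splitting the ever-growing `pending` string on every chunk. A linear-time alternative, sketched here as a hypothetical standalone helper (note it splits on "\n" only, unlike `splitlines()`):

```python
def lines_from_chunks(chunks):
    """Yield complete lines from an iterable of text chunks.

    Partial fragments are kept in a list and joined only when a
    newline arrives, so each character is copied a bounded number
    of times rather than re-scanned on every chunk.
    """
    buf = []  # fragments of the current, not-yet-terminated line
    for chunk in chunks:
        while True:
            i = chunk.find("\n")
            if i == -1:
                if chunk:
                    buf.append(chunk)
                break
            buf.append(chunk[:i])
            yield "".join(buf)
            buf = []
            chunk = chunk[i + 1:]
    if buf:  # flush a trailing line with no final newline
        yield "".join(buf)


# Lines may span chunk boundaries arbitrarily:
print(list(lines_from_chunks(["ab", "c\nde", "f\n", "\n", "g"])))
# → ['abc', 'def', '', 'g']
```

This is the same buffering idea the upstream LineDecoder implements; on fixed httpx releases you should prefer the built-in API over any hand-rolled helper.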

Failure Signature (Search String)

  • timeout = httpx.TimeoutConfig(
  • connect_timeout=5, read_timeout=None, write_timeout=5
Copy-friendly signature
signature.txt
Failure Signature
-----------------
timeout = httpx.TimeoutConfig(
connect_timeout=5, read_timeout=None, write_timeout=5

Error Message

Signature-only (no traceback captured)
error.txt
Error Message
-------------
timeout = httpx.TimeoutConfig(
connect_timeout=5, read_timeout=None, write_timeout=5

Minimal Reproduction

repro.py
import json

import httpx


class StreamWrapper(object):
    def __init__(self, stream):
        self._stream = stream

    def __iter__(self):
        pending = ""
        for chunk in self._stream:
            chunk = pending + chunk
            lines = chunk.splitlines()
            if chunk and lines and lines[-1] and lines[-1][-1] == chunk[-1]:
                pending = lines.pop()
            else:
                pending = ""
            for line in lines:
                yield line
        if pending:
            yield pending


timeout = httpx.TimeoutConfig(
    connect_timeout=5, read_timeout=None, write_timeout=5
)
resp = httpx.get(
    "http://127.0.0.1:18081/api/v1/watch/namespaces/default/pods",
    stream=True,
    timeout=timeout,
)
for chunk in StreamWrapper(resp.stream_text()):
    print(json.loads(chunk))

What Broke

Users needed to create custom wrappers for line-by-line stream processing, leading to increased complexity.

Why It Broke

The existing API did not support iterating streams line by line, so every caller had to implement their own buffering wrapper.

Fix Options (Details)

Option A — Upgrade to fixed release (safe default, recommended)

Upgrade to version 0.13.0 or later.

When NOT to use: This fix is not suitable for synchronous stream processing without proper handling.

Use when you can deploy the upstream fix. It is usually lower-risk than long-lived workarounds.

Fix reference: https://github.com/encode/httpx/pull/575

First fixed release: 0.13.0

Last verified: 2026-02-09. Validate in your environment.


When NOT to Use This Fix

  • This fix is not suitable for synchronous stream processing without proper handling.

Verify Fix

verify
Re-run the minimal reproduction on your broken version, then apply the fix and re-run.


Prevention

  • Add a CI check that diffs key outputs after upgrades (OpenAPI schema snapshots, JSON payload shapes, CLI output).
  • Upgrade behind a canary and run integration tests against the canary before 100% rollout.
  • Make timeouts explicit and test them (unit + integration) to avoid silent behavior changes.
  • Instrument retries (attempt count + reason) and alert on spikes to catch dependency slowdowns.
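The last two bullets can be combined in a framework-free sketch; every name here (`with_retries`, `retry_metrics`) is illustrative, not from any library:

```python
import collections

# Attempt counts and failure reasons, keyed so a dashboard can
# alert on a spike in ("failure", "TimeoutError") entries.
retry_metrics = collections.Counter()


def with_retries(fn, max_attempts=3):
    """Call fn(), retrying on exception and recording one metric per attempt."""
    last_exc = None
    for attempt in range(1, max_attempts + 1):
        retry_metrics[("attempt", attempt)] += 1
        try:
            return fn()
        except Exception as exc:
            retry_metrics[("failure", type(exc).__name__)] += 1
            last_exc = exc
    raise last_exc


# Usage: a flaky call that fails twice, then succeeds.
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated slow dependency")
    return "ok"


result = with_retries(flaky)
# result == "ok"; retry_metrics now records 3 attempts and 2 TimeoutError failures.
```

In production you would emit these counters to your metrics backend instead of an in-process `Counter`, but the shape of the signal (attempt number plus failure reason) is the part worth alerting on.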

Version Compatibility Table

Version    Status
0.13.0     Fixed

Related Issues

No related fixes found.

Sources

We don’t republish the full GitHub discussion text. Use the links above for context.