INTERNALS · CONGESTION CONTROL · TCP

Reno · CUBIC · BBR

Three TCP congestion-control algorithms running over identical lossy links. Same link parameters, three very different sending strategies — sawtooth, cubic recovery, and model-based pacing.

Three answers to one question.

"How fast can I send before the network breaks?" Each algorithm answers differently:

  • Reno probes by halving on every loss — the classic AIMD sawtooth.
  • CUBIC follows a cubic curve back to its previous peak, recovering faster than Reno on high-BDP paths.
  • BBR ignores loss entirely — it models bandwidth and propagation delay, paces to the model, and re-measures both periodically.
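The "high-BDP" qualifier above refers to the bandwidth-delay product — the amount of data that must be in flight to keep the pipe full. A quick back-of-envelope with illustrative numbers (not this demo's defaults):

```python
# Bandwidth-delay product (BDP): bytes a sender must keep in flight
# to fill the pipe. Numbers are illustrative.
bandwidth_bps = 100e6            # 100 Mbit/s link
rtt_s = 0.100                    # 100 ms round-trip time
bdp_bytes = bandwidth_bps / 8 * rtt_s
segments = bdp_bytes / 1460      # in MSS-sized segments
print(f"{bdp_bytes:.0f} bytes ≈ {segments:.0f} segments")
# → 1250000 bytes ≈ 856 segments
```

The larger this number, the longer Reno's linear +1-per-RTT growth takes to refill the pipe after a halving — which is exactly the gap CUBIC's curve was designed to close.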

Try this:

  1. Drag LOSS to 0.5%. Watch Reno's cwnd carve a sawtooth — that's AIMD on display.
  2. Push LOSS to 2%. Reno collapses; CUBIC holds higher; BBR barely flinches because it doesn't react to loss.
  3. Crank RTT to 200ms. BBR's PROBE_BW pacing remains visible; the loss-based algorithms slow their growth.
  4. Watch the PHASE chip on each card. CUBIC stays in cubic; Reno toggles between slow-start and congestion-avoidance; BBR cycles through startup → drain → probe-bw.
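The sawtooth from step 1 can be sketched as a toy per-RTT model. This is a deliberate simplification — one loss check per RTT, no fast recovery or timeouts — and the names are mine, not from any real stack:

```python
import random

def reno_sawtooth(rtts=200, loss_p=0.005, cwnd=1.0, ssthresh=64.0):
    """Toy Reno: slow-start doubling, additive increase, halve on loss."""
    random.seed(1)                                 # reproducible run
    trace = []
    for _ in range(rtts):
        # probability at least one of cwnd segments is lost this RTT
        if random.random() < 1 - (1 - loss_p) ** int(cwnd):
            ssthresh = max(cwnd / 2, 2.0)          # multiplicative decrease
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                              # slow-start
        else:
            cwnd += 1                              # congestion avoidance
        trace.append(cwnd)
    return trace

trace = reno_sawtooth()
# the trace climbs, halves, and climbs again — the AIMD sawtooth
```

Raising `loss_p` to 0.02 in this sketch reproduces step 2's collapse: the halvings arrive faster than the linear growth can recover from.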

Why this matters:

Beanfield runs fiber. The TCP stack on every customer's laptop chooses one of these (or a relative). On a clean fiber link with low loss, all three look similar; the differences appear when Wi-Fi sneezes, when bufferbloat shows up at a peering point, or when an upstream provider starts dropping packets at 0.5%.

Streaming services adopted BBR because it doesn't need loss to "find the rate" — it holds high throughput on paths that would have collapsed Reno.

Reno · slow-start · Textbooks, BSD

AIMD sawtooth — the canonical algorithm. Halves cwnd on loss, grows linearly in congestion avoidance.

CWND 10 · INFLIGHT 0 · RTT 50ms · LOSS 0%
CWND peak 1 · TPUT 1 pps · RTT peak 1ms
CUBIC · slow-start · Linux default since ~2.6.18

Cubic recovery toward W_max. More aggressive than Reno on high-BDP paths.

CWND 10 · INFLIGHT 0 · RTT 50ms · LOSS 0%
CWND peak 1 · TPUT 1 pps · RTT peak 1ms
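The "cubic recovery toward W_max" on this card is the window function from the CUBIC spec (RFC 9438). A minimal sketch with the RFC's default constants — variable names are mine:

```python
BETA = 0.7   # multiplicative-decrease factor (RFC 9438 default)
C = 0.4      # cubic scaling constant (RFC 9438 default)

def cubic_window(t, w_max):
    """cwnd t seconds after a loss: concave climb back to w_max (the
    pre-loss peak), inflection at t = K, convex probing beyond it."""
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)   # seconds to regain w_max
    return C * (t - k) ** 3 + w_max

# at t = 0 the window restarts from BETA * w_max (= 0.7 * w_max);
# it regains w_max at t = K, independent of RTT — which is why CUBIC
# recovers faster than Reno on high-BDP, long-RTT paths
```

Because the curve is a function of wall-clock time since the loss rather than of ACK arrivals, two flows with different RTTs recover their windows on the same schedule.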
BBR · startup · Google · YouTube · Cloudflare

Models bottleneck bandwidth + RTprop. Reacts to delay, not loss. Cycles pacing through PROBE_BW.

CWND 10 · INFLIGHT 0 · RTT 50ms · LOSS 0%
CWND peak 1 · TPUT 1 pps · RTT peak 1ms
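BBR's model reduces to two filtered measurements and gains applied to them. A rough caricature of the PROBE_BW cycle — the gain values follow the published BBR v1 design, but the function names are mine:

```python
# Hypothetical BBR-style sketch: pace to the model, not to loss.
CWND_GAIN = 2.0                  # cwnd cap: 2x BDP absorbs ACK bursts
PROBE_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]   # PROBE_BW pacing cycle

def bdp(btl_bw_pps, rt_prop_s):
    """BDP in packets: max-filtered bottleneck bandwidth times
    min-filtered round-trip propagation delay."""
    return btl_bw_pps * rt_prop_s

def pacing_rate(btl_bw_pps, cycle_index):
    """One PROBE_BW phase: probe up 25%, drain the queue, then cruise."""
    return btl_bw_pps * PROBE_GAINS[cycle_index % len(PROBE_GAINS)]

def cwnd_cap(btl_bw_pps, rt_prop_s):
    return CWND_GAIN * bdp(btl_bw_pps, rt_prop_s)
```

Note that packet loss appears nowhere in this model — which is why the BBR card barely moves when you push LOSS to 2%.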

Each flow runs on its own copy of the link — there's no fairness comparison here, just isolated behaviour. Dropping LOSS to 0% gives you a clean view of each algorithm's growth phase; raising it to 1–2% reveals their backoff signatures.