CWND (Congestion Window)

Networking

The TCP sender-side limit on how much unacknowledged data can be in flight. Controls throughput ramp-up via slow start and congestion avoidance. Critical for CDN performance on new connections.

Updated Mar 17, 2026

Full Explanation

The congestion window (CWND) is the TCP sender's internal limit on how much data it can send before waiting for acknowledgments. It's measured in segments (typically ~1460 bytes each) or bytes. Unlike the receive window (which is set by the receiver), CWND is managed entirely by the sender based on network feedback.

When a TCP connection starts, CWND begins small. The initial CWND (IW) used to be 1 segment, then 2-4 segments, and since 2013 (RFC 6928) it's been 10 segments (about 14 KB). This matters a lot for CDNs: a fresh connection can only send 14 KB before waiting for the first ACK. If your HTML page is 15 KB, it takes at least 2 round trips to deliver, even if the network has 100 Mbps of bandwidth.
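
The arithmetic behind that two-round-trip claim can be sketched as a tiny loop (a sketch, assuming MSS = 1460 bytes, IW = 10, no packet loss, and not counting the TCP/TLS handshake round trips):

```shell
# Round trips needed to push a payload through slow start.
# Assumptions: MSS=1460 bytes, IW=10 segments, no packet loss.
payload=15000          # a 15 KB HTML page
cwnd=10 sent=0 rtts=0
while [ "$sent" -lt "$payload" ]; do
  sent=$((sent + cwnd * 1460))   # one window's worth per round trip
  cwnd=$((cwnd * 2))             # slow start doubles CWND each RTT
  rtts=$((rtts + 1))
done
echo "delivered in $rtts round trips"   # -> delivered in 2 round trips
```

Trimming the page under ~14.6 KB (10 x 1460 bytes) would bring delivery back to a single round trip.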

Slow start doubles CWND every round trip. You start at 10 segments, get ACKs, jump to 20, then 40, 80, and so on. This exponential growth means CWND ramps up fast. But it takes several round trips to reach full speed. On a connection with 100ms RTT, it takes about 4 RTTs (400ms) to grow CWND enough to saturate a 10 Mbps link.
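
That 4-RTT figure can be checked with the same doubling rule (a sketch, assuming IW = 10, MSS = 1460 bytes, and no loss; a 10 Mbps link at 100 ms RTT has a bandwidth-delay product of about 125,000 bytes, roughly 86 segments):

```shell
# Slow-start doublings until CWND covers the bandwidth-delay product.
# BDP = 10 Mbps * 100 ms = 125,000 bytes ~= 86 segments of 1460 bytes.
cwnd=10 rtt=0
while [ "$cwnd" -lt 86 ]; do
  echo "RTT $rtt: cwnd=$cwnd segments"
  cwnd=$((cwnd * 2))
  rtt=$((rtt + 1))
done
echo "link saturated after $rtt RTTs (~$((rtt * 100)) ms at 100 ms RTT)"
```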

Once CWND hits the slow start threshold (ssthresh), TCP switches to congestion avoidance. Now CWND grows by roughly one segment per RTT instead of doubling. This linear growth is more conservative and avoids flooding the network. If a packet is lost, CWND is cut back: classic Reno halves it, CUBIC reduces it by about 30%, and BBR, which is model-based rather than loss-based, reacts differently again.
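
The additive-increase/multiplicative-decrease pattern can be sketched like this (Reno-style halving for simplicity; CUBIC would multiply by 0.7 instead, and the event sequence is invented for illustration):

```shell
# Congestion avoidance: +1 segment per RTT, halve on loss (Reno-style).
cwnd=40 ssthresh=40
for event in ack ack ack loss ack; do   # one "ack" ~= one loss-free RTT
  if [ "$event" = "loss" ]; then
    ssthresh=$((cwnd / 2))   # remember where trouble started
    cwnd=$ssthresh           # multiplicative decrease
  else
    cwnd=$((cwnd + 1))       # additive increase above ssthresh
  fi
  echo "$event -> cwnd=$cwnd ssthresh=$ssthresh"
done
```

This sawtooth shape, which is slow climb and sharp drop, is why long-haul TCP throughput is so sensitive to even occasional loss.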

CDN edge servers are closer to users, which means lower RTT. Lower RTT means CWND grows faster in absolute time (doubling every 5ms vs every 100ms). This is one of the fundamental reasons CDNs make things faster: slow start converges to full throughput much quicker when the edge is nearby.
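
The wall-clock impact is easy to quantify: the number of doublings is the same, only the per-doubling cost changes (a sketch; the four-doublings figure assumes growing from IW = 10 to 160 segments as in the 10 Mbps example above):

```shell
# Same 4 slow-start doublings, different RTTs: nearby edge vs distant origin.
for rtt_ms in 5 100; do
  echo "RTT ${rtt_ms} ms: 4 doublings take $((4 * rtt_ms)) ms"
done
# -> 20 ms at the edge vs 400 ms to a distant origin
```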

Some CDN providers increase the initial CWND beyond the standard 10 segments. Google has experimented with IW of 15-20 on their servers. This is a tradeoff: higher IW means less time in slow start, but if the network can't handle it, you get immediate packet loss and retransmission, which is worse than starting slow.

QUIC (HTTP/3) has its own congestion window that works much like TCP's and is likewise managed per connection. The difference is that QUIC eliminates transport-level head-of-line blocking: a lost packet stalls delivery only on the stream it belongs to, while the other multiplexed streams keep making progress, which makes QUIC's congestion handling more resilient for multiplexed connections.

Examples

# Check initial CWND on Linux
ip route show | grep initcwnd
# default via 10.0.0.1 dev eth0 initcwnd 10

# Set initial CWND to 15 segments
sudo ip route change default via 10.0.0.1 dev eth0 initcwnd 15

# Monitor CWND for active connections
ss -ti | grep -A5 'ESTAB'
# cwnd:10 ssthresh:65535 rtt:5.2/0.3

# Watch CWND growth in real time
ss -tin dst :443 | grep cwnd
# Repeat to see CWND increase during transfers

# Calculate minimum time to deliver N bytes
# IW=10 segments, segment=1460 bytes, RTT=50ms
# RTT 0: send 14,600 bytes (10 segments)
# RTT 1: send 29,200 bytes (20 segments)
# RTT 2: send 58,400 bytes (40 segments)
# Total after 3 RTTs (150ms): ~102 KB

# tcpdump to observe slow start
sudo tcpdump -i eth0 -n port 443 | \
  awk '/length [0-9]/{print $1, $NF}'
# Watch packet sizes increase over time


Related CDN concepts include:

  • Latency — The time delay between a request and the start of its response. For CDNs, it's …
  • TCP — Transmission Control Protocol. The reliable, ordered, connection-oriented transport protocol underneath HTTP/1.1 and HTTP/2. TCP's three-way …
  • Throughput — The actual amount of data transferred per unit of time. Unlike bandwidth (maximum capacity), throughput …
  • BBR (Bottleneck Bandwidth and RTT) — A congestion control algorithm developed by Google that models the network path to find optimal …
  • TCP Fast Open (TFO) — A TCP extension that allows data to be sent in the initial SYN packet for …