status
this chapter is in active development
expect live edits and rapid iteration (except for when i am really busy with other stuff) while this material is written.
tcp, udp, and quic each solve a different set of problems. the right choice depends on what your application needs, what infrastructure you control, and who is between you and the other end.
reliability. does every byte need to arrive? tcp or quic. occasional loss acceptable? udp.
ordering. tcp enforces total ordering across the connection. quic enforces ordering per-stream but allows independent streams. udp provides nothing.
latency sensitivity. is a stale retransmission worse than no data? real-time audio and gaming prefer loss over delay. file transfers prefer correctness.
connection setup cost. tcp+tls costs 2-3 round trips. quic costs 1 (or 0 with resumption). udp costs nothing.
multiplexing. multiple independent request-response pairs on one connection? http/2 on tcp suffers head-of-line blocking. http/3 on quic does not. a single udp socket can talk to multiple peers.
tcp is the default. use it when you need reliable ordered delivery, the other side speaks tcp (most of the internet), your infrastructure is built around tcp, or you are building internal services where sub-millisecond rtt makes handshake cost irrelevant.
tcp is mature, universally supported, and kernel-optimized. every os, every runtime, every proxy, every load balancer knows tcp. the operational cost of switching away needs to justify itself.
watch for: head-of-line blocking on multiplexed protocols (http/2), slow-start penalty on short-lived connections, TIME_WAIT accumulation on high-churn servers, and nagle interacting with delayed acks to add up to 40ms to small writes. well-understood problems with well-known mitigations. reasons to consider alternatives, not reasons to panic.
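the nagle + delayed-ack interaction in particular has a one-line mitigation. a minimal sketch in python (the function name is mine; host and port are placeholders):

```python
import socket

def connect_no_delay(host: str, port: int) -> socket.socket:
    """open a tcp connection with nagle disabled."""
    s = socket.create_connection((host, port))
    # TCP_NODELAY makes small writes go out immediately instead of
    # being buffered while waiting for the peer's (possibly delayed) ack
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

the cost is more small packets on the wire, which is usually the right trade for request-response protocols.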
quic makes sense for client-facing https (handshake savings and head-of-line blocking improvements), mobile clients (connection migration survives network switches), and lossy last-mile links (independent stream delivery).
quic does not make sense for datacenter east-west (microsecond rtt, handshake savings irrelevant, kernel bypass outperforms userspace quic), udp-hostile networks (enterprise firewalls, some isp networks), or when debugging maturity matters (tcpdump decodes tcp in real-time; quic requires tls key logging and wireshark).
for most web applications the practical answer: let the cdn terminate quic at the edge. cloudflare, aws cloudfront, fastly, and google cloud all handle it. your origin speaks http/2 over tcp. client-facing latency benefits without touching backend infrastructure.
raw udp fits real-time media (voip, video conferencing, live streaming), gaming (position updates need minimal latency, the engine interpolates lost packets), dns (two-packet exchange, tcp's handshake would triple the latency), and multicast (tcp is point-to-point; market data feeds, service discovery, live video to many receivers need udp).
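to make the dns point concrete, here is a sketch of the client side of that two-packet exchange: build one datagram, send it, read one back. the wire format follows rfc 1035; the function name and default transaction id are mine.

```python
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """build a minimal dns query datagram for an A record."""
    # 12-byte header: id, flags (recursion desired), 1 question,
    # 0 answer / authority / additional records
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # qname: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split("."))
    # qtype=1 (A), qclass=1 (IN)
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

# the whole exchange is then two datagrams on one unconnected socket:
#   sock.sendto(build_dns_query("example.com"), (resolver_ip, 53))
#   response, _ = sock.recvfrom(512)
```

no handshake, no teardown. over tcp the same lookup would need a syn/syn-ack/ack before the query and a fin exchange after it.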
if your udp application sends sustained traffic, implement congestion control. an app blasting udp at line rate is hostile to the network and will get rate-limited. RFC 8085 has the requirements.
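rfc 8085's requirements amount to real loss-responsive congestion control; the sketch below is only the crudest piece of that, a token bucket that caps the send rate so the socket cannot blast at line speed. class name and parameters are illustrative.

```python
import socket
import time

class PacedUdpSender:
    """udp sender paced by a token bucket. a stand-in for, not a
    substitute for, the loss-responsive congestion control rfc 8085
    expects from sustained udp traffic."""

    def __init__(self, rate_bytes_per_sec: int, burst: int):
        self.rate = rate_bytes_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send(self, data: bytes, addr) -> None:
        # refill tokens for the time elapsed since the last send
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if len(data) > self.tokens:
            # not enough budget: block until the bucket refills
            time.sleep((len(data) - self.tokens) / self.rate)
            self.tokens = float(len(data))
            self.last = time.monotonic()
        self.tokens -= len(data)
        self.sock.sendto(data, addr)
```

a real implementation would also shrink the rate when receiver feedback reports loss; this one only enforces a ceiling.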
some systems use multiple transports on the same connection:
webrtc uses tcp/tls for signaling (SDP exchange) and udp for media (SRTP). signaling must be reliable; media must be low-latency.
quic with unreliable datagrams (RFC 9221) extends quic to support unreliable datagrams alongside reliable streams. one connection, both modes. useful for gaming (reliable chat + unreliable position updates) or media (reliable control + unreliable frames).
tcp for bulk + udp for time-sensitive. some financial systems use tcp for order management and udp multicast for market data.
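a hypothetical sketch of that split, with a tcp control channel and a udp data channel side by side (class and method names are mine; addresses are placeholders):

```python
import socket

class DualTransport:
    """reliable tcp channel for control messages, lossy udp channel
    for time-sensitive payloads -- the signaling-vs-media split."""

    def __init__(self, host: str, tcp_port: int, udp_port: int):
        self.control = socket.create_connection((host, tcp_port))
        self.data = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.data_addr = (host, udp_port)

    def send_control(self, msg: bytes) -> None:
        # length-prefix each message so the receiver can frame it
        # on the tcp byte stream
        self.control.sendall(len(msg).to_bytes(4, "big") + msg)

    def send_data(self, payload: bytes) -> None:
        # fire and forget: no retransmit, no ordering
        self.data.sendto(payload, self.data_addr)
```

the control channel pays tcp's latency cost once per message but never loses one; the data channel sends stale-tolerant payloads with no queueing behind retransmissions.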
the protocol you choose must survive the path:
| concern | tcp | quic | udp |
|---|---|---|---|
| firewall traversal | universal | sometimes blocked (udp/443) | varies by port |
| nat keepalive | minutes to hours | needs keepalive (connection ids help) | 30s typical timeout |
| load balancer support | universal | connection-id routing needed | hash-based or none |
| middlebox interference | well-understood | encrypted headers resist inspection | minimal headers to inspect |
| monitoring/debugging | mature tooling | needs tls key export | basic pcap |
if you run your own load balancers, verify quic support before deploying. if your users are behind corporate firewalls, measure how often quic connections fall back to tcp. if more than about 30% fall back, the optimization is not worth the complexity.
default to tcp. it works everywhere, everyone understands it, and the tooling is decades ahead.
use quic for client-facing https when your cdn or edge supports it. the handshake savings and head-of-line blocking improvements are real on mobile and high-latency links.
use udp for real-time media, gaming, and multicast. accept the operational cost of custom reliability and congestion control, or use a framework (webrtc, quic datagrams) that handles it.
do not use raw udp to avoid learning tcp. you will rebuild tcp badly, miss edge cases, and create something harder to debug than what you started with.