
HTTP/1.1 vs HTTP/2: Key Differences

Written by Claude


HTTP/2 (RFC 7540) fundamentally redesigns how HTTP semantics are transmitted over the wire. While maintaining full backward compatibility with HTTP/1.1 request/response semantics, it introduces a binary framing layer that solves critical performance bottlenecks.

Core Architectural Differences

Aspect                  HTTP/1.1                            HTTP/2
Protocol Format         Text-based                          Binary framing layer
Multiplexing            Sequential (head-of-line blocking)  Full request/response multiplexing
Connections             Multiple TCP connections needed     Single connection suffices
Header Compression      None                                HPACK compression
Server Push             Not supported                       Native server push
Stream Prioritization   Not supported                       Stream dependencies and weights

1. Binary Framing Layer

HTTP/2 splits communication into frames and streams:

+-----------------------------------------------+
|                 Length (24)                   |
+---------------+---------------+---------------+
|   Type (8)    |   Flags (8)   |
+-+-------------+---------------+-------------------------------+
|R|                 Stream Identifier (31)                      |
+=+=============================================================+
|                   Frame Payload (0...)                      ...
+---------------------------------------------------------------+
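The fixed 9-byte header above can be decoded in a few lines. A minimal sketch in Python (field offsets follow RFC 7540 section 4.1; the function name is illustrative):

```python
import struct

def parse_frame_header(data: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    # Length is a 24-bit big-endian integer; pad to 4 bytes for struct.
    length = struct.unpack("!I", b"\x00" + data[:3])[0]
    frame_type, flags = data[3], data[4]
    # Stream identifier: the high bit is the reserved R flag, masked off.
    stream_id = struct.unpack("!I", data[5:9])[0] & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A 16-byte HEADERS frame (type 0x1, END_HEADERS flag 0x4) on stream 1:
header = b"\x00\x00\x10" + b"\x01" + b"\x04" + b"\x00\x00\x00\x01"
print(parse_frame_header(header))  # (16, 1, 4, 1)
```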

Frame types:

  • HEADERS: Request/response headers
  • DATA: Payload body
  • SETTINGS: Connection configuration
  • WINDOW_UPDATE: Flow control
  • PRIORITY: Stream prioritization
  • RST_STREAM: Abnormal stream termination
  • PUSH_PROMISE: Server push notification
  • PING, GOAWAY: Connection management

HTTP/1.1 equivalent:

GET /resource HTTP/1.1\r\n
Host: example.com\r\n
\r\n

HTTP/2 equivalent (conceptual):

HEADERS frame (stream 1)
  :method: GET
  :path: /resource
  :authority: example.com
  :scheme: https

2. Multiplexing and Head-of-Line Blocking

HTTP/1.1 bottleneck:

  • Browsers open 6-8 TCP connections per domain
  • Requests on same connection serialize
  • One slow response blocks all subsequent requests

Connection 1: [Request A -------- Response A (slow) --------] [Request B]
Connection 2: [Request C --- Response C ---] [Request D --- Response D ---]

HTTP/2 solution:

  • Single TCP connection; many concurrent streams (bounded by SETTINGS_MAX_CONCURRENT_STREAMS, commonly ~100)
  • Interleaved frames eliminate application-level HOL blocking

Stream 1: [HEADERS] [DATA] ... [DATA] [DATA]
Stream 3:     [HEADERS] [DATA] [DATA] ...
Stream 5:           [HEADERS] [DATA] [DATA] [DATA]
All multiplexed on one connection
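The interleaving above can be sketched as a round-robin scheduler over per-stream frame queues. A toy model in Python (stream IDs and frame lists are illustrative; client-initiated streams use odd IDs):

```python
# Sketch: interleaving frames from several streams onto one connection.
def interleave(streams):
    """Round-robin one frame per stream per pass until all queues drain."""
    wire = []
    while any(streams.values()):
        for stream_id, queue in streams.items():
            if queue:
                wire.append((stream_id, queue.pop(0)))
    return wire

frames = {1: ["HEADERS", "DATA", "DATA"], 3: ["HEADERS", "DATA"], 5: ["HEADERS"]}
for frame in interleave(frames):
    print(frame)  # stream 1's frames never block streams 3 and 5
```

Real servers weight this scheduling by stream priority, but the key property is visible even here: no stream has to finish before another may start.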

Important caveat: TCP-level HOL blocking still exists. Packet loss blocks all streams until retransmission completes. HTTP/3 (QUIC) solves this.

3. Header Compression (HPACK)

HTTP/1.1 sends redundant headers on every request:

GET /api/users HTTP/1.1
Host: api.example.com
User-Agent: Mozilla/5.0 ...
Accept: application/json
Authorization: Bearer eyJhbGc...
Cookie: session=abc123; tracking=xyz...

GET /api/posts HTTP/1.1
Host: api.example.com         # Same as before
User-Agent: Mozilla/5.0 ...   # Same as before
Accept: application/json      # Same as before
Authorization: Bearer eyJhbGc... # Same as before
Cookie: session=abc123; ...   # Same as before

HPACK (RFC 7541) uses:

  1. Static table (61 common headers like :method: GET)
  2. Dynamic table (connection-specific header cache)
  3. Huffman encoding for literal values

First request:
  HEADERS frame:
    :method: GET (index 2 from static table)
    :path: /api/users (literal, added to dynamic table at index 62)
    authorization: Bearer ... (literal, added at index 63)

Second request:
  HEADERS frame:
    :method: GET (index 2)
    :path: /api/posts (literal, added at index 64)
    authorization: (index 63) # Reference only, no retransmission

Typical header compression: 70-90% size reduction.

Security note: general-purpose header compression enabled the CRIME attack (a compression-oracle attack against TLS/SPDY). HPACK's table-based design mitigates it, but avoid compressing secrets alongside attacker-controlled input on the same connection.

4. Server Push

Server preemptively sends resources before the client requests them:

Client → Server: GET /index.html

Server → Client:
  PUSH_PROMISE (stream 2): /style.css
  PUSH_PROMISE (stream 4): /script.js

  HEADERS + DATA (stream 1): index.html content
  HEADERS + DATA (stream 2): style.css content
  HEADERS + DATA (stream 4): script.js content

Use case: Eliminate round-trips for critical resources.

Gotchas:

  • Client can reject pushes (RST_STREAM)
  • Pushed resources must be cacheable
  • Over-pushing wastes bandwidth if client already cached the resource
  • Many CDNs/browsers disabled push due to poor ROI in practice

5. Stream Prioritization

Clients can assign dependencies and weights:

Stream 3 (CSS): weight=200, depends on stream 1 (HTML)
Stream 5 (JS): weight=100, depends on stream 1
Stream 7 (image): weight=50, depends on stream 3

Servers should allocate bandwidth proportionally. In practice, most servers ignore priorities or implement them poorly, and the HTTP/2 priority tree was deprecated in RFC 9113 in favor of the simpler Extensible Priorities scheme (RFC 9218).
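In the RFC 7540 model, sibling streams split their parent's bandwidth in proportion to their weights. A hypothetical arithmetic sketch (stream IDs and byte budget are illustrative):

```python
# Proportional split among sibling streams by weight.
def share(weights, total_bytes):
    """Divide a byte budget among streams in proportion to their weights."""
    total_weight = sum(weights.values())
    return {sid: total_bytes * w // total_weight for sid, w in weights.items()}

# CSS (stream 3, weight 200) and JS (stream 5, weight 100) both depend on
# the HTML stream, so they split its 30,000-byte budget 2:1:
print(share({3: 200, 5: 100}, 30_000))  # {3: 20000, 5: 10000}
```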

6. Flow Control

Per-stream and connection-level flow control via WINDOW_UPDATE frames:

Initial window: 65,535 bytes

Server sends 50,000 bytes on stream 1
Stream 1 window: 65,535 - 50,000 = 15,535
Connection window: 65,535 - 50,000 = 15,535

Client sends WINDOW_UPDATE(stream 1, 50,000)
Stream 1 window: 65,535

Client sends WINDOW_UPDATE(connection, 50,000)
Connection window: 65,535

Prevents fast sender from overwhelming slow receiver. HTTP/1.1 relies purely on TCP flow control.
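The accounting above is straightforward to model. A sender-side sketch (class and method names are hypothetical; real implementations also enforce SETTINGS_INITIAL_WINDOW_SIZE changes and maximum window limits):

```python
INITIAL_WINDOW = 65_535  # default per RFC 7540; adjustable via SETTINGS

class FlowControl:
    """Sender-side accounting for per-stream and connection windows."""

    def __init__(self):
        self.connection = INITIAL_WINDOW
        self.streams = {}

    def can_send(self, stream_id, nbytes):
        # DATA may only be sent if BOTH windows have room.
        window = self.streams.setdefault(stream_id, INITIAL_WINDOW)
        return nbytes <= window and nbytes <= self.connection

    def sent(self, stream_id, nbytes):
        self.streams[stream_id] -= nbytes
        self.connection -= nbytes

    def window_update(self, stream_id, increment):
        if stream_id == 0:  # stream 0 addresses the whole connection
            self.connection += increment
        else:
            self.streams[stream_id] += increment

fc = FlowControl()
fc.can_send(1, 50_000)       # True: both windows start at 65,535
fc.sent(1, 50_000)           # both windows drop to 15,535
fc.window_update(1, 50_000)  # stream 1 back to 65,535
fc.window_update(0, 50_000)  # connection back to 65,535
```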

7. Connection Management

HTTP/1.1:

Connection: keep-alive  # Reuse connection
Connection: close       # Close after response

HTTP/2:

  • Persistent by default
  • Single connection per origin
  • GOAWAY frame for graceful shutdown
  • Allows server to signal when to stop creating new streams

Performance Comparison

Typical scenarios:

Metric                     HTTP/1.1 (6 connections)     HTTP/2 (1 connection)
Page load (100 resources)  ~2.5s                        ~1.2s
Header overhead            500-800 bytes/request        50-200 bytes/request
TCP handshakes             6 (+ 6 TLS handshakes)       1 (+ 1 TLS handshake)
Latency sensitivity        High (round trips add up)    Lower (multiplexing)

When HTTP/1.1 wins:

  • Single large resource (no multiplexing benefit)
  • Packet loss environments (TCP HOL blocking worse with one connection)

Migration Considerations

Protocol negotiation:

TLS ALPN extension: h2, http/1.1
Server selects: h2

HTTP/2 requires TLS in browsers (though spec allows cleartext h2c).
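The ALPN offer can be made with the standard library alone. A client-side sketch using Python's ssl module (the negotiation itself happens during the TLS handshake with a real server, which this fragment does not perform):

```python
import ssl

# Build a client context that offers h2 first, http/1.1 as fallback.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # preference order sent in ClientHello

# After wrapping a socket and completing the handshake, the server's
# selection is available via:
#   conn.selected_alpn_protocol()  # "h2", "http/1.1", or None
```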

Backend compatibility:

  • HTTP/2 at edge, HTTP/1.1 to origin is common
  • Proxies translate frames ↔ text transparently
  • Application code sees identical semantics

Anti-patterns to avoid:

  • Domain sharding (defeats single connection benefit)
  • Resource concatenation (breaks caching granularity)
  • Image sprites (same reason)

Still relevant:

  • Minification
  • Compression (gzip/brotli)
  • CDN/caching strategies
  • Reduce request count

Summary

HTTP/2 eliminates HTTP/1.1's fundamental performance bottlenecks through:

  1. Binary framing enables efficient parsing and multiplexing
  2. Multiplexing removes head-of-line blocking at the application layer
  3. Header compression drastically reduces overhead
  4. Single connection reduces handshake latency

Key takeaway: HTTP/2 makes the web faster by optimizing the wire protocol while preserving HTTP semantics. For further improvements addressing TCP-level limitations, see HTTP/3 (QUIC).
