Chrome has consistently pushed the boundaries of browser performance. One of its most impactful areas of investment is the network stack — specifically how it fetches resources from servers. The journey from HTTP/1.1 to SPDY to HTTP/2 represents a fundamental rethinking of how browsers and servers communicate.

The problem with HTTP/1.1

When you load a web page, your browser typically needs to fetch dozens of resources: HTML, CSS, JavaScript, images, fonts, and more. Under HTTP/1.1, each of those requests follows a strict pattern:
  1. Open a TCP connection (three-way handshake)
  2. Send an HTTP request
  3. Wait for the response
  4. Optionally reuse the connection for the next request
The critical constraint is head-of-line blocking: even with persistent connections, a single slow response blocks all subsequent requests on that connection. Browsers work around this by opening multiple parallel connections — typically 6 per origin — but this wastes resources and adds overhead.
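To make head-of-line blocking concrete, here is a small, self-contained sketch (a toy timing model, not real networking — the resource names and timings are invented for illustration):

```python
# Toy model of an HTTP/1.1 persistent connection: responses come back
# strictly in the order the requests were sent, so one slow response
# delays everything queued behind it (head-of-line blocking).
def fetch_sequential(requests, service_time):
    """Return (name, completion_time) pairs, served strictly in order."""
    clock = 0.0
    completions = []
    for name in requests:
        clock += service_time[name]   # must finish before the next starts
        completions.append((name, clock))
    return completions

# One slow resource (a 900 ms image) delays the fast script behind it:
# app.js takes 50 ms to serve but doesn't complete until ~1 s in.
times = {"style.css": 0.05, "big-image.jpg": 0.9, "app.js": 0.05}
print(fetch_sequential(["style.css", "big-image.jpg", "app.js"], times))
```

Opening six parallel connections spreads this cost out, which is exactly the workaround the paragraph above describes — at the price of extra handshakes and per-connection overhead.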
HTTP/1.1 pipelining let clients send several requests without waiting for each response, but responses still had to arrive in order — so head-of-line blocking remained — and poor server support plus proxy compatibility issues meant it was never widely adopted.

SPDY: Chrome’s secret experiment

Google’s engineering team developed SPDY (pronounced “speedy”) as an experimental protocol to address HTTP/1.1’s limitations. Chrome shipped SPDY support as early as 2010, quietly testing it with Google’s own servers. SPDY introduced several ideas that would later become HTTP/2:
  • Multiplexing: Multiple requests could be sent over a single TCP connection simultaneously, eliminating head-of-line blocking at the HTTP layer.
  • Header compression: HTTP headers are often repetitive across requests. SPDY compressed them, reducing overhead.
  • Stream prioritization: The browser could signal which resources were more important, letting the server prioritize responses.
  • Server push: Servers could proactively send resources the browser hadn’t yet requested.
The results were compelling enough that SPDY’s core ideas were standardized as HTTP/2, published as RFC 7540 in 2015.

HTTP/2: The new standard

HTTP/2 takes everything SPDY proved out and refines it into an open standard. When you visit a modern HTTPS site, there’s a good chance you’re already using HTTP/2 — and benefiting from it without any code changes.
For contrast, here is how the two protocols compare on the dimensions HTTP/2 changes:

Feature                    | HTTP/1.1                                          | HTTP/2
Connections per origin     | Up to 6 parallel TCP connections                  | One multiplexed TCP connection
Request ordering           | Sequential per connection (head-of-line blocking) | Interleaved streams, responses in any order
Headers                    | Plain text, repeated on every request             | HPACK-compressed
Server-initiated responses | Not supported                                     | Server push
Framing                    | Text-based                                        | Binary
TLS                        | Optional                                          | Required in practice by all major browsers

Multiplexing

Under HTTP/2, a single TCP connection carries multiple streams simultaneously. Each stream is an independent, bidirectional sequence of frames. The browser can send 20 requests at once over one connection and receive responses in any order — the stream IDs keep everything sorted out. This eliminates the need for HTTP/1.1 tricks like domain sharding (splitting assets across multiple domains to open more parallel connections). With HTTP/2, domain sharding actually hurts performance by forcing multiple connections where one suffices.
If your site still uses domain sharding (e.g., static1.example.com, static2.example.com) for performance, you can consolidate those assets onto a single origin when serving over HTTP/2.
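The core mechanism — frames tagged with stream IDs, reassembled independently per stream — can be sketched in a few lines (a simplified illustration, not the real HTTP/2 wire format; the payloads here are invented):

```python
# Simplified sketch of HTTP/2-style multiplexing: frames from several
# streams are interleaved on one connection, and the receiver uses the
# stream ID on each frame to reassemble each response independently.
def demultiplex(frames):
    """frames: iterable of (stream_id, payload) tuples in arrival order."""
    streams = {}
    for stream_id, payload in frames:
        streams.setdefault(stream_id, []).append(payload)
    return {sid: b"".join(chunks) for sid, chunks in streams.items()}

# Three responses interleaved on one connection. The arrival order is
# arbitrary; the stream IDs keep them separate.
wire = [
    (1, b"<!doctype h"), (3, b"body{marg"), (5, b"console.l"),
    (3, b"in:0}"), (1, b"tml>"), (5, b"og(1)"),
]
responses = demultiplex(wire)
```

Because no stream has to wait for another to finish, a slow response on stream 3 never delays streams 1 and 5.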

Header compression

Every HTTP request carries headers: Cookie, User-Agent, Accept-Encoding, and many others. Over HTTP/1.1, these headers are sent as plain text on every request — often adding 500–800 bytes of overhead per request. HTTP/2 uses HPACK compression. HPACK maintains a shared header table between client and server. After the first request, subsequent requests only transmit headers that have changed, referencing stable headers by index. For a page with 80 requests, this can save tens of kilobytes of redundant data.
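The indexing idea behind HPACK can be sketched as follows (a greatly simplified model — real HPACK also uses a predefined static table, Huffman coding, and bounded table eviction, none of which appear here):

```python
# Greatly simplified sketch of HPACK's core idea: both endpoints keep an
# indexed table of headers they have seen, so a repeated header is sent
# as a small integer index instead of the full name/value text.
class HeaderTable:
    def __init__(self):
        self.table = []   # append-only in this sketch (real HPACK evicts)
        self.index = {}   # (name, value) -> position in the table

    def encode(self, headers):
        """Return a list of either int (table index) or (name, value) literal."""
        out = []
        for pair in headers:
            if pair in self.index:
                out.append(self.index[pair])      # already known: one index
            else:
                self.index[pair] = len(self.table)
                self.table.append(pair)
                out.append(pair)                  # first occurrence: literal
        return out

enc = HeaderTable()
first  = enc.encode([("user-agent", "Chrome"), (":path", "/index.html")])
second = enc.encode([("user-agent", "Chrome"), (":path", "/app.js")])
# On the second request, the repeated user-agent is just an index.
```

Multiply that saving across every cookie-laden request on a large page and the "tens of kilobytes" figure above follows directly.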

Server push

With server push, the server can send resources to your browser before it even knows it needs them. For example, when a browser requests an HTML page, the server can immediately push the associated CSS and JavaScript, rather than waiting for the browser to parse the HTML and issue those requests.
Server push has nuanced trade-offs. Pushed resources that the browser already has cached are wasteful. Modern best practice often favors 103 Early Hints as a lighter-weight alternative for hinting critical resources.
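For comparison, an Early Hints exchange looks roughly like this on the wire (an illustrative fragment; the resource paths are placeholders). The server sends an interim 103 response naming critical resources, then the final response follows:

```http
HTTP/1.1 103 Early Hints
Link: </style.css>; rel=preload; as=style
Link: </app.js>; rel=preload; as=script

HTTP/1.1 200 OK
Content-Type: text/html
```

Unlike push, the browser decides whether to fetch the hinted resources, so cached assets are simply not re-requested.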

Binary framing

HTTP/1.1 is a text-based protocol — requests and responses are human-readable strings. HTTP/2 uses a binary framing layer, which is more compact and far easier to parse correctly. All communication is split into small frames (HEADERS, DATA, SETTINGS, etc.), each tagged with the stream it belongs to.
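The fixed 9-byte frame header is simple enough to parse directly (a minimal sketch covering only the header, not frame payloads or the full type registry):

```python
import struct

# Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1):
# a 24-bit payload length, an 8-bit type, an 8-bit flags field, and one
# reserved bit followed by a 31-bit stream identifier.
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS"}

def parse_frame_header(data):
    length_hi, length_lo, ftype, flags, stream = struct.unpack(">BHBBI", data[:9])
    length = (length_hi << 16) | length_lo
    stream_id = stream & 0x7FFFFFFF   # clear the reserved high bit
    return length, FRAME_TYPES.get(ftype, hex(ftype)), flags, stream_id

# A DATA frame: 5-byte payload, END_STREAM flag (0x1), stream 3.
header = bytes([0x00, 0x00, 0x05, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03])
print(parse_frame_header(header))   # → (5, 'DATA', 1, 3)
```

Fixed-width binary fields like these are why HTTP/2 parsers avoid the ambiguity (and the long history of request-smuggling bugs) that comes with parsing free-form text.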

What this means for you as a developer

You get most of HTTP/2’s benefits automatically as long as your server and TLS setup support it. But there are a few things worth knowing:
  • Stop domain sharding. Under HTTP/1.1, spreading assets across multiple hostnames opened more parallel connections. Under HTTP/2, this backfires — use a single origin instead.
  • Concatenation is less critical. Bundling dozens of small JS files into one large file was an HTTP/1.1 optimization. With multiplexing, many smaller files can be fetched just as efficiently, and smaller files improve cache granularity.
  • Prioritize HTTPS. HTTP/2 is technically possible over plain text, but every major browser only implements it over TLS. Enabling HTTPS is a prerequisite.
  • Check your server. Nginx, Apache, Caddy, and most CDNs support HTTP/2 — but you may need to enable it explicitly.
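As one concrete example, enabling HTTP/2 in nginx is a one-line change (a sketch assuming nginx 1.25 or later; the server name and certificate paths are placeholders, and older versions use `listen 443 ssl http2;` instead):

```nginx
server {
    listen 443 ssl;
    http2 on;                         # pre-1.25: "listen 443 ssl http2;"
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.pem;
    ssl_certificate_key /etc/ssl/example.com.key;
}
```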
Chrome’s quiet experiments with SPDY were an early signal that the web’s transport layer was due for a rethink. The result — HTTP/2, and its successor HTTP/3 (which moves to UDP via QUIC) — means modern browsers can load pages dramatically faster than was possible just a decade ago.