Foundations

Content Delivery Networks (CDN)

● Beginner · ⏱ 15 min read

When a user in Sydney requests an image stored on a server in Virginia, physics gets in the way. That round-trip takes roughly 200–300ms before a single byte of content is transferred. A Content Delivery Network solves this by placing copies of your content on servers located close to every major population center on earth. Instead of 200ms to Virginia, the request hits a node 5ms away in Sydney. CDNs are one of the most impactful and widely deployed optimizations in web infrastructure — and a near-universal component in any system that serves users globally.

What Is a CDN?

A Content Delivery Network (CDN) is a globally distributed network of servers — called edge nodes or Points of Presence (PoPs) — that cache and serve content from locations geographically close to end users. The CDN acts as a distributed caching layer that sits between your origin server and your users.

Without a CDN, every user — regardless of location — fetches content from your origin server. With a CDN, content is served from the nearest edge node. Users get faster responses, and your origin server handles a fraction of the traffic it would otherwise.

💡
Edge Node vs Origin

The origin is your primary server — the authoritative source of truth for your content. Edge nodes are the CDN’s distributed servers that cache content from the origin and serve it to nearby users. The origin only receives traffic when an edge node doesn’t have a cached copy.

How CDNs Work

When a user requests a URL served by a CDN, the request flow works like this:

  1. The user’s DNS lookup returns the IP address of the nearest edge node (CDN providers use Anycast routing or GeoDNS to route users to the closest PoP).
  2. The request hits the edge node. If the content is cached and not expired, the edge node returns it immediately — the origin is never contacted.
  3. If the content is not cached (cache miss), the edge node fetches it from the origin server, caches a copy, and returns it to the user. Subsequent requests for the same content from users near that edge node are served from cache.

The key insight is that popular content gets cached after the first request. Every subsequent user in that region is served from the edge — not the origin. A single cache miss pays the latency cost once; all future requests pay the low edge-node cost.
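
The request flow above can be sketched as a minimal pull-through cache. This is an illustrative model, not any particular CDN's implementation; the `Origin` and `EdgeNode` classes and the TTL handling are assumptions made for the sketch:

```python
import time

class Origin:
    """Stands in for your origin server: the authoritative content source."""
    def __init__(self, content):
        self.content = content
        self.hits = 0  # how many requests actually reached the origin

    def fetch(self, url):
        self.hits += 1
        return self.content[url]

class EdgeNode:
    """A single PoP implementing lazy (pull) caching with TTL expiry."""
    def __init__(self, origin, ttl_seconds=60):
        self.origin = origin
        self.ttl = ttl_seconds
        self.cache = {}  # url -> (body, expiry_timestamp)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(url)
        if entry and entry[1] > now:            # fresh cache hit:
            return entry[0], "HIT"              #   origin is never contacted
        body = self.origin.fetch(url)           # cache miss: pull from origin,
        self.cache[url] = (body, now + self.ttl)  # store a copy with a TTL
        return body, "MISS"

origin = Origin({"/logo.png": b"..."})
edge = EdgeNode(origin, ttl_seconds=60)
edge.get("/logo.png", now=0)    # first request in the region: MISS
edge.get("/logo.png", now=10)   # every later request within the TTL: HIT
```

Only the first request per region pays the origin round trip; after that, `origin.hits` stays flat no matter how many nearby users request the object.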

Anycast Routing

CDN providers use Anycast: the same IP address is announced from multiple PoP locations around the world. The internet’s routing infrastructure automatically delivers traffic to the topologically nearest node advertising that IP. No application-level routing logic is required — proximity routing is handled at the network layer.

Push vs Pull CDNs

There are two fundamental models for how content gets onto CDN edge nodes: push and pull.

Pull CDN

The CDN fetches content from the origin on demand — when a user requests something that isn’t cached. The first request for new content is a cache miss (slightly slower), but all subsequent requests are cache hits. Edge nodes populate themselves lazily.

Best for: Most web applications. Pull CDNs are the dominant model because they are simple to operate — you just point the CDN at your origin and configure TTLs.

Push CDN

You explicitly upload content to CDN edge nodes in advance. The CDN doesn’t fetch from origin — you push files to it. Changes require re-pushing updated files.

Best for: Large static assets that change infrequently and must be globally available from the first request — software downloads, video files, game assets.

| Model | Cache population | First-request latency | Operational complexity | Best for |
| --- | --- | --- | --- | --- |
| Pull | Lazy (on first miss) | Origin latency on miss | Low | Web apps, APIs, images |
| Push | Explicit upload | Always fast | Higher | Large infrequent files, videos |

CDN Caching

CDNs cache content at edge nodes based on HTTP caching headers from the origin. The most important headers are:

Cache-Control

The primary header for controlling CDN and browser caching behavior:
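
A few representative values (illustrative, not a complete list of directives):

```http
Cache-Control: public, max-age=31536000, immutable
```

This is the typical header for a fingerprinted static asset: any cache may store it, for a year, and `immutable` tells browsers not to revalidate it even on reload. For an HTML entry point you would instead use `Cache-Control: no-cache`, which allows caching but forces revalidation with the origin before each use. The `s-maxage` directive applies only to shared caches like a CDN, letting you give the CDN a different TTL than the browser:

```http
Cache-Control: public, max-age=60, s-maxage=300
```

Here browsers cache the response for 60 seconds while the CDN keeps it for 5 minutes.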

Cache Invalidation at the Edge

When you deploy a new version of a file, CDN edge nodes may still hold the old cached version for the remainder of its TTL. You have two options:

  1. Purge the cache: use the CDN provider’s invalidation API to evict the stale object from edge nodes. Purges typically take seconds to minutes to propagate, and some providers rate-limit or charge for them.
  2. Version the URL: deploy the new file under a new URL (for example, with a content hash in the filename), so the old cached copy is simply never requested again.

⚠️
Long TTLs + URL Versioning

The standard pattern for static assets is: set a very long TTL (1 year) on the CDN and browser, and use cache-busting URLs with content hashes. You get maximum cache efficiency and the ability to instantly deploy changes. Reserve short TTLs for content where you can’t control the URL (like HTML pages).
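
The cache-busting half of this pattern is usually applied at build time by embedding a content hash in the filename. A minimal sketch (this `fingerprint` helper is hypothetical, standing in for what asset bundlers do automatically):

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return a versioned filename like app.3f2a9c1b.js derived from file content.

    Any change to the file changes the hash and therefore the URL, so
    year-long CDN and browser caches can never serve a stale copy.
    """
    p = Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}.{digest}{p.suffix}"
```

The HTML page then references the hashed filename; deploying a change means the page points at a brand-new URL while the old one ages out harmlessly.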

Cache Keys

By default, CDNs use the URL (and sometimes the Host header) as the cache key. Two users requesting the same URL get the same cached response. This is correct for public, non-personalized content.

For dynamic content that varies by user properties (language, device type, A/B test group), you can configure the CDN to vary the cache key by additional dimensions — usually by including specific request headers or query parameters. Be careful: a cache key that is too granular defeats the purpose of caching (many unique keys mean a low hit rate).
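
As a sketch of the idea (this `cache_key` function and its `vary_on` rules are hypothetical, not a real CDN API):

```python
def cache_key(url: str, headers: dict, vary_on: tuple = ()) -> str:
    """Build a cache key from the URL plus a chosen set of request headers.

    Keep vary_on deliberately small: every extra dimension multiplies the
    number of distinct cache entries and lowers the hit rate.
    """
    parts = [url]
    for name in vary_on:
        # Normalize so "en" and "EN" share one cache entry.
        parts.append(f"{name.lower()}={headers.get(name, '').lower()}")
    return "|".join(parts)

# Two users requesting the same URL in the same language share one entry:
k1 = cache_key("/home", {"Accept-Language": "en"}, vary_on=("Accept-Language",))
k2 = cache_key("/home", {"Accept-Language": "EN"}, vary_on=("Accept-Language",))
```

Varying on one coarse header (language) splits the cache into a handful of entries; varying on something high-cardinality like a full `Cookie` header would give nearly every user a unique key and a near-zero hit rate.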

Benefits

Reduced Latency

Serving content from an edge node 5ms away vs an origin 200ms away is a 40× improvement. For users on mobile connections with higher round-trip times, the gain is even larger. Perceived page load time, Core Web Vitals scores, and conversion rates all improve.

Reduced Origin Load

A CDN with a 95% hit rate means your origin handles only 5% of the raw request volume. Traffic spikes that would overwhelm an unprotected origin are absorbed by the CDN. You can run a smaller (cheaper) origin fleet.

High Availability

CDN edge nodes continue serving cached content even if the origin is temporarily unavailable (within the cache TTL). Some CDNs support stale-while-revalidate and stale-if-error — serving cached content even after TTL expiry when the origin returns an error. This makes CDN-served content resilient to origin outages.
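
These behaviors are controlled by the RFC 5861 Cache-Control extensions; the TTL values here are illustrative:

```http
Cache-Control: max-age=600, stale-while-revalidate=60, stale-if-error=86400
```

For 60 seconds after the 10-minute TTL expires, the edge may serve the stale copy while refetching in the background; and for up to a day, it may keep serving the stale copy if the origin responds with an error.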

DDoS Protection

A CDN’s global network can absorb volumetric DDoS attacks by distributing the traffic across hundreds of PoPs. The attack traffic is spread thin and doesn’t reach your origin. Most CDN providers (Cloudflare, Akamai, AWS Shield) include DDoS mitigation as part of their service.

Security (TLS Termination)

CDNs terminate TLS at the edge, close to the user. TLS handshake latency is sensitive to round-trip time — terminating TLS at an edge node 5ms away is far faster than a full handshake to an origin 200ms away. The CDN handles certificate management and renewals. Your origin only needs to trust the CDN’s IP ranges.

Drawbacks

Cost

CDN providers charge for bandwidth, requests, and sometimes storage. For high-traffic services, CDN costs can be significant. However, these costs are usually offset by the savings in origin bandwidth and compute.

Complexity and Debugging

An extra caching layer means more places to check when content appears stale or incorrect. X-Cache: HIT / MISS headers help debug whether a response came from cache or origin. Aggressive caching can cause users to see outdated content if invalidation is not handled carefully.
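
When diagnosing stale content, a first step is inspecting the cache-status response headers. A small helper sketch (header names vary by provider — `X-Cache` and `CF-Cache-Status` are common examples, not an exhaustive list):

```python
def cache_status(response_headers: dict) -> str:
    """Classify whether a response came from CDN cache or the origin.

    Checks the common provider-specific status headers first, then falls
    back to Age: a nonzero Age means the object sat in a cache.
    """
    h = {k.lower(): v for k, v in response_headers.items()}
    value = h.get("x-cache", h.get("cf-cache-status", "")).lower()
    if "hit" in value:
        return "HIT"
    if "miss" in value:
        return "MISS"
    if int(h.get("age", "0")) > 0:
        return "HIT"
    return "UNKNOWN"
```

Comparing the status across repeated requests (first MISS, then HIT) confirms the CDN is actually caching the object; persistent MISSes usually point at a header or cache-key misconfiguration.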

Dynamic Content Limitations

CDNs are most effective for cacheable content. Highly personalized, authenticated, or rapidly changing responses have a low cache hit rate. For these, the CDN is essentially a pass-through proxy — you pay CDN costs but get little caching benefit. Some CDNs offer edge compute (Cloudflare Workers, Lambda@Edge) to run logic at the edge, which can help with partially-dynamic content.

Geographic Blind Spots

Most CDN providers have excellent coverage in North America, Europe, and East Asia. Coverage in parts of Africa, South America, and Southeast Asia varies. If your users are concentrated in an underserved region, a CDN may not provide the latency improvement you expect — benchmark with your target audience.

| Benefit | Mechanism | Impact |
| --- | --- | --- |
| Lower latency | Edge nodes near users | 40× latency reduction possible |
| Origin offload | High cache hit rate | 95%+ of traffic never hits origin |
| High availability | Stale content served during origin failure | Resilience to origin outages |
| DDoS absorption | Attack traffic distributed across PoPs | Origin shielded from volumetric attacks |
| Faster TLS | Handshake at nearby edge | 100ms+ saved on first connection |

CDN in System Design

In a typical system design interview, CDNs are relevant any time you need to serve content to a geographically distributed user base. Here is how to apply them confidently:

What to Put on a CDN

Static assets (JS and CSS bundles, images, fonts), large downloadable files (video, software, game assets), and any public response that is identical for every user.

What Not to Put on a CDN

Highly personalized or authenticated responses, rapidly changing data, and anything where serving a stale copy is a correctness problem rather than a minor inconvenience.

CDN Architecture Pattern

The standard architecture for a globally distributed web service:

  1. Static assets are built with content-hash filenames and uploaded to CDN (S3 + CloudFront, or similar). TTL = 1 year.
  2. The HTML entry point (e.g. index.html) is served from CDN with a short TTL or no-cache (it must always be fresh to pick up new asset hash names).
  3. API requests bypass the CDN and hit origin servers (load balanced, auto-scaled). Or: public API endpoints use CDN with appropriate TTLs.
  4. Origin servers sit behind the CDN with IP allowlisting — they only accept requests from CDN IP ranges, making direct-origin attacks much harder.
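
Step 4 can be enforced with a simple allowlist check at the origin. The IP ranges below are documentation placeholders — real providers publish their current edge ranges, which you must refresh periodically:

```python
import ipaddress

# Placeholder ranges (TEST-NET blocks); substitute your CDN provider's
# published edge IP ranges and keep them up to date.
CDN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_from_cdn(client_ip: str) -> bool:
    """Reject direct-to-origin traffic that did not come through the CDN."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in CDN_RANGES)
```

In practice this check usually lives in a firewall rule or security group rather than application code, and is often paired with a shared-secret header the CDN attaches to origin requests.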
💡
In System Design Interviews

Mention CDNs early when designing any system with a global audience or high read traffic for static content. State what you’re putting on the CDN and why (latency, origin offload). Be ready to discuss cache invalidation strategy (URL versioning for static assets, TTL + manual purge for dynamic content), and what happens if the CDN is unavailable (fallback to origin, stale content policy). Interviewers appreciate specificity: “a pull CDN with s-maxage headers” is far more impressive than “we use a CDN.”