Foundations

Proxy (Forward & Reverse)

● Beginner · ⏱ 12 min read · network

The word “proxy” means acting on behalf of someone else. In networking, a proxy server intercepts traffic between two parties — either on behalf of the client (forward proxy) or on behalf of the server (reverse proxy). These two concepts are often confused, but they solve fundamentally different problems. Understanding them clearly is essential for system design: reverse proxies appear in virtually every production architecture.

What Is a Proxy?

A proxy server is an intermediary that sits between a client and a server, forwarding requests and responses between them. Rather than the client communicating directly with the destination server, all traffic passes through the proxy.

The proxy’s position and purpose determine which type it is. A forward proxy sits in front of clients and acts on their behalf. A reverse proxy sits in front of servers and acts on their behalf. The destination server may or may not know a proxy is involved, depending on the type.

💡
Proxy vs Gateway

The terms are sometimes used interchangeably, but there is a distinction. A proxy operates at the application layer (Layer 7) and understands the protocol (HTTP, SMTP). A gateway translates between protocols or network segments. In practice, most “proxies” in system design discussions are application-layer reverse proxies.

Forward Proxy

A forward proxy acts on behalf of clients. Clients are configured (or forced) to route their requests through the proxy. The destination server receives requests originating from the proxy’s IP address, not the client’s. From the server’s perspective, it is talking to the proxy.

How It Works

  1. The client sends a request to the forward proxy (e.g., GET http://example.com/).
  2. The proxy evaluates the request against its policy (allow/deny, caching, logging).
  3. If permitted, the proxy forwards the request to example.com using its own IP address.
  4. The response travels back through the proxy to the client.

The origin server sees the proxy’s IP, not the real client’s. This is the basis of the forward proxy’s anonymity and access-control properties.
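The steps above can be sketched in a few lines. This is a minimal illustration of the policy step only, not a working network proxy; the blocklist, client IP, and log format are hypothetical.

```python
# Sketch of a forward proxy's decision step: log the request, apply
# allow/deny policy, then (conceptually) forward using the proxy's own IP.
from urllib.parse import urlparse

BLOCKED_HOSTS = {"ads.example.net", "tracker.example.org"}  # hypothetical policy

def evaluate_request(client_ip: str, url: str) -> str:
    """Return 'FORWARD' if policy allows the request, else 'DENY'."""
    host = urlparse(url).hostname or ""
    print(f"[proxy log] {client_ip} -> {url}")  # every outbound request is logged
    if host in BLOCKED_HOSTS:
        return "DENY"
    # A real proxy would now open its own connection to `host`,
    # so the origin server sees the proxy's IP, not client_ip.
    return "FORWARD"

print(evaluate_request("10.0.0.7", "http://example.com/"))       # FORWARD
print(evaluate_request("10.0.0.7", "http://ads.example.net/x"))  # DENY
```

The same chokepoint that enforces the blocklist is also where caching and logging naturally live, since every request passes through it.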

Use Cases

Content Filtering & Access Control

Corporate networks route employee internet traffic through a forward proxy that blocks access to disallowed sites, logs all requests, and enforces security policies. The proxy is the chokepoint through which all outbound traffic must pass.

Anonymity & Privacy

A forward proxy hides the client’s real IP address from destination servers. VPNs and tools like Tor use proxy-like mechanisms to anonymize traffic. The origin server cannot distinguish between requests from different clients behind the same proxy.

Caching

Shared forward proxies (common in university networks and ISPs) cache popular content. When many clients request the same resource, the proxy serves it from its cache rather than fetching it from the origin each time. This reduces bandwidth consumption and improves response times.

Geo-Restriction Bypass

A forward proxy located in a different country allows clients to access content that is geo-restricted in their own region. The destination server sees the proxy’s country, not the client’s.

Client Must Be Configured

Forward proxies require client cooperation — either explicit browser/OS proxy settings, or a network-level transparent proxy that intercepts traffic without client configuration. In corporate environments, transparent proxies are common so policies apply even to uncooperative clients.

Reverse Proxy

A reverse proxy acts on behalf of servers. Clients send requests to what they believe is the origin server, but in reality they are talking to the proxy. The proxy then forwards requests to one or more backend servers. Clients have no visibility into the backend topology — they see only the reverse proxy’s address.

How It Works

  1. The client resolves api.example.com via DNS — the DNS record points to the reverse proxy.
  2. The client sends an HTTP request to the reverse proxy.
  3. The reverse proxy selects a backend server (using a load balancing algorithm), forwards the request, and waits for the response.
  4. The response travels back through the reverse proxy to the client.

The client never directly contacts a backend server. All it knows is the reverse proxy’s address. Backend servers can be added, removed, or replaced without clients being aware of any change.
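The backend-selection step (step 3 above) can be sketched as a simple round-robin rotation. The backend addresses are illustrative, and a real proxy would also forward the request and relay the response.

```python
# Round-robin backend selection, the simplest load balancing algorithm.
# Clients only ever see the proxy's address, never these backends.
from itertools import cycle

class ReverseProxy:
    def __init__(self, backends):
        self._pool = cycle(backends)  # endless rotation over the pool

    def select_backend(self):
        """Pick the next backend in rotation for the incoming request."""
        return next(self._pool)

proxy = ReverseProxy(["10.0.1.1:8080", "10.0.1.2:8080", "10.0.1.3:8080"])
print([proxy.select_backend() for _ in range(4)])
# ['10.0.1.1:8080', '10.0.1.2:8080', '10.0.1.3:8080', '10.0.1.1:8080']
```

Because the pool is internal to the proxy, backends can be added or removed by changing this list alone; clients are unaffected.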

Use Cases

Load Balancing

The most common use case. The reverse proxy distributes incoming requests across a pool of backend servers using algorithms like round-robin, least connections, or IP hash. This is covered in detail in the Load Balancers guide.

SSL/TLS Termination

The reverse proxy handles TLS encryption and decryption at the edge. Backend servers receive plain HTTP requests and never touch TLS — they don’t need certificates and spend no CPU cycles on cryptographic operations. The proxy maintains a single certificate (or a small set of wildcard/SAN certificates) for all domains it serves.

Caching

A reverse proxy can cache responses from backend servers and serve them directly for subsequent identical requests. This dramatically reduces backend load for read-heavy workloads. nginx and Varnish are frequently deployed purely as caching reverse proxies.
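The core of such a cache is a TTL-bounded store keyed by the request. The sketch below is a toy version of that idea, not how nginx or Varnish are actually implemented; keys and TTLs are illustrative.

```python
import time

class ResponseCache:
    """Tiny TTL cache keyed by (method, path) — the essence of what a
    caching reverse proxy keeps in front of its backends."""
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, body)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]  # cache hit: the backend is never contacted
        return None          # miss or expired: forward to a backend

    def put(self, key, body):
        self._store[key] = (time.monotonic() + self.ttl, body)

cache = ResponseCache(ttl_seconds=60)
cache.put(("GET", "/popular"), b"hello")
print(cache.get(("GET", "/popular")))  # b'hello' — served from cache
print(cache.get(("GET", "/other")))    # None — must hit a backend
```

Real caches also respect Cache-Control headers, vary keys on headers like Accept-Encoding, and bound memory use — omitted here for brevity.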

Request Routing & Path-Based Routing

A single reverse proxy can route traffic to different backend services based on the URL path, hostname, or request headers. /api/* routes to the API service, /static/* routes to a file server, and app.example.com routes to the web app — all behind one IP address.
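A routing table for the examples above might look like the sketch below; the backend addresses are hypothetical, and routes are checked in order with first match winning.

```python
# Host- and path-based routing: one proxy, one public IP, many services.
ROUTES = [
    ("app.example.com", "/",        "webapp:3000"),       # web app
    ("api.example.com", "/api/",    "api-service:8000"),  # API service
    ("api.example.com", "/static/", "file-server:9000"),  # static files
]

def route(host: str, path: str):
    """Return the backend for this request, or None for a 404 at the proxy."""
    for route_host, prefix, backend in ROUTES:
        if host == route_host and path.startswith(prefix):
            return backend
    return None

print(route("api.example.com", "/api/users"))  # api-service:8000
print(route("app.example.com", "/dashboard"))  # webapp:3000
print(route("api.example.com", "/nope"))       # None
```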

Authentication & Authorization

The reverse proxy can enforce authentication before requests reach backend services. If a request lacks a valid session cookie or JWT, the proxy returns a 401 or redirects to the login page. Backend services don’t need to implement auth themselves — they trust that the proxy has already verified the caller.
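The gate itself reduces to a check that runs before any backend is contacted. The token set below stands in for real verification (a session store lookup or JWT signature check), which this sketch does not perform.

```python
# Proxy-level auth gate: unauthenticated requests stop here and the
# backend is never contacted. `VALID_TOKENS` is a hypothetical stand-in
# for real session/JWT verification.
VALID_TOKENS = {"token-abc"}

def gate(headers: dict):
    """Return (401, None) to reject, or (200, action) to forward."""
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if token not in VALID_TOKENS:
        return 401, None  # reject at the edge; could also redirect to login
    return 200, "forward-to-backend"

print(gate({"Authorization": "Bearer token-abc"}))  # (200, 'forward-to-backend')
print(gate({}))                                     # (401, None)
```

Backends behind this gate can then trust that every request they see has already been authenticated.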

Rate Limiting

Throttling at the proxy layer protects backend services from abuse and traffic spikes. All incoming requests pass through the proxy, making it the natural place to enforce per-IP or per-token rate limits without any changes to backend code.
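A token bucket is a common way to implement this at the proxy; the capacity and refill rate below are illustrative, and a real deployment would keep one bucket per IP or per token.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter as used at the proxy layer.
    Each request spends one token; tokens refill over time."""
    def __init__(self, capacity=5, refill_per_sec=1.0):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the proxy would answer HTTP 429 Too Many Requests

bucket = TokenBucket(capacity=3, refill_per_sec=0.0)  # no refill, for the demo
print([bucket.allow() for _ in range(4)])  # [True, True, True, False]
```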

Compression

The reverse proxy can gzip or brotli-compress responses before sending them to clients, reducing bandwidth usage. Backend servers send uncompressed responses to the proxy (fast internal network), and the proxy compresses on the way out.
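The decision hinges on the client's Accept-Encoding header. A sketch of that negotiation, using Python's standard gzip module (a real proxy would also consider brotli and skip already-compressed content types):

```python
import gzip

def maybe_compress(body: bytes, accept_encoding: str):
    """Compress the backend's response only if the client advertised gzip.
    Returns (body, extra_headers)."""
    if "gzip" in accept_encoding:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}  # client can't decompress; send as-is

body = b"hello world " * 200  # repetitive payloads compress well
compressed, headers = maybe_compress(body, "gzip, deflate, br")
print(len(body), "->", len(compressed), headers)
_, headers2 = maybe_compress(body, "")
print(headers2)  # {} — no gzip in Accept-Encoding
```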

💡
Reverse Proxy vs Load Balancer

These terms overlap heavily in practice. A load balancer specifically distributes traffic across multiple backends. A reverse proxy is a broader concept — it can load balance, but also cache, terminate TLS, route, and more. Most modern reverse proxies (nginx, HAProxy, Traefik, Envoy) can do both. In system design, “reverse proxy” usually implies the full set of capabilities, while “load balancer” emphasizes traffic distribution specifically.

Forward vs Reverse

The easiest way to distinguish them is: who is the proxy acting for?

                          Forward Proxy                            Reverse Proxy
  Acts on behalf of       Client                                   Server
  Positioned in front of  Clients (between client and internet)    Servers (between internet and backends)
  Who configures it       Client or network admin                  Server/infrastructure admin
  Hides                   Client identity from origin server       Backend topology from clients
  Primary use cases       Anonymity, filtering, caching, bypass    Load balancing, TLS termination, caching, routing
  Examples                Corporate proxy, VPN, Squid              nginx, HAProxy, Traefik, Envoy, Cloudflare

Load Balancing & SSL Termination

These two capabilities deserve extra attention because they appear in nearly every production system.

How SSL Termination Works

When a client connects to https://api.example.com:

  1. The TLS handshake happens between the client and the reverse proxy. The proxy holds the TLS certificate for api.example.com.
  2. After the handshake, the client’s data is decrypted at the proxy.
  3. The proxy forwards plain HTTP to the backend (over a trusted internal network).
  4. The response is encrypted at the proxy and sent back to the client.

This offloads all cryptographic work from backend servers. It also means you manage TLS in one place (the proxy) rather than on every backend instance. Tools like Caddy and Traefik automate Let’s Encrypt certificate issuance and renewal at the proxy layer.
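The proxy side of steps 1–2 amounts to holding a server-side TLS context. A minimal sketch using Python's standard ssl module — the certificate paths are placeholders for the real files issued for api.example.com:

```python
import ssl

# Server-side TLS context the proxy uses to terminate client connections.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# In a real deployment the proxy loads its certificate and key here:
# ctx.load_cert_chain("api.example.com.pem", "api.example.com.key")
#
# It would then accept TCP connections, wrap each one with
# ctx.wrap_socket(sock, server_side=True) to perform the handshake,
# decrypt the client's data, and forward plain HTTP to a backend.

print(ctx.minimum_version)
```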

⚠️
mTLS for Internal Traffic

SSL termination at the proxy means internal traffic between the proxy and backends is unencrypted by default. In a zero-trust environment, you may want mutual TLS (mTLS) on internal connections too — the proxy re-encrypts traffic before forwarding. This adds overhead but ensures no unencrypted data travels even on your internal network.

Health Checks

A reverse proxy performing load balancing continuously checks whether backend servers are healthy. If a backend fails its health check (no response, wrong status code), the proxy removes it from the rotation and stops sending it traffic. When the backend recovers, the proxy detects it and re-adds it. This provides automatic failover without any operator intervention.
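The bookkeeping behind this is small: track each backend's health and only hand out the healthy ones. The backend names below are illustrative, and a real proxy usually requires several consecutive failures before removal to avoid flapping.

```python
# Health-check bookkeeping: failed backends leave the rotation,
# recovered backends rejoin — no operator intervention needed.
class HealthCheckedPool:
    def __init__(self, backends):
        self.status = {b: True for b in backends}  # True = healthy

    def record(self, backend, healthy: bool):
        """Record the result of the latest health check."""
        self.status[backend] = healthy

    def in_rotation(self):
        """Backends currently eligible to receive traffic."""
        return [b for b, ok in self.status.items() if ok]

pool = HealthCheckedPool(["app-1", "app-2", "app-3"])
pool.record("app-2", False)   # failed its check -> removed
print(pool.in_rotation())     # ['app-1', 'app-3']
pool.record("app-2", True)    # recovered -> re-added
print(pool.in_rotation())     # ['app-1', 'app-2', 'app-3']
```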

Benefits

Security

A reverse proxy is a single hardened entry point for all external traffic. Backend servers are not exposed to the internet — only the proxy’s IP is public. DDoS attacks, port scans, and exploit attempts hit the proxy rather than application servers. Backend servers can run on private network addresses with no public internet access at all.

Scalability

Adding backend capacity is transparent to clients. Bring up new backend instances, register them with the proxy, and they immediately start receiving traffic. Remove instances for maintenance or scaling down without any downtime. The proxy is the only component clients are aware of.

Centralized Cross-Cutting Concerns

Authentication, logging, rate limiting, compression, and caching are all implemented once at the proxy rather than in every backend service. Backend services stay focused on business logic. This is a key advantage in microservice architectures where you may have dozens of services — the proxy provides a consistent layer for operational concerns.

Canary Deployments & A/B Testing

By controlling how the proxy routes traffic, you can send a small percentage of requests (say 5%) to a new version of a service while the rest go to the stable version. This is a canary deployment: you validate the new version in production at low risk before shifting all traffic over.
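One common way to implement the split is to hash a stable request attribute (user or session ID) so each user consistently sees the same version. The 5% threshold matches the example above; the user IDs are hypothetical.

```python
import hashlib

def pick_version(user_id: str, canary_percent: int = 5) -> str:
    """Stable hash-based traffic split: the same user always gets
    the same version, and ~canary_percent of users hit the canary."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = (digest[0] * 256 + digest[1]) % 100  # stable 0..99 per user
    return "canary" if bucket < canary_percent else "stable"

print(pick_version("user-42"))  # same answer on every request from this user
share = sum(pick_version(f"user-{i}") == "canary" for i in range(10_000)) / 10_000
print(f"canary share ≈ {share:.1%}")  # close to 5%
```

Hashing rather than random choice matters: a user bouncing between versions on successive requests would see inconsistent behavior mid-session.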

Proxy in System Design

Proxies appear at every layer of a production system. Understanding where they live and what they do helps you reason clearly about traffic flow, security boundaries, and failure modes.

Typical Architecture

A standard production stack layers proxies at multiple levels:

  1. CDN / Edge — caches static content globally, absorbs volumetric attacks, terminates TLS close to users.
  2. Load balancer / reverse proxy — distributes traffic across backend pods/instances, handles path-based routing to different services.
  3. Service mesh (optional) — manages east-west traffic (service-to-service), enforces mTLS, provides circuit breaking and observability.

💡
In System Design Interviews

Mention a reverse proxy early when designing any multi-instance backend. State explicitly what it handles: TLS termination, load balancing, health checks, and routing. Call out that backends are not directly internet-accessible. If the question involves authentication or rate limiting, note these can be implemented at the proxy layer. For global systems, layer a CDN in front of the reverse proxy. Interviewers expect you to know that nginx/HAProxy/Traefik/Envoy are standard choices — naming a specific tool is more impressive than a vague “proxy server.”