Proxy (Forward & Reverse)
The word “proxy” means acting on behalf of someone else. In networking, a proxy server intercepts traffic between two parties — either on behalf of the client (forward proxy) or on behalf of the server (reverse proxy). These two concepts are often confused, but they solve fundamentally different problems. Understanding them clearly is essential for system design: reverse proxies appear in virtually every production architecture.
What Is a Proxy?
A proxy server is an intermediary that sits between a client and a server, forwarding requests and responses between them. Rather than the client communicating directly with the destination server, all traffic passes through the proxy.
The proxy’s position and purpose determine which type it is. A forward proxy sits in front of clients and acts on their behalf. A reverse proxy sits in front of servers and acts on their behalf. The destination server may or may not know a proxy is involved, depending on the type.
The terms proxy and gateway are sometimes used interchangeably, but there is a distinction. A proxy operates at the application layer (Layer 7) and understands the protocol (HTTP, SMTP). A gateway translates between protocols or network segments. In practice, most “proxies” in system design discussions are application-layer reverse proxies.
Forward Proxy
A forward proxy acts on behalf of clients. Clients are configured (or forced) to route their requests through the proxy. The destination server receives requests originating from the proxy’s IP address, not the client’s. From the server’s perspective, it is talking to the proxy.
How It Works
- The client sends a request to the forward proxy (e.g., GET http://example.com/).
- The proxy evaluates the request against its policy (allow/deny, caching, logging).
- If permitted, the proxy forwards the request to example.com using its own IP address.
- The response travels back through the proxy to the client.
The origin server sees the proxy’s IP, not the real client’s. This is the basis of the forward proxy’s anonymity and access-control properties.
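The policy-evaluation step above can be sketched as a small allow/deny check with an audit log. This is a minimal illustration, not a real proxy: the blocked domains and client IPs are invented for the example.

```python
# Hypothetical forward-proxy policy check: domain blocklist plus audit log.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"ads.example.net", "social.example.org"}  # example policy

def evaluate_request(url: str, client_ip: str, audit_log: list) -> bool:
    """Return True if the proxy should forward this request upstream."""
    host = urlparse(url).hostname or ""
    allowed = host not in BLOCKED_DOMAINS
    # Every request is logged, allowed or not: the proxy is the chokepoint.
    audit_log.append((client_ip, url, "ALLOW" if allowed else "DENY"))
    return allowed
```

A real corporate proxy would also match URL categories, inspect content types, and enforce per-user policies, but the shape is the same: evaluate, log, then forward or reject.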
Use Cases
Content Filtering & Access Control
Corporate networks route employee internet traffic through a forward proxy that blocks access to disallowed sites, logs all requests, and enforces security policies. The proxy is the chokepoint through which all outbound traffic must pass.
Anonymity & Privacy
A forward proxy hides the client’s real IP address from destination servers. VPNs and tools like Tor use proxy-like mechanisms to anonymize traffic. The origin server cannot distinguish between requests from different clients behind the same proxy.
Caching
Shared forward proxies (common in university networks and ISPs) cache popular content. When many clients request the same resource, the proxy serves it from its cache rather than fetching it from the origin each time. This reduces bandwidth consumption and improves response times.
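The cache-or-fetch decision can be sketched as follows. The fetch function stands in for the network call to the origin; the TTL value is illustrative.

```python
# Sketch of a shared proxy cache in front of an origin fetch, with TTL expiry.
import time

class CachingProxy:
    def __init__(self, fetch, ttl=60.0):
        self.fetch = fetch          # function that actually contacts the origin
        self.ttl = ttl
        self.cache = {}             # url -> (expires_at, body)
        self.origin_hits = 0        # how often we had to go upstream

    def get(self, url):
        now = time.monotonic()
        entry = self.cache.get(url)
        if entry and entry[0] > now:
            return entry[1]         # served from cache: no origin traffic
        self.origin_hits += 1
        body = self.fetch(url)
        self.cache[url] = (now + self.ttl, body)
        return body
```

When many clients behind the proxy request the same URL, only the first request within the TTL window reaches the origin.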
Geo-Restriction Bypass
A forward proxy located in a different country allows clients to access content that is geo-restricted in their own region. The destination server sees the proxy’s country, not the client’s.
Transparent vs Explicit Proxies
Forward proxies require client cooperation — either explicit browser/OS proxy settings, or a network-level transparent proxy that intercepts traffic without client configuration. In corporate environments, transparent proxies are common so policies apply even to uncooperative clients.
Reverse Proxy
A reverse proxy acts on behalf of servers. Clients send requests to what they believe is the origin server, but in reality they are talking to the proxy. The proxy then forwards requests to one or more backend servers. Clients have no visibility into the backend topology — they see only the reverse proxy’s address.
How It Works
- The client resolves api.example.com via DNS — the DNS record points to the reverse proxy.
- The client sends an HTTP request to the reverse proxy.
- The reverse proxy selects a backend server (using a load balancing algorithm), forwards the request, and waits for the response.
- The response travels back through the reverse proxy to the client.
The client never directly contacts a backend server. All it knows is the reverse proxy’s address. Backend servers can be added, removed, or replaced without clients being aware of any change.
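The whole flow can be demonstrated in-process with Python's standard library: one backend server, one reverse proxy that forwards to it, and a client that only ever talks to the proxy. Ports are ephemeral and the path is illustrative; a real deployment would use nginx or similar rather than hand-rolled handlers.

```python
# Minimal end-to-end sketch: a backend plus a reverse proxy forwarding to it.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Backend(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"backend handled {self.path}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

def make_proxy(backend_port):
    class ReverseProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # The client only ever sees this server; we fetch from the backend.
            url = f"http://127.0.0.1:{backend_port}{self.path}"
            with urllib.request.urlopen(url) as upstream:
                body = upstream.read()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):
            pass
    return ReverseProxy

def start(server):
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]    # ephemeral port chosen by the OS

backend_srv = ThreadingHTTPServer(("127.0.0.1", 0), Backend)
backend_port = start(backend_srv)
proxy_srv = ThreadingHTTPServer(("127.0.0.1", 0), make_proxy(backend_port))
proxy_port = start(proxy_srv)

with urllib.request.urlopen(f"http://127.0.0.1:{proxy_port}/orders/42") as r:
    print(r.read().decode())   # the client never contacted the backend directly
```

Swapping the backend for another instance only requires changing the port the proxy forwards to; the client's view (the proxy's address) never changes.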
Use Cases
Load Balancing
The most common use case. The reverse proxy distributes incoming requests across a pool of backend servers using algorithms like round-robin, least connections, or IP hash. This is covered in detail in the Load Balancers guide.
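Two of those selection strategies can be sketched in isolation. The backend names are placeholders; a real proxy would track connection counts from live traffic.

```python
# Round-robin and least-connections backend selection, sketched standalone.
import itertools

class RoundRobin:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)   # s1, s2, ..., s1, s2, ...
    def pick(self):
        return next(self._cycle)

class LeastConnections:
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}    # open connections per backend
    def pick(self):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend
    def release(self, backend):                   # call when a request finishes
        self.active[backend] -= 1
```

Round-robin is stateless per request; least-connections adapts to uneven request durations by steering new traffic toward the least-loaded backend.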
SSL/TLS Termination
The reverse proxy handles TLS encryption and decryption at the edge. Backend servers receive plain HTTP requests and never touch TLS — they don’t need certificates and are spared the CPU cost of cryptographic operations. The proxy maintains a single certificate (or a small set of wildcard/SAN certificates) for all domains it serves.
Caching
A reverse proxy can cache responses from backend servers and serve them directly for subsequent identical requests. This dramatically reduces backend load for read-heavy workloads. nginx and Varnish are frequently deployed purely as caching reverse proxies.
Request Routing & Path-Based Routing
A single reverse proxy can route traffic to different backend services based on the URL path, hostname, or request headers. /api/* routes to the API service, /static/* routes to a file server, and app.example.com routes to the web app — all behind one IP address.
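The routing decision described above is essentially a longest-prefix match. The route table below is illustrative, mirroring the paths from the text.

```python
# Sketch of longest-prefix path routing, as a reverse proxy might apply it.
ROUTES = {
    "/api/":    "api-service",
    "/static/": "file-server",
    "/":        "web-app",      # catch-all for everything else
}

def route(path: str) -> str:
    # Longest matching prefix wins, so /api/users beats the "/" catch-all.
    best = max((p for p in ROUTES if path.startswith(p)), key=len)
    return ROUTES[best]
```

Production proxies layer hostname and header matching on top of this, but path-prefix matching is the core mechanism.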
Authentication & Authorization
The reverse proxy can enforce authentication before requests reach backend services. If a request lacks a valid session cookie or JWT, the proxy returns a 401 or redirects to the login page. Backend services don’t need to implement auth themselves — they trust that the proxy has already verified the caller.
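The gatekeeping logic has this shape. The token check here is a deliberate stand-in: a real deployment would verify a signed JWT or consult a session store rather than a hard-coded set.

```python
# Sketch: the proxy rejects unauthenticated requests before any backend sees
# them. VALID_TOKENS is a placeholder for real JWT/session verification.
VALID_TOKENS = {"secret-token-1"}   # illustrative

def authenticate(headers: dict):
    """Return (status, action) the proxy should take for this request."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401, "missing credentials"
    token = auth[len("Bearer "):]
    if token not in VALID_TOKENS:
        return 401, "invalid token"
    return 200, "forward to backend"
```

Because this runs at the proxy, every backend behind it gets authentication for free and can assume the caller is already verified.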
Rate Limiting
Throttling at the proxy layer protects backend services from abuse and traffic spikes. All incoming requests pass through the proxy, making it the natural place to enforce per-IP or per-token rate limits without any changes to backend code.
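A common implementation is a per-client token bucket: each key (IP or API token) earns tokens at a steady rate up to a burst cap, and each request spends one. The rate and burst values below are illustrative.

```python
# Per-client token bucket, the shape of limiter a proxy applies per IP/token.
import time

class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst   # tokens/sec, max bucket size
        self.buckets = {}                     # key -> (tokens, last_seen_time)

    def allow(self, key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(key, (self.burst, now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[key] = (tokens - 1, now)
            return True
        self.buckets[key] = (tokens, now)
        return False
```

The `now` parameter is injected for testability; in production the limiter would read the clock itself (and usually live in shared storage such as Redis so all proxy instances agree).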
Compression
The reverse proxy can gzip or brotli-compress responses before sending them to clients, reducing bandwidth usage. Backend servers send uncompressed responses to the proxy (fast internal network), and the proxy compresses on the way out.
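A sketch of the outbound compression decision: compress only when the client advertised support and the payload is large enough to benefit. The size threshold is an illustrative value.

```python
# Sketch: gzip on the way out, gated on Accept-Encoding and payload size.
import gzip

MIN_SIZE = 1024   # skip tiny payloads where gzip overhead isn't worth it

def maybe_compress(body: bytes, accept_encoding: str):
    """Return (body_to_send, extra_headers) for the client-facing response."""
    if "gzip" in accept_encoding and len(body) >= MIN_SIZE:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    return body, {}
```

Real proxies also skip already-compressed content types (images, video) and may prefer brotli when the client supports it.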
These terms overlap heavily in practice. A load balancer specifically distributes traffic across multiple backends. A reverse proxy is a broader concept — it can load balance, but also cache, terminate TLS, route, and more. Most modern reverse proxies (nginx, HAProxy, Traefik, Envoy) can do both. In system design, “reverse proxy” usually implies the full set of capabilities, while “load balancer” emphasizes traffic distribution specifically.
Forward vs Reverse
The easiest way to distinguish them is: who is the proxy acting for?
| | Forward Proxy | Reverse Proxy |
|---|---|---|
| Acts on behalf of | Client | Server |
| Positioned in front of | Clients (between client and internet) | Servers (between internet and backends) |
| Who configures it | Client or network admin | Server/infrastructure admin |
| Hides | Client identity from origin server | Backend topology from clients |
| Primary use cases | Anonymity, filtering, caching, bypass | Load balancing, TLS termination, caching, routing |
| Examples | Corporate proxy, VPN, Squid | nginx, HAProxy, Traefik, Envoy, Cloudflare |
Load Balancing & SSL Termination
These two capabilities deserve extra attention because they appear in nearly every production system.
How SSL Termination Works
When a client connects to https://api.example.com:
- The TLS handshake happens between the client and the reverse proxy. The proxy holds the TLS certificate for api.example.com.
- After the handshake, the client’s data is decrypted at the proxy.
- The proxy forwards plain HTTP to the backend (over a trusted internal network).
- The response is encrypted at the proxy and sent back to the client.
This offloads all cryptographic work from backend servers. It also means you manage TLS in one place (the proxy) rather than on every backend instance. Tools like Caddy and Traefik automate Let’s Encrypt certificate issuance and renewal at the proxy layer.
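In code, “the proxy holds the certificate” looks like a server-side TLS context that only the proxy configures. The certificate paths below are placeholders for wherever your issued cert and key actually live.

```python
# Server-side TLS context a terminating proxy would use. Backends never
# build one of these: TLS starts and ends here.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocol versions

# Placeholder paths: point these at the real certificate chain and key.
# ctx.load_cert_chain("fullchain.pem", "privkey.pem")
# tls_sock = ctx.wrap_socket(plain_sock, server_side=True)
```

Renewing a certificate then means updating one context on the proxy tier, not redeploying every backend.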
SSL termination at the proxy means internal traffic between the proxy and backends is unencrypted by default. In a zero-trust environment, you may want mutual TLS (mTLS) on internal connections too — the proxy re-encrypts traffic before forwarding. This adds overhead but ensures no unencrypted data travels even on your internal network.
Health Checks
A reverse proxy performing load balancing continuously checks whether backend servers are healthy. If a backend fails its health check (no response, wrong status code), the proxy removes it from the rotation and stops sending it traffic. When the backend recovers, the proxy detects it and re-adds it. This provides automatic failover without any operator intervention.
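The remove-and-readd behavior typically keys off consecutive failures, so one dropped probe doesn't eject a healthy backend. The probe function is injected here so the logic is testable without a network; the threshold is an illustrative default.

```python
# Sketch of health-check bookkeeping: a backend leaves the rotation after
# N consecutive failed probes and returns on the first success.
class HealthChecker:
    def __init__(self, backends, probe, fail_threshold=3):
        self.probe = probe                       # probe(backend) -> bool
        self.fail_threshold = fail_threshold
        self.failures = {b: 0 for b in backends} # consecutive failures so far

    def run_checks(self):
        for backend in self.failures:
            if self.probe(backend):
                self.failures[backend] = 0       # recovered: back in rotation
            else:
                self.failures[backend] += 1

    def in_rotation(self):
        return [b for b, f in self.failures.items() if f < self.fail_threshold]
```

A real proxy runs `run_checks` on a timer (e.g. every few seconds) and feeds `in_rotation()` to its load balancing algorithm.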
Benefits
Security
A reverse proxy is a single hardened entry point for all external traffic. Backend servers are not exposed to the internet — only the proxy’s IP is public. DDoS attacks, port scans, and exploit attempts hit the proxy rather than application servers. Backend servers can run on private network addresses with no public internet access at all.
Scalability
Adding backend capacity is transparent to clients. Bring up new backend instances, register them with the proxy, and they immediately start receiving traffic. Remove instances for maintenance or scaling down without any downtime. The proxy is the only component clients are aware of.
Centralized Cross-Cutting Concerns
Authentication, logging, rate limiting, compression, and caching are all implemented once at the proxy rather than in every backend service. Backend services stay focused on business logic. This is a key advantage in microservice architectures where you may have dozens of services — the proxy provides a consistent layer for operational concerns.
Canary Deployments & A/B Testing
By controlling how the proxy routes traffic, you can send a small percentage of requests (say 5%) to a new version of a service while the rest go to the stable version. This is a canary deployment: you validate the new version in production at low risk before shifting all traffic over.
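Canary routing is often done by hashing a stable request attribute into a 0-99 bucket, so each user is deterministically pinned to one version across requests. The percentage and version labels below are illustrative.

```python
# Deterministic canary routing: hash the user id into 0-99 and send the
# bottom slice to the new version.
import zlib

CANARY_PERCENT = 5   # illustrative rollout slice

def choose_version(user_id: str) -> str:
    bucket = zlib.crc32(user_id.encode()) % 100
    return "v2-canary" if bucket < CANARY_PERCENT else "v1-stable"
```

Using a hash instead of a per-request random draw means a user doesn't flip between versions mid-session, and widening the rollout is just raising the percentage.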
Proxy in System Design
Proxies appear at every layer of a production system. Understanding where they live and what they do helps you reason clearly about traffic flow, security boundaries, and failure modes.
Common Proxy Components
- Edge reverse proxy: The internet-facing entry point for all traffic. Handles TLS termination, DDoS mitigation, and global load balancing. Cloudflare, AWS CloudFront, and dedicated nginx/HAProxy instances all play this role.
- API gateway: A specialized reverse proxy for API traffic that adds authentication, rate limiting, request transformation, and routing. AWS API Gateway and Kong are purpose-built for this. Covered in detail in the API Gateway guide.
- Service mesh sidecar: In microservice architectures, a lightweight proxy (like Envoy) runs as a sidecar alongside each service. All inter-service communication goes through the sidecar, providing observability, retries, circuit breaking, and mTLS at the mesh level without any application-level changes.
Typical Architecture
A standard production stack layers proxies at multiple levels:
- CDN / Edge — caches static content globally, absorbs volumetric attacks, terminates TLS close to users.
- Load balancer / reverse proxy — distributes traffic across backend pods/instances, handles path-based routing to different services.
- Service mesh (optional) — manages east-west traffic (service-to-service), enforces mTLS, provides circuit breaking and observability.
Mention a reverse proxy early when designing any multi-instance backend. State explicitly what it handles: TLS termination, load balancing, health checks, and routing. Call out that backends are not directly internet-accessible. If the question involves authentication or rate limiting, note these can be implemented at the proxy layer. For global systems, layer a CDN in front of the reverse proxy. Interviewers expect you to know that nginx/HAProxy/Traefik/Envoy are standard choices — naming a specific tool is more impressive than a vague “proxy server.”