Mega Guide: How to Deploy Keycloak in a Cluster with Nginx in 10 Minutes!
Run Keycloak in HA with Nginx in front — fast to start, safe to scale. This guide shows you how to get a working cluster in minutes, then harden it for production.
Quickstart: 10-Minute Cluster Setup
Want to get a feel for it with Docker first? Spin up two Keycloak nodes with Nginx in front. This is perfect for local validation before you move to Kubernetes.
docker-compose.yml (minimal lab)
```yaml
# 'version' is ignored by recent Docker Compose, kept here for older installs
version: "3.9"

services:
  keycloak-1:
    image: quay.io/keycloak/keycloak:latest   # pin a specific version for anything beyond a lab
    command: >
      start --http-enabled=true
      --hostname-strict=false
      --proxy-headers=xforwarded
      --health-enabled=true
      --metrics-enabled=true
    environment:
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: keycloak
      # On Keycloak 26+ these are renamed KC_BOOTSTRAP_ADMIN_USERNAME / _PASSWORD
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    depends_on: [postgres]

  keycloak-2:
    image: quay.io/keycloak/keycloak:latest
    command: >
      start --http-enabled=true
      --hostname-strict=false
      --proxy-headers=xforwarded
      --health-enabled=true
      --metrics-enabled=true
    environment:
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: keycloak
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    depends_on: [postgres]

  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: keycloak

  nginx:
    image: nginx:1.27
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    ports: ["80:80", "443:443"]
    depends_on: [keycloak-1, keycloak-2]
```
Create `nginx.conf` with the config from the Nginx section, then:

```shell
docker compose up -d && open https://localhost   # 'open' is macOS; use xdg-open on Linux
```
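The compose file mounts `./certs` into the Nginx container, so generate a throwaway self-signed pair before bringing the stack up. A sketch using `openssl`; the file names match the `tls.crt`/`tls.key` paths the Nginx section expects:

```shell
# Create a self-signed certificate for the lab (do NOT use in production)
mkdir -p certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout certs/tls.key -out certs/tls.crt \
  -subj "/CN=localhost"
```

Your browser will warn about the self-signed certificate; that is expected for a local lab.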
Modern Keycloak (Quarkus) flags
Use `--proxy-headers=xforwarded`, `--http-enabled=true`, and `--hostname-strict=false` for reverse-proxy setups. The legacy `PROXY_ADDRESS_FORWARDING` variable is not needed on recent versions.
Keycloak Clustering: What & Why
Keycloak delivers SSO with OpenID Connect, OAuth 2.0, and SAML. Single node is fine until you need uptime targets, rolling upgrades, or traffic spikes. Clustering adds redundancy and scale; Infinispan provides distributed caches for sessions and tokens.
Two common paths
VM/Bare-metal: JGroups (TCP/DNS_PING/JDBC_PING). Cloud/Kubernetes: KUBE_PING via the Kubernetes API. Both work — pick the one that fits your platform and operations model.
Why Nginx for Keycloak
Performance
- Efficient TLS termination
- Low memory footprint under high concurrency
- Smart buffering to protect backends
Reliability & Security
- Hides internal topology
- Passive health checks (failover) in OSS
- Granular header control for proxy setups
If you’re in Kubernetes, an Ingress Controller (like NGINX Ingress) can play the same role. The rest of this article still applies: TLS termination, proxy headers, and session strategy matter most.
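If you take the Ingress route, the equivalent knobs live in annotations. A hedged sketch for the community ingress-nginx controller (the host, service name, and port are placeholders; annotation names assume that specific controller, so verify them against its documentation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/affinity: "cookie"   # cookie-based stickiness
spec:
  ingressClassName: nginx
  rules:
    - host: sso.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak       # placeholder Service name
                port:
                  number: 8080
```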
Clustering Modes (JGroups vs Kubernetes)
JGroups
Works well for static or VM-based clusters. Discovery via TCPPING (static host lists), UDP multicast, DNS_PING, or JDBC_PING; the latter is handy in container stacks that share a DB.
Heads-up
JGroups needs clean networking (ports/firewall) and coherent discovery settings. Latency between nodes impacts cache replication and session experience.
Kubernetes
Use KUBE_PING for discovery. Prefer StatefulSet (stable identities) and a headless Service for DNS. Scaling is simpler; node liveness is handled by the platform.
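A minimal sketch of that shape (the names `keycloak-headless` and `app: keycloak` are placeholders, and DB config, probes, and resource limits are omitted for brevity; verify the `KC_CACHE_STACK` value against your Keycloak version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
spec:
  clusterIP: None            # headless: DNS returns pod IPs for peer discovery
  selector:
    app: keycloak
  ports:
    - name: http
      port: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: keycloak
spec:
  serviceName: keycloak-headless
  replicas: 2
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:latest
          args: ["start", "--http-enabled=true", "--proxy-headers=xforwarded", "--hostname-strict=false"]
          env:
            - name: KC_CACHE
              value: ispn
            - name: KC_CACHE_STACK
              value: kubernetes
          ports:
            - containerPort: 8080
```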
Service Discovery Basics
New nodes must quickly find peers, replicate cache state, and start serving requests. Discovery is your automation for that handshake.
Dynamic Scaling
- Nodes join/leave with no manual IP lists
- Faster rollouts and blue/green swaps
- Less brittle than static configs
Fault Tolerance
- Unhealthy nodes excluded automatically
- Session continuity across nodes
- Less pager noise during maintenance
Discovery Options in Practice
Common picks
- TCPPING: static IP lists (small, stable clusters)
- UDP: multicast within a subnet (labs)
- JDBC_PING: DB-based registry (containers)
- KUBE_PING: Kubernetes Service API (cloud)
- DNS_PING: cloud DNS records (AWS/GCP/Azure)
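On modern (Quarkus) Keycloak these options map to the `--cache-stack` setting, or equivalently the `KC_CACHE_STACK` env var. A hedged sketch, since the available stack names vary by version:

```yaml
# docker-compose style; verify stack names against your Keycloak version's docs
environment:
  KC_CACHE: ispn
  KC_CACHE_STACK: jdbc-ping      # DB-backed discovery for container stacks sharing a DB
  # KC_CACHE_STACK: kubernetes   # DNS-based discovery against a headless Service
  # KC_CACHE_STACK: tcp          # TCPPING-style static lists (extra JGroups properties needed)
```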
Nginx Config (TLS, Proxy, Sticky Sessions)
Below is a safe default for OSS Nginx. It terminates TLS, forwards the client's original headers, and uses passive health checks (`max_fails`, `fail_timeout`). For session affinity you can start with `ip_hash`; it's simple and effective.
```nginx
events { worker_connections 1024; }

http {
    # If in Docker, 127.0.0.11 is the embedded DNS. Otherwise, point to your resolver.
    resolver 127.0.0.11 valid=30s ipv6=off;

    # Required for the Connection header used below (WebSocket upgrades)
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream keycloak_http {
        zone keycloak_http 64k;
        ip_hash;  # Simple sticky sessions
        server keycloak-1:8080 max_fails=3 fail_timeout=10s;
        server keycloak-2:8080 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl;
        http2 on;  # "listen ... http2" is deprecated since nginx 1.25
        server_name _;

        ssl_certificate     /etc/nginx/certs/tls.crt;
        ssl_certificate_key /etc/nginx/certs/tls.key;
        ssl_protocols       TLSv1.2 TLSv1.3;
        ssl_ciphers         HIGH:!aNULL:!MD5;

        # Health probe: requires --health-enabled=true on Keycloak. Note that on
        # recent versions the health endpoint moves to the management port (9000),
        # so point this at that port if your version serves it there.
        location = /health {
            proxy_pass http://keycloak_http/health;
            proxy_set_header Host $host;
        }

        location / {
            proxy_pass http://keycloak_http;
            proxy_http_version 1.1;

            # Forwarded headers for modern Keycloak
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port $server_port;

            # Upgrade support (WebSockets, admin console live features)
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            # Buffers and timeouts: tune for your traffic profile
            proxy_connect_timeout 5s;
            proxy_read_timeout 60s;
            proxy_send_timeout 60s;
            proxy_buffering on;
        }
    }
}
```
Why this works
- `ip_hash` offers basic session affinity without extra modules.
- Forwarded headers align with modern Keycloak flags (`--proxy-headers=xforwarded`).
- Passive health checks remove bad peers after a few failures. (Active checks require NGINX Plus or extra tooling.)
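While you're in the Nginx config, the proxy is also a convenient place to fence off the admin console. A sketch that would sit inside the same `server { }` block as `location /` (the CIDR is a placeholder for your VPN or office range):

```nginx
location /admin/ {
    allow 10.0.0.0/8;   # placeholder: your VPN or office range
    deny  all;

    proxy_pass http://keycloak_http;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```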
Production Hardening Checklist
- Set `--hostname` and (if needed) `--hostname-strict` appropriately; ensure external URL consistency.
- Turn on `--metrics-enabled=true` and scrape with Prometheus; add alerts (session spikes, 5xx, login errors).
- Use a managed DB (HA Postgres), tune connection pools, and enable DB TLS.
- Rotate admin credentials, restrict the admin console by IP/VPN, and enable 2FA for admins.
- If you need stronger affinity, consider a service mesh (sticky routes) or NGINX Plus for advanced cookie-based persistence.
- In Kubernetes: use a `StatefulSet` + headless Service; prefer NGINX Ingress with annotations for timeouts/headers.
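For the metrics item above, a minimal Prometheus scrape sketch. The targets and port are assumptions: recent Keycloak serves `/metrics` on the management port (9000), while older versions serve it on the main HTTP port:

```yaml
scrape_configs:
  - job_name: keycloak
    metrics_path: /metrics
    static_configs:
      - targets:
          - keycloak-1:9000
          - keycloak-2:9000
```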
FAQ
Do I still need PROXY_ADDRESS_FORWARDING?
Not on modern Keycloak (Quarkus). Use `--proxy-headers=xforwarded` and keep your Nginx headers consistent.
Is sticky session mandatory?
Helpful, not mandatory. Infinispan replicates session data, but affinity avoids noisy hops. Start with `ip_hash`; move to cookie-based persistence if you truly need it.
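If `ip_hash` falls short (for example, many clients behind one NAT), OSS Nginx can approximate cookie affinity by hashing Keycloak's own session cookie. A sketch; the cookie name `AUTH_SESSION_ID` is what current Keycloak sets, but verify it on your version:

```nginx
upstream keycloak_http {
    # Consistent hash on Keycloak's session cookie; requests without the
    # cookie (first visit) hash on an empty key and may land on any node.
    hash $cookie_AUTH_SESSION_ID consistent;
    server keycloak-1:8080 max_fails=3 fail_timeout=10s;
    server keycloak-2:8080 max_fails=3 fail_timeout=10s;
}
```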
Active health checks in OSS Nginx?
Native active health checks are an NGINX Plus feature. In OSS, rely on passive checks or probe endpoints with external tooling and update upstreams dynamically.
Where to go next?
Explore Keycloak RBAC patterns and integrating Keycloak with Next.js.