Lesson 11
Liveness & Readiness Probes
~4 min read
Your application might be running, but is it actually healthy? Can it accept traffic? Kubernetes uses probes to continuously monitor container health and make intelligent decisions about traffic routing and restarts.
Liveness vs Readiness
Kubernetes distinguishes between two types of health:
- Liveness probe — Is the process stuck? If the liveness probe fails repeatedly, Kubernetes restarts the container. Use this to recover from deadlocks or hung processes.
- Readiness probe — Can the app accept traffic? If the readiness probe fails, Kubernetes removes the pod from Service endpoints but does NOT restart it. Use this for apps that need warm-up time or temporarily can't serve requests.
Probe types
Kubernetes supports three probe mechanisms:
| Type | How it works | Good for |
|---|---|---|
| httpGet | Sends an HTTP GET request; success = 2xx/3xx response | Web applications |
| tcpSocket | Attempts a TCP connection | Databases, non-HTTP services |
| exec | Runs a command in the container; success = exit code 0 | Custom health checks |
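As a sketch, the tcpSocket and exec forms look like this inside a container spec (the port, command, and timings here are illustrative, not taken from the sandbox):

```yaml
# tcpSocket probe: healthy if a TCP connection to the port can be opened
livenessProbe:
  tcpSocket:
    port: 5432          # e.g. a PostgreSQL container (illustrative)
  periodSeconds: 10
# exec probe: healthy if the command exits with status 0
readinessProbe:
  exec:
    command: ["sh", "-c", "test -f /tmp/ready"]   # hypothetical readiness marker file
  periodSeconds: 5
```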
Each probe has configuration parameters:
- initialDelaySeconds — wait before first probe
- periodSeconds — how often to probe
- failureThreshold — consecutive failures before action
- successThreshold — consecutive successes to be considered healthy
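A sketch of how these parameters combine (values here are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5   # wait 5s after the container starts before probing
  periodSeconds: 10        # probe every 10s
  failureThreshold: 3      # after 3 consecutive failures, restart the container
  successThreshold: 1      # one success marks it healthy (must be 1 for liveness probes)
```

With these values, a container that hangs after startup is restarted roughly 30 seconds (3 failures × 10s per probe) after it stops responding.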
Exploring the healthy-app
Our cluster has a healthy-app deployment with both probes configured. Check its status:
kubectl get pods

The healthy-app pods show 1/1 in the READY column — both probes are passing. Let's look at the probe configuration:
kubectl describe pod <healthy-app-pod-name>

In the container section, you'll see the probe definitions:
- Liveness: http-get http://:8080/healthz — checks every 10 seconds after an initial 5-second delay
- Readiness: http-get http://:8080/ready — checks every 5 seconds after an initial 3-second delay
When readiness fails
Now look at slow-start-app:
kubectl get pods

Notice it shows 0/1 in the READY column even though the STATUS is Running. The container process is alive, but the readiness probe is failing.
Check the events:
kubectl describe pod <slow-start-app-pod-name>

You'll see Unhealthy warning events — the readiness probe returns a 503 status code. This means:
- The pod is not killed (liveness is fine or not configured)
- The pod is removed from Service endpoints (no traffic routed to it)
- Kubernetes keeps probing, and the pod will become ready once the probe passes
Deploying with probes
Let's deploy an application with both probes configured:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitored-app
  labels:
    app: monitored-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitored-app
  template:
    metadata:
      labels:
        app: monitored-app
    spec:
      containers:
      - name: monitored-app
        image: nginx:1.26.2
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /    # stock nginx serves only /; probing /healthz would 404 and fail
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 15
          failureThreshold: 3
          successThreshold: 1
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
          successThreshold: 1

The sandbox has this manifest stored as monitored-app.yaml. Apply and verify:
kubectl apply -f monitored-app.yaml
kubectl get pods
kubectl describe pod <monitored-app-pod-name>

Probes and rolling updates
Readiness probes play a critical role during rolling updates. When Kubernetes creates new pods during an update, it waits for the readiness probe to pass before routing traffic to the new pod and terminating old ones. This ensures zero-downtime deployments — if the new version is broken, it never receives traffic.
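This interaction can be tuned in the Deployment's update strategy. A sketch with illustrative values (the strategy block is standard Kubernetes, but these numbers are not from the sandbox manifest):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # create at most one extra pod during the update
      maxUnavailable: 0  # never remove an old pod until a new one passes readiness
```

With maxUnavailable: 0, capacity never drops during the rollout — old pods are only terminated after replacements report ready.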
Best practices
- Always configure both probes for production workloads
- Don't use the same endpoint for liveness and readiness — readiness can be stricter
- Set appropriate delays — give your app time to start before probing
- Keep probes lightweight — they run frequently; expensive checks hurt performance
- Readiness probes should check dependencies — database connections, cache warm-up
- Liveness probes should check the process — simple health endpoint that confirms the app isn't stuck