Lesson 13
Updating applications
Kubernetes has a built-in rolling update mechanism that ensures zero downtime during updates. It works by gradually replacing old pods with new ones until all pods have been updated, so end users experience no disruption of service. If a new pod fails to launch for any reason, Kubernetes stops the rollout and leaves the old pods in place, keeping your application available at all times.
[Diagram: rolling update from v1 to v2. Pods P1, P2, and P3, each running v1, are replaced one at a time with v2 pods.]
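The pace of a rolling update is configurable through the Deployment's update strategy. A minimal manifest sketch follows; the deployment name, replica count, labels, and the maxSurge/maxUnavailable values here are illustrative, not taken from the example below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 pod above the desired count during the update
      maxUnavailable: 0  # never remove a pod before its replacement is ready
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27.3
```

With maxUnavailable set to 0, Kubernetes always brings a new pod up before taking an old one down, which is the behavior described above.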
This makes updating applications a simple process. As soon as we have a release ready, packaged and containerized as a Docker image, we can update the deployment manifest with the new image.
Kubernetes Deployments describe the desired state of your application, so once we have updated the deployment manifest, Kubernetes takes over and reconciles reality with that desired state using the rolling update mechanism described earlier.
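Concretely, the update can be as small as changing the image tag in the manifest and re-applying it. A sketch of the relevant fragment (the file name and container name are illustrative):

```yaml
# deployment.yaml (fragment): bump the container image to the new release,
# then re-apply with: kubectl apply -f deployment.yaml
spec:
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.27.3   # change this tag to roll out a new version
```

Alternatively, kubectl set image (used below) makes the same change imperatively without editing the file.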
Before we update our example application let's run:
kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
nginx-deployment-685b6fc776-w8fmr 1/1 Running 0 2d
Now we will get updates to our application pods if anything changes. In another terminal window, update the image for the deployment:
kubectl set image deployment/nginx-deployment nginx=nginx:1.27.3
deployment.apps/nginx-deployment image updated
As soon as you run this in the other window, you will notice that the output from the process watching the pods changes:
kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
nginx-deployment-685b6fc776-w8fmr 1/1 Running 0 2d
nginx-deployment-7754fdff65-slkqh 0/1 Pending 0 0s
nginx-deployment-7754fdff65-slkqh 0/1 ContainerCreating 0 0s
nginx-deployment-7754fdff65-slkqh 1/1 Running 0 9s
nginx-deployment-685b6fc776-w8fmr 1/1 Terminating 0 2d
nginx-deployment-685b6fc776-w8fmr 0/1 Terminating 0 2d
In the output you can see that a pod with a new identifier, nginx-deployment-7754fdff65-slkqh, has been created. It starts in the Pending state, then its container gets created, and finally the pod starts running. As soon as the new pod is up and running, Kubernetes starts terminating the old one.
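When the old pod can be terminated depends on when the new pod is considered ready. By default a pod counts as ready once its containers are running, but a readinessProbe gives Kubernetes a more reliable signal before it shifts traffic and removes the old pod. A sketch, assuming the probe path, port, and timings shown (all illustrative):

```yaml
# Container fragment of the Deployment's pod template
containers:
- name: nginx
  image: nginx:1.27.3
  readinessProbe:
    httpGet:
      path: /            # endpoint that returns 2xx when the app can serve traffic
      port: 80
    initialDelaySeconds: 2   # wait briefly before the first check
    periodSeconds: 5         # re-check every 5 seconds
```

Without a probe, a rolling update only guarantees the new container started, not that the application inside it is actually serving requests.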
Finally you have an updated deployment with no downtime.
You can verify the complete status of the pods for this deployment by running:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-7754fdff65-slkqh 1/1 Running 0 40s
After updating a deployment, it is good practice to annotate it with the cause of the change. You will see later why this is a good idea.
kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="version change to 1.27" --overwrite=true
deployment.apps/nginx-deployment annotated
To get a better feel for this, let's update it again:
kubectl set image deployment/nginx-deployment nginx=nginx:1.28.0
deployment.apps/nginx-deployment image updated
In the terminal watching the pod status, new output should appear showing the same rolling update pattern: a new pod created, brought to the Running state, then the old pod terminated.
You can verify that the new pod is up and running:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-bfcc6b74b-wqx4r 1/1 Running 0 18s
Finally, for posterity, let's annotate this new deployment:
kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="version change to 1.28" --overwrite=true
deployment.apps/nginx-deployment annotated
Now we have successfully updated our application. Let's move on to the final part: revision histories and rollbacks.