In our previous post, we discussed the concept of progressive delivery, a modern approach to deploying applications gradually and safely. Using progressive delivery, teams can monitor changes and performance, minimize risk, and roll back changes if necessary. One of the key strategies we discussed was Canary deployment, in which a new version of an application is first released to a small subset of users before releasing it to the entire user base.
In this blog post, we’ll look deeper at canary deployment in action using Argo Rollouts. We’ll use Testkube to automate the testing of new releases, ensuring that only stable and validated versions of your application are promoted.
Combining canary deployments with Argo Rollouts and Testkube allows for gradual feature deployment while maintaining stability and performance. Let's look at one use case to see how these tools can help you implement canary releases, which test changes on a small subset of users before rolling them out to everyone.
In this use case, we will show you how to use Argo Rollouts and Testkube to set up a progressive delivery strategy. We will release a weather app in two versions: v1, which shows the weather in Hyderabad, and v2, which shows the weather in New York.
To manage the deployment, we'll start by creating a rollout template and an analysis template. The rollout template defines the details of the application, while the analysis template defines the action to perform when a new version of the application is deployed - in this case, running a basic k6 Test Workflow.
We will start by making version v1 of the weather app available to users. When we're ready, we will roll out version v2. At this point, the experiment will automatically execute the Test Workflow we've configured. Version v2 will only be promoted if the Test Workflow completes successfully, indicating that the new version is stable and ready. If the tests fail, the progression will stop, but version v1 will keep running, preventing any potential disruptions.
This method protects deployments by ensuring that only validated versions are rolled out, which improves dependability and user experience.
For a visual walkthrough of this tutorial, you can watch the accompanying video below before diving into the written instructions and prerequisites.
After meeting the prerequisites, you can launch a target Kubernetes cluster with a configured Testkube agent.
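If you don't already have a cluster to work with, the commands below are one possible way to set one up locally with minikube, install the Argo Rollouts controller, and install the Testkube agent with Helm. Treat this as a sketch: the Helm release name and namespace are assumptions, and connecting the agent to your Testkube environment requires the credentials shown in your own Testkube dashboard. The kubectl-argo-rollouts plugin used later in this post can be installed separately by following the Argo Rollouts documentation.

# Start a local Kubernetes cluster
minikube start

# Install the Argo Rollouts controller
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

# Install the Testkube agent with Helm and connect it to your environment
# using the values from your Testkube dashboard
helm repo add kubeshop https://kubeshop.github.io/helm-charts
helm repo update
helm install testkube kubeshop/testkube --namespace testkube --create-namespace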
You can find all the files used in this blog post in our Testkube examples repo.
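The analysis we create next runs a Test Workflow named basic-k6-workflow, so a workflow with that name needs to exist in your Testkube environment. A minimal sketch of what it could look like is shown below; the inline test script and the target URL are assumptions for illustration, and any existing Test Workflow in your environment will work just as well.

apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
  name: basic-k6-workflow
spec:
  content:
    files:
    - path: test.js
      content: |
        import http from 'k6/http';
        import { check } from 'k6';

        export default function () {
          // Hypothetical target: the in-cluster service created later in this post
          const res = http.get('http://rollout-weather-svc.default.svc.cluster.local');
          check(res, { 'status is 200': (r) => r.status === 200 });
        }
  steps:
  - name: run-k6
    run:
      image: grafana/k6:0.49.0
      args: ["run", "test.js"]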
Argo Rollouts uses Analysis Templates to determine whether a deployment is progressing as desired. The first step is to create this template. It is executed when the rollout progression starts; once the analysis completes successfully, the new version of the app is deployed and traffic is shifted.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: testkube-experiment-analysis
spec:
  metrics:
  - name: run-testkube-workflows
    provider:
      job:
        spec:
          template:
            spec:
              containers:
              - name: execute-testkube
                image: kubeshop/testkube-cli:2.1.19
                env:
                - name: API_TOKEN
                  value: "tkcapi_4"
                - name: ENVIRONMENT_ID
                  value: "tkcenv_8"
                - name: ORGANIZATION_ID
                  value: "tkcorg_f"
                - name: ROOT_DOMAIN
                  value: "testkube.io"
                command:
                - /bin/sh
                - -c
                - |
                  testkube set context \
                    --api-key ${API_TOKEN} \
                    --root-domain ${ROOT_DOMAIN} \
                    --org-id ${ORGANIZATION_ID} \
                    --env-id ${ENVIRONMENT_ID}
                  # Run the desired Testkube workflows during the experiment
                  testkube run tw basic-k6-workflow -f || exit 1
              restartPolicy: Never
          backoffLimit: 2
    successCondition: "result.exitCode == 0" # Exit code 0 for success
    failureCondition: "result.exitCode == 1" # Exit code 1 for failure
    interval: 1m
    count: 1
The template.yaml file above defines a Kubernetes Job that runs the Testkube CLI: it sets the Testkube context using the API token, organization, environment, and root domain, then runs the basic-k6-workflow Test Workflow. The Job's exit code determines whether the analysis succeeds or fails.
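Apply the analysis template to the cluster before creating the rollout that references it. The file path below assumes the layout used in the examples repo; adjust it to wherever you saved the template.

kubectl apply -f argo-rollout/template.yaml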
The next step involves creating the rollout itself. The application that will be deployed will be specified in the YAML file for the Argo Rollout.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout-experiment
spec:
  replicas: 2
  strategy:
    canary:
      steps:
      - setWeight: 50
      - pause: {duration: 10}
      # The second step is the experiment which starts a single canary pod
      - experiment:
          duration: 5m
          templates:
          - name: canary
            specRef: canary
          # This experiment performs its own analysis by referencing an AnalysisTemplate
          # The success or failure of these runs will progress or abort the rollout respectively.
          analyses:
          - name: canary-experiment
            templateName: testkube-experiment-analysis
      - setWeight: 100
      - pause: {duration: 10}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: rollout-experiment
  template:
    metadata:
      labels:
        app: rollout-experiment
    spec:
      containers:
      - name: rollouts-demo
        image: docker.io/atulinfracloud/weathersample:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: rollout-weather-svc
spec:
  selector:
    app: rollout-experiment
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 5000
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rollout-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rollout-weather-svc
            port:
              number: 80
The rollout.yaml file above defines the Rollout itself, with a canary strategy that shifts 50% of traffic, runs the experiment and its Testkube-backed analysis, and then shifts 100% of traffic on success. It also defines a Service exposing the app and an Ingress routing traffic to it.
The rollout can be deployed by executing the following command:
kubectl apply -f argo-rollout/rollout.yaml
rollout.argoproj.io/rollout-experiment created
service/rollout-weather-svc created
ingress.networking.k8s.io/rollout-ingress created
The Argo Rollouts dashboard can be used to verify the successful deployment of the rollout.
Run the following command in a new terminal window:
kubectl argo rollouts dashboard
INFO[0000] Argo Rollouts Dashboard is now available at http://localhost:3100/rollouts
To access the dashboard, navigate to the address provided.
To verify the application, create a tunnel in a new terminal using the `minikube tunnel` command and obtain the service URL using the `minikube service rollout-weather-svc --url` command.
You will observe that the application is running the v1 image, which shows the weather in Hyderabad.
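As a quick check from the command line, you can also call the service URL directly. This assumes the tunnel from the previous step is still running in another terminal.

# Grab the service URL exposed by minikube and call the app
URL=$(minikube service rollout-weather-svc --url)
curl "$URL"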
Let's start the canary deployment to deploy the new version and redirect traffic.
kubectl argo rollouts set image rollout-experiment rollouts-demo=docker.io/atulinfracloud/weathersample:v2
As soon as we execute this command, the experiment begins, and we can verify its results on the Argo dashboard.
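If you prefer the terminal over the dashboard, the rollout's progress through the canary steps can also be followed with the Argo Rollouts kubectl plugin:

kubectl argo rollouts get rollout rollout-experiment --watch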
You can simultaneously check the status of the Experiments and Analysis.
$ kubectl get Experiments
NAME                               STATUS    AGE
rollout-experiment-c8b88c86c-2-2   Running   23s

$ kubectl get AnalysisRuns
NAME                                                  STATUS    AGE
rollout-experiment-c8b88c86c-2-2-canary-experiment    Running   27s
During the execution of this AnalysisRun, you can verify its status by executing the command `kubectl describe AnalysisRun rollout-experiment-c8b88c86c-2-2-canary-experiment`.
We can observe that the AnalysisRun executed successfully. If you access your Testkube dashboard, you'll see that the Test Workflow was also created and executed successfully.
Now that the AnalysisRun is complete, the Experiment's status will be successful.
Finally, if you access the application via the same URL, you will notice that the new version of the image has been deployed. It now shows the weather in New York rather than Hyderabad.
Additionally, if you check the status on the Argo Rollouts dashboard, you'll notice that all of the checks were successful, the new version of the application was deployed, and traffic was shifted so that users now see it.
With a progressive delivery approach, canary deployments are made to roll out updates or new features incrementally, reducing risk and enabling real-time feedback collection. By initially deploying updates to a small group of users, you can test the update in production, monitor its impact, and confirm its stability before rolling it out to the larger user base. The following are a few best practices for testing in Canary deployments:
When testing in a canary deployment, we roll out new features or updates to a small subset of users first and closely monitor the application before increasing traffic. This reduces risk by limiting exposure to potential issues before they reach the wider user base.
During the canary release, we need to focus on key metrics and performance indicators, like response time, error rates, CPU usage, and many others. Monitoring these metrics lets us quickly identify performance issues or system inconsistencies.
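In Argo Rollouts, this kind of metric gate can be expressed as an additional AnalysisTemplate that queries your monitoring stack instead of (or alongside) a test run. The sketch below assumes a Prometheus instance at the address shown and a hypothetical http_requests_total metric labelled by app and status; adjust both to your environment.

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  metrics:
  - name: error-rate
    interval: 1m
    count: 5
    # Fail the analysis if more than 5% of requests return a 5xx response
    successCondition: result[0] < 0.05
    provider:
      prometheus:
        address: http://prometheus.monitoring.svc.cluster.local:9090
        query: |
          sum(rate(http_requests_total{app="rollout-experiment", status=~"5.."}[1m]))
          /
          sum(rate(http_requests_total{app="rollout-experiment"}[1m]))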
Test failures are unavoidable; your deployment strategy should ensure that they are handled seamlessly. Automated rollback approaches are particularly important in this context. Automated rollback mechanisms should be implemented to revert to the previous stable version if the canary release encounters issues. The quick rollback policy ensures minimal user disruption and a faster recovery from any unexpected issues.
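Argo Rollouts aborts the rollout automatically when an AnalysisRun fails, keeping traffic on the stable version; you can also intervene manually with the kubectl plugin. For example:

# Abort an in-progress rollout and keep serving the stable version
kubectl argo rollouts abort rollout-experiment

# Roll back if a bad version was already fully promoted
kubectl argo rollouts undo rollout-experiment

# Re-run the canary steps once the issue is fixed
kubectl argo rollouts retry rollout rollout-experiment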
Building on what we learned from the previous post on progressive delivery, we looked at using Argo Rollouts with Testkube for canary deployments. We saw how to implement Testkube as part of our rollout process to automate the testing and validation of each release stage, ensuring that only stable, fully tested versions of your application reach production.
With the hands-on example in this post, you now have a practical understanding of implementing canary deployment with automated testing to ensure safer and smoother rollouts.
Get started with Testkube today to try this example, or use one of our examples to experience the power of Testkube. If you need any help, reach out to us on Slack or contact us to set up a personalized demo.
Testkube is a test execution and orchestration framework for Kubernetes that works with any CI/CD system and testing tool you need. It empowers teams to deliver on the promise of agile, efficient, and comprehensive testing by leveraging the capabilities of Kubernetes to eliminate CI/CD bottlenecks and streamline your testing workflow. Get started with Testkube's free trial today!