
Automating Blue-Green Deployments with Argo Rollouts

Published July 20, 2025 · Bruno Lopes, Product Leader, Testkube


TL;DR

  1. The Blue-Green deployment strategy runs two identical environments, Blue (the current live version) and Green (the new version), allowing instant traffic switching and quick rollback if issues arise, minimizing downtime and user impact.
  2. Automated testing integration uses Testkube to execute Test Workflows (such as k6 performance tests) whenever a new version is deployed, ensuring only thoroughly validated code reaches production users.
  3. Safe deployment progression requires all tests to pass before traffic switches from the Blue to the Green environment; if tests fail, the deployment stops and users continue using the stable Blue version without interruption.
  4. The complete setup includes an AnalysisTemplate for test execution, a Rollout definition for the deployment configuration, and monitoring through the Argo Rollouts dashboard to track deployment status and test results.
  5. Best practices emphasize testing in isolation before traffic shifts, monitoring system resources during transitions, and using feature flags for gradual exposure to minimize risk during deployments.

In our previous blog post, we discussed using Testkube and Argo Rollouts for Canary deployments, a deployment method frequently used to implement a phased rollout of new application versions. 

In this blog post, we look at Blue-Green deployments and how to configure them using Argo Rollouts and Testkube.

Blue-Green Deployment with Argo Rollouts and Testkube

Blue-Green deployment is a release management strategy that reduces downtime and risk by running two identical production environments (Blue and Green). The Blue environment has the application's current live version, whereas the Green environment contains the newly deployed version.

During this process, the updated application version is initially deployed in the Green environment. Once fully tested and approved, traffic is transferred from the Blue to the Green environment, bringing the new version live. If any problems arise, the traffic may readily revert to the Blue environment, allowing for a speedy recovery. This strategy minimizes user inconvenience and reduces the possibility of downtime during the release cycle.

Check out our previous blog post to learn more about Blue-Green deployment and other progressive delivery techniques.

Combining Blue-Green deployments with Argo Rollouts and Testkube provides a streamlined method for updating apps while maintaining stability and minimizing downtime. Let's examine a use case to understand how these tools can assist you in achieving Blue-Green releases, where you deploy your new version in a separate environment (Green) while maintaining live traffic in the previous version (Blue). Once the new version has passed all tests executed via Test Workflows, you can seamlessly and automatically transfer all traffic to the Green environment, ensuring a safe and regulated transition without affecting users.

Using Testkube with Argo Rollouts for Blue-Green Deployments

In this scenario, we'll show you how to set up a Blue-Green deployment using Argo Rollouts and Testkube. We will deploy our weather application in two versions: v1, which displays the weather in Hyderabad, and v2, which displays the weather in New York.

We will first create a Rollout and an AnalysisTemplate to facilitate the deployment process. The Rollout sets up the application, while the AnalysisTemplate determines what should happen when a new version is released: in this case, running a simple k6 Test Workflow.

We will start by releasing version v1 of the weather app to users. When v2 is pushed as an update, the Test Workflow is executed automatically, and the deployment of v2 proceeds only if the workflow completes successfully, indicating that the new version is stable and ready for production. If the tests fail, the progression pauses, but version v1 continues to run without interruption.

This strategy improves user experience and reliability while protecting the deployment process by ensuring that only thoroughly tested versions are released.

For a visual walkthrough of this tutorial, you can watch the accompanying video below before diving into the written instructions and prerequisites.

Prerequisites

  • Get a Testkube account.
  • Kubernetes cluster - we’re using a local Minikube cluster.
  • Testkube Agent configured on the cluster.
  • Configure Argo Rollouts.
  • Make sure that a test workflow has been deployed in your cluster. In this case, we'll use a k6 test workflow, but you can create one based on your application and use case. You can also use ArgoCD to sync your Test Workflows to your cluster.

After meeting the prerequisites, you should have a target Kubernetes cluster running with the Testkube Agent configured.

You can find all the files used in this blog post in our Testkube examples repo. 
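If you don't already have a Test Workflow deployed, the following is a minimal sketch of what the basic-k6-workflow referenced later in this post could look like. The script, its path, and the target URL (here the preview service that rollout.yaml creates further down) are illustrative placeholders to adapt to your own application.

apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
  name: basic-k6-workflow
spec:
  content:
    files:
    - path: /data/test.js
      content: |
        import http from 'k6/http';
        import { check } from 'k6';
        export default function () {
          // Placeholder target: the Green (preview) service inside the cluster
          const res = http.get('http://rollout-bluegreen-preview.default.svc.cluster.local');
          check(res, { 'status is 200': (r) => r.status === 200 });
        }
  steps:
  - name: Run k6 smoke test
    container:
      image: grafana/k6:0.49.0
    shell: k6 run /data/test.js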

Creating an Analysis Template

The first step is to create an AnalysisTemplate. The analysis defined in this YAML file is executed whenever the rollout progresses to a new version of the image.

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
 name: testkube-experiment-analysis
spec:
 metrics:
 - name: run-testkube-workflows
   provider:
     job:
       spec:
         template:
           spec:
             containers:
             - name: execute-testkube
               image: kubeshop/testkube-cli:latest
               env:
               - name: API_TOKEN
                 value: "tkcapi_4"
               - name: ENVIRONMENT_ID
                 value: "tkcenv_8"
               - name: ORGANIZATION_ID
                 value: "tkcorg_f"
               - name: ROOT_DOMAIN
                 value: "testkube.io"
               command:
               - /bin/sh
               - -c
               - |
                 testkube set context \
                   --api-key ${API_TOKEN} \
                   --root-domain ${ROOT_DOMAIN} \
                   --org-id ${ORGANIZATION_ID} \
                   --env-id ${ENVIRONMENT_ID}


                 # Run the desired Testkube workflows during the experiment
                 testkube run tw basic-k6-workflow -f || exit 1


             restartPolicy: Never
         backoffLimit: 2
   successCondition: "result.exitCode == 0"  # Exit code 0 for success
   failureCondition: "result.exitCode == 1"  # Exit code 1 for failure
   interval: 1m
   count: 1

The template.yaml file specifies the following:

  • An analysis is run as part of the rollout's pre-promotion phase, before traffic is switched to the new version.
  • It defines the Testkube CLI commands that execute the Test Workflow already deployed on the cluster. Configure the API token, organization ID, and environment ID needed to run the workflow; to generate the API token, refer to the API token management page.
  • Finally, we set the successCondition to result.exitCode == 0. Only when the Testkube Workflow completes successfully does the analysis pass, allowing the rollout to continue and traffic progression to take place for the new version.
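With the template defined, you can apply it to the cluster and confirm it is registered before wiring it into the Rollout. This is a standard kubectl sketch, assuming the file is saved as template.yaml:

kubectl apply -f template.yaml
kubectl get analysistemplate testkube-experiment-analysis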

Creating a Rollout Template

The next step is to create a rollout.yaml for the Argo Rollout, where we'll specify which application will be deployed.

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
 name: rollout-bluegreen
spec:
 replicas: 2
 strategy:
   blueGreen:
     activeService: rollout-weather-svc
     previewService: rollout-bluegreen-preview
     autoPromotionEnabled: false
     prePromotionAnalysis:
       templates:
       - templateName: testkube-experiment-analysis
 revisionHistoryLimit: 2
 selector:
   matchLabels:
     app: rollout-bluegreen
 template:
   metadata:
     labels:
       app: rollout-bluegreen
   spec:
     containers:
     - name: rollouts-demo
       image: docker.io/atulinfracloud/weathersample:v1
       imagePullPolicy: Always
       ports:
       - containerPort: 5000


---
apiVersion: v1
kind: Service
metadata:
 name: rollout-weather-svc
spec:
 selector:
   app: rollout-bluegreen
 ports:
   - protocol: "TCP"
     port: 80
     targetPort: 5000
 type: NodePort


---
apiVersion: v1
kind: Service
metadata:
 name: rollout-bluegreen-preview
spec:
 selector:
   app: rollout-bluegreen
 ports:
   - protocol: "TCP"
     port: 80
     targetPort: 5000
 type: NodePort


---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
 name: rollout-ingress
 annotations:
   kubernetes.io/ingress.class: nginx
spec:
 rules:
 - http:
     paths:
     - path: /
       pathType: Prefix
       backend:
         service:
           name: rollout-weather-svc
           port:
             number: 80

This Argo Rollouts definition includes the following:

  • Creates a Rollout with a Blue-Green deployment strategy: rollout-weather-svc is the active (Blue) service, rollout-bluegreen-preview is the preview (Green) service, and autoPromotionEnabled is set to false.
  • The prePromotionAnalysis section refers to the AnalysisTemplate in which Testkube is configured, so the tests run before traffic is switched to the new version.
  • It also specifies which image will be used to deploy the pods. In this case, it is “weathersample:v1”.
  • The same file also specifies the services and ingress required to access the application from a browser.

Initiate Blue-Green Deployment

After you have created the rollout, you can deploy it using the following command:

kubectl apply -f rollout.yaml
rollout.argoproj.io/rollout-bluegreen created
service/rollout-weather-svc created
service/rollout-bluegreen-preview created
ingress.networking.k8s.io/rollout-ingress created

After successfully deploying the rollout, you can validate it using the Argo Rollouts dashboard.

Open a new terminal window and execute the following command:

kubectl argo rollouts dashboard
INFO[0000] Argo Rollouts Dashboard is now available at http://localhost:3100/rollouts

To access the dashboard, navigate to the address provided.

To validate the application, open a new terminal and create a tunnel by running minikube tunnel. Then, get the URL of the service with minikube service rollout-weather-svc --url.

You will see that it is running the v1 version of the image, displaying the weather in Hyderabad.

Let's start the Blue-Green deployment progression to deploy the new version and redirect traffic.

kubectl argo rollouts set image rollout-bluegreen rollouts-demo=docker.io/atulinfracloud/weathersample:v2

As soon as we execute this command, the pre-promotion analysis begins, and we can follow its progress on the Argo Rollouts dashboard as well as in the CLI:

Name:        	rollout-bluegreen
Namespace:   	default
Status:      	◌ Progressing
Message:     	active service cutover pending
Strategy:    	BlueGreen
Images:      	docker.io/atulinfracloud/weathersample:v1 (stable, active)
             	docker.io/atulinfracloud/weathersample:v2 (preview)
Replicas:
  Desired:   	2
  Current:   	4
  Updated:   	2
  Ready:     	2
  Available: 	2

NAME                                                                  	KIND     	STATUS     	AGE	INFO
⟳ rollout-bluegreen                                                   	Rollout  	◌ Progressing  50m    
├──# revision:14                                                                                        	 
│  ├──⧉ rollout-bluegreen-55b5fbb8cc                                  	ReplicaSet   ✔ Healthy  	47m	preview
│  │  ├──□ rollout-bluegreen-55b5fbb8cc-8pt4x                         	Pod      	✔ Running  	45s	ready:1/1
│  │  └──□ rollout-bluegreen-55b5fbb8cc-c7xbg                         	Pod      	✔ Running  	45s	ready:1/1
│  └──α rollout-bluegreen-55b5fbb8cc-14-pre                           	AnalysisRun  ◌ Running  	40s    
│ 	└──⊞ 922ce1c9-aba3-4317-8e15-6930cad22a1e.run-testkube-workflows.1  Job      	◌ Running  	40s    
├──# revision:13                                                                                        	 
│  ├──⧉ rollout-bluegreen-c94c64fdb                                   	ReplicaSet   ✔ Healthy  	50m	stable,active
│  │  ├──□ rollout-bluegreen-c94c64fdb-6cfs6                          	Pod      	✔ Running  	8m25s  ready:1/1
│  │  └──□ rollout-bluegreen-c94c64fdb-fcc88                          	Pod      	✔ Running  	8m25s  ready:1/1
│  └──α rollout-bluegreen-c94c64fdb-13-pre                            	AnalysisRun  ✔ Successful   8m19s  ✔ 1
│ 	└──⊞ 45768c69-3242-40fd-883d-106358ed7cca.run-testkube-workflows.1  Job      	✔ Successful   8m19s 

Because the analysis is set up to start as soon as the progression begins, you can also see its status in the dashboard, as displayed below.

Simultaneously, you can monitor the status of the Experiments and AnalysisRuns:

$ kubectl get Experiments
NAME                           	STATUS	AGE
rollout-bluegreen-55b5fbb8cc   Running   23s

$ kubectl get AnalysisRuns
NAME                                             	STATUS	AGE
rollout-bluegreen-55b5fbb8cc-2-pre   Running   27s

While this AnalysisRun is executing, you can check its status using the command kubectl describe AnalysisRun rollout-bluegreen-55b5fbb8cc-2-pre.
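If you prefer to stay in the terminal, the Argo Rollouts kubectl plugin can also stream the rollout state while the analysis runs:

kubectl argo rollouts get rollout rollout-bluegreen --watch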

We can see that the AnalysisRun was completed successfully. If you navigate to your Testkube dashboard, you will notice that the Test Workflow was successfully created and executed.
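You can also confirm the run from the Testkube CLI. Assuming your local CLI context points at the same organization and environment used in the AnalysisTemplate (via testkube set context), listing recent executions looks something like this:

testkube get testworkflowexecution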

Finally, if you access the application using the same URL, you will see the new version of the image: it now shows the weather in New York rather than Hyderabad.

The Argo Rollouts dashboard will indicate that all checks have been successful and that the new version of the application has been deployed. Users will now begin to see this version.
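One detail to keep in mind: the rollout.yaml above sets autoPromotionEnabled to false, so depending on your setup the rollout may pause instead of cutting over on its own after a successful analysis. If that happens, you can promote it explicitly with the standard plugin command:

kubectl argo rollouts promote rollout-bluegreen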

When the analysis fails, the progression also fails. Below is the CLI output from a failed run, which leads to an aborted deployment.

Name:        	rollout-bluegreen
Namespace:   	default
Status:      	✖ Degraded
Message:     	RolloutAborted: Rollout aborted update to revision 14: Metric "run-testkube-workflows" assessed Failed due to failed (1) > failureLimit (0)
Strategy:    	BlueGreen
Images:      	docker.io/atulinfracloud/weathersample:v1 (stable, active)
Replicas:
  Desired:   	2
  Current:   	2
  Updated:   	0
  Ready:     	2
  Available: 	2

NAME                                                                  	KIND     	STATUS    	AGE	INFO
⟳ rollout-bluegreen                                                   	Rollout  	✖ Degraded	57m    
├──# revision:14                                                                                       	 
│  ├──⧉ rollout-bluegreen-55b5fbb8cc                                  	ReplicaSet   • ScaledDown  54m	preview,delay:passed
│  └──α rollout-bluegreen-55b5fbb8cc-14-pre                           	AnalysisRun  ✖ Failed  	7m37s  ✖ 1
│ 	└──⊞ 922ce1c9-aba3-4317-8e15-6930cad22a1e.run-testkube-workflows.1  Job      	✖ Failed  	7m37s  
├──# revision:13                                                                                       	 
│  ├──⧉ rollout-bluegreen-c94c64fdb                                   	ReplicaSet   ✔ Healthy 	57m	stable,active
│  │  ├──□ rollout-bluegreen-c94c64fdb-6cfs6                          	Pod      	✔ Running 	15m	ready:1/1
│  │  └──□ rollout-bluegreen-c94c64fdb-fcc88                          	Pod      	✔ Running 	15m	ready:1/1
│  └──α rollout-bluegreen-c94c64fdb-13-pre                            	AnalysisRun  ✔ Successful  15m	✔ 1
│ 	└──⊞ 45768c69-3242-40fd-883d-106358ed7cca.run-testkube-workflows.1  Job      	✔ Successful  15m	

When the analysis fails, the progression stops, and the users are presented with version 1 of the application. 
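After an aborted rollout like this one, the Argo Rollouts plugin offers a couple of ways forward once you have fixed the underlying issue; both of the following are standard plugin commands:

# Retry the update, which re-runs the pre-promotion analysis for the failed revision
kubectl argo rollouts retry rollout rollout-bluegreen

# Or roll the spec back to the previous, known-good revision
kubectl argo rollouts undo rollout-bluegreen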

In this way, you can configure Testkube with Argo Rollouts, manage the progression of your blue-green rollouts, and validate your deployments before they are made available to everyone.

Best Practices for Testing in Blue-Green Deployments

Progressive delivery emerged as an alternative to traditional deployment methods, which frequently release major upgrades all at once, increasing the chance of failure. This approach improves the overall user experience and reduces downtime and risk, making it a popular choice for enterprises looking to continuously improve their software delivery processes. Here are some best practices for testing in Blue-Green deployments using this approach:

Testing in Isolation Before Traffic Shifts

Before switching user traffic, run tests in the Green environment completely separate from production traffic. This may involve security checks, performance validations, and end-to-end tests to ensure that the new version functions properly without affecting actual users.
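With the setup from this post, a simple way to exercise the Green environment in isolation is to point your checks at the preview service before any cutover. For example, assuming the Minikube setup used earlier:

# Get a reachable URL for the preview (Green) service
minikube service rollout-bluegreen-preview --url

# Smoke-test the preview version directly, without touching live traffic
curl -s "$(minikube service rollout-bluegreen-preview --url)"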

Monitor System Resources Before and After the Switch

Before and after the switch, monitor resources such as CPU, memory, and network usage in both Blue and Green environments. This will provide insights into any unexpected resource consumption spikes or performance degradation in the Green environment, allowing you to respond swiftly if necessary.
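A quick way to compare resource usage for the Blue and Green pods in this example is kubectl top, assuming the metrics-server add-on is enabled in your cluster:

# Per-pod CPU and memory for all pods behind the rollout (both revisions share this label)
kubectl top pods -l app=rollout-bluegreen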

Use Feature Flags for Gradual Exposure

Use feature flags to limit the visibility of new features in the Green environment. This enables incremental testing by turning functionality on and off for select user groups, allowing you to ensure stability and usability without putting the entire system at risk.

Summary

Our previous blog post discussed using Argo Rollouts and Testkube to deploy a weather app on a Kubernetes cluster. This post focused on implementing Blue-Green deployments, an essential part of progressive delivery. This approach enables teams to deploy new software versions by running two identical environments: Blue, which manages live traffic, and Green, which tests the new version. Once validated, traffic flows seamlessly to Green, reducing downtime and risk.

Testkube automates the testing and validation process, ensuring the stability of new features prior to full deployment. Whether you're updating a weather app or another platform, Blue-Green deployment ensures a smooth and safe transition.

To learn more about how Testkube works with other Argo tools, check out our other Argo tutorials on the Testkube blog.

Get started with Testkube today to try this example, or use one of our examples to experience the power of Testkube. If you need any help, reach out to us on Slack or contact us to set up a personalized demo. 

Blue-Green Deployment in Kubernetes FAQs

What is Blue-Green Deployment?

Blue-Green Deployment is a release strategy where two identical environments (Blue and Green) are used to reduce downtime and risk during software releases. The Blue environment handles live traffic, while the Green environment hosts the new version. Once the Green version passes testing, traffic is shifted from Blue to Green.

Key characteristics of Blue-Green deployments:

  • Zero downtime: Traffic switches instantly from one environment to another
  • Risk mitigation: Easy rollback if issues are detected after deployment
  • Testing in production: Green environment runs with production data and configuration
  • Resource efficiency: Both environments share the same infrastructure resources
  • Validation phase: Comprehensive testing before traffic cutover

This strategy is particularly effective for mission-critical applications where downtime must be minimized and rollback capabilities are essential.

How does Argo Rollouts support Blue-Green deployments?

Argo Rollouts provides native support for Blue-Green strategies in Kubernetes by managing traffic shifting between active and preview services. It also integrates with AnalysisTemplates to define automated pre-promotion checks before routing traffic to the new version.

Key features of Argo Rollouts for Blue-Green:

  • Automated traffic management: Seamlessly switches traffic between Blue and Green services
  • AnalysisTemplates integration: Runs automated tests and health checks before promotion
  • Rollback capabilities: Quick reversion to previous version if issues are detected
  • Progressive delivery: Supports gradual traffic shifting with canary analysis
  • Custom metrics: Integration with monitoring tools like Prometheus for decision-making

Deployment workflow with Argo Rollouts:

  • Deploy new version to Green environment (preview service)
  • Run automated analysis and tests using AnalysisTemplates
  • Upon successful validation, promote Green to active service
  • Blue environment becomes the new preview for future deployments

How does Testkube integrate with Argo Rollouts for Blue-Green deployments?

Testkube integrates with Argo Rollouts through AnalysisTemplates to trigger test workflows (e.g., k6 tests) before promoting the new version. If tests pass, traffic is switched to the Green environment. If tests fail, the rollout is halted and the Blue version remains active.

Integration workflow:

  • Test execution: Testkube runs comprehensive test suites against the Green environment
    • Performance tests using k6 or JMeter
    • Functional tests with Cypress or Playwright
    • API tests using Postman or REST Assured
  • AnalysisTemplate configuration: Define test criteria and success thresholds
    • Configure successCondition and failureCondition
    • Set test execution timeouts and retry policies
    • Define metrics collection from test results
  • Automated decision making: Argo Rollouts processes test results and determines promotion
    • Successful tests trigger automatic promotion to Green
    • Failed tests abort the rollout and maintain Blue environment
    • Inconclusive results can trigger manual intervention

This integration ensures that only validated, high-quality releases reach production traffic.

What happens if the Testkube analysis fails?

If the Testkube analysis job fails, Argo Rollouts automatically aborts the progression to the Green environment. The application continues serving from the Blue environment, ensuring zero downtime and maintaining reliability.

Failure handling process:

  • Immediate abort: Rollout progression stops when test failure conditions are met
  • Traffic preservation: Blue environment continues handling all production traffic
  • Green environment isolation: Failed Green deployment remains isolated from traffic
  • Notification systems: Alerts are sent to development and operations teams
  • Rollout status update: Argo Rollouts marks the deployment as Degraded or Failed

Recovery options after failure:

  • Fix and retry: Address issues in Green environment and restart analysis
  • Manual promotion: Override automated checks for urgent deployments (with proper approvals)
  • Rollout abort: Completely abandon the current deployment and start fresh
  • Debug mode: Investigate failed tests in the Green environment without affecting traffic

This approach ensures that production stability is never compromised by failed deployments.

What are the best practices for testing in Blue-Green deployments?

Effective testing in Blue-Green deployments requires comprehensive validation strategies and careful monitoring to ensure successful promotions:

  • Run tests in isolation on the Green environment before any traffic cutover
    • Execute full test suites against Green without impacting Blue traffic
    • Use production data mirrors for realistic testing scenarios
    • Validate database migrations and schema changes
  • Monitor CPU, memory, and network metrics before and after the switch
    • Establish baseline performance metrics from Blue environment
    • Compare Green environment resource utilization patterns
    • Set up automated alerts for resource anomalies
  • Use feature flags to gradually expose new features
    • Implement progressive feature rollouts independent of deployment
    • Enable quick feature toggles without full rollbacks
    • A/B testing capabilities for comparing feature performance
  • Automate rollback mechanisms based on test failures
    • Define clear success/failure criteria in AnalysisTemplates
    • Implement health checks and SLA monitoring
    • Configure automatic rollback triggers for critical metrics
  • Continuously observe metrics and logs for anomalies post-deployment
    • Monitor application performance indicators (APM)
    • Track business metrics and user experience KPIs
    • Implement real-time alerting for production issues

About Testkube

Testkube is a test execution and orchestration framework for Kubernetes that works with any CI/CD system and testing tool you need. It empowers teams to deliver on the promise of agile, efficient, and comprehensive testing programs by leveraging all the capabilities of K8s to eliminate CI/CD bottlenecks, perfecting your testing workflow. Get started with Testkube's free trial today.