In dynamic Kubernetes environments, even minor changes to configuration or infrastructure resources can introduce unexpected behavior in applications. Identifying the effect of such changes manually is time-consuming and prone to error, especially at scale. Without automated detection and testing, these changes may go unnoticed until they cause issues in production.
Tools like Argo Events capture Kubernetes resource lifecycle events, such as CREATE, UPDATE, or DELETE, and can detect and respond to a wide range of cluster events. These events can then be fed into cloud-native test orchestration platforms like Testkube, allowing teams to automatically run tests that validate the impact of changes, ensuring faster feedback and safer deployments.
In this blog, we'll demonstrate setting up this event-driven testing workflow using real-world examples. You'll see how configuration changes to Kubernetes resources, such as ConfigMaps or Secrets, automatically trigger the relevant tests, making your deployment process more reliable and efficient.
This integration creates a reactive testing ecosystem where Kubernetes resource modifications automatically trigger validation workflows. Argo Events uses EventSources to watch for specific Kubernetes API server events (ConfigMap updates, Secret rotations, Deployment changes). These events are then processed by Argo Events Sensors, which can trigger a Testkube Test Workflow execution via the Testkube REST API.
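To make the mechanics concrete, the Sensor's HTTP trigger is equivalent to calling the Testkube execution API directly. The sketch below is illustrative only; the organization ID, environment ID, workflow name, and token are placeholders you would replace with your own values:

```shell
# Illustrative only: triggers an execution of a Test Workflow via the Testkube REST API.
# <org-id>, <env-id>, <workflow-name>, and the token are placeholders.
curl -X POST \
  -H "Authorization: Bearer tkcapi_<your-token>" \
  -H "Content-Type: application/json" \
  -d '{"tags": {"source": "manual-trigger"}}' \
  "https://api.testkube.io/organizations/<org-id>/environments/<env-id>/agent/test-workflows/<workflow-name>/executions"
```

Later in this post, the Sensor will issue a request of exactly this shape whenever a matching event occurs.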
Unlike traditional CI/CD pipelines that test code changes, this approach validates infrastructure state changes as they occur in the cluster, ensuring that your changes are tested no matter how they are performed. Testkube's cloud-native architecture allows for distributed test execution across multiple environments simultaneously. The platform can run parallel test workflows—functional tests, performance benchmarks, security scans, and compliance checks—providing comprehensive validation within minutes of a configuration change.
The integration seamlessly fits into GitOps methodologies by monitoring the actual applied state rather than just Git commits or Pull Requests. When ArgoCD or Flux applies manifests to the cluster, Argo Events detects the resulting resource changes and triggers Testkube workflows. This creates a complete feedback loop: Git commit → GitOps operator applies changes → Argo Events detects cluster state changes → Testkube validates the impact → Results feed back to development teams through Testkube Dashboard, Slack notifications, or any observability solution.
In this demo, we will monitor a Kubernetes ConfigMap for changes using Argo Events. When the ConfigMap is updated, a Sensor triggers a webhook to Testkube, which runs a predefined Test Workflow. This automated flow helps validate the impact of configuration changes instantly, ensuring application stability.
Clone the testkube-example GitHub repository and change into the `ArgoEvents` directory to get the demo-related configurations.
For Argo Events to trigger test execution, we will first set up Argo Events on a cluster and configure it. Several Kubernetes resources must be configured as part of the integration:
kubectl create namespace argo-events
namespace/argo-events created
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/eventbus.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/eventsources.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/sensors.argoproj.io created
serviceaccount/argo-events-sa created
clusterrole.rbac.authorization.k8s.io/argo-events-aggregate-to-admin created
clusterrole.rbac.authorization.k8s.io/argo-events-aggregate-to-edit created
clusterrole.rbac.authorization.k8s.io/argo-events-aggregate-to-view created
clusterrole.rbac.authorization.k8s.io/argo-events-role created
clusterrolebinding.rbac.authorization.k8s.io/argo-events-binding created
configmap/argo-events-controller-config created
deployment.apps/controller-manager created
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-events/stable/manifests/install-validating-webhook.yaml
serviceaccount/argo-events-webhook-sa created
clusterrole.rbac.authorization.k8s.io/argo-events-webhook created
clusterrolebinding.rbac.authorization.k8s.io/argo-events-webhook-binding created
service/events-webhook created
deployment.apps/events-webhook created
kubectl apply -n argo-events -f https://raw.githubusercontent.com/argoproj/argo-events/stable/examples/eventbus/native.yaml
eventbus.argoproj.io/default created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argo-events-sa
  namespace: argo-events
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-events-role
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-events-rb
  namespace: default
subjects:
  - kind: ServiceAccount
    name: argo-events-sa
    namespace: argo-events
roleRef:
  kind: Role
  name: argo-events-role
  apiGroup: rbac.authorization.k8s.io
Save the configuration in a file argo-events-sa.yaml and apply it to a cluster.
kubectl apply -f argo-events-sa.yaml
serviceaccount/argo-events-sa unchanged
role.rbac.authorization.k8s.io/argo-events-role created
rolebinding.rbac.authorization.k8s.io/argo-events-rb created
With this, the cluster is ready with Argo Events installed. We have configured a ServiceAccount and RBAC that will allow Argo Events to monitor a resource.
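Before moving on, you can confirm that the Argo Events controller, validating webhook, and EventBus pods are running (pod names and counts will vary in your cluster):

```shell
# List the Argo Events pods; all should reach the Running state.
kubectl get pods -n argo-events
```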
We will now create a ConfigMap and deploy it on the cluster, which will be monitored for changes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  namespace: default
  labels:
    watch: "true"
    component: "testkube"
data:
  key: initial-value-1
Save it in a file sample-configmap.yaml and apply it to a cluster.
kubectl apply -f sample-configmap.yaml
configmap/demo-config created
In this ConfigMap, we have set the label `watch: "true"`, which will be used to filter ConfigMaps when an event occurs.
In this step, verify that the Testkube Dashboard has a Test Workflow configured that will be triggered for execution when the resource update event happens.
Here we have created a Test Workflow using the examples provided by Testkube. In this Test Workflow, `configmap-k6`, a k6 test executes and stores its artifacts. You can run tests using any testing tool supported by Testkube, or Bring Your Own Test (BYOT).
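For reference, a minimal k6 Test Workflow might look like the sketch below. This is an illustration based on Testkube's Test Workflow format, not the exact `configmap-k6` definition from the Testkube examples; the inline script, namespace, and artifact paths are assumptions:

```yaml
# Illustrative sketch of a k6 Test Workflow; the real configmap-k6 definition may differ.
apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
  name: configmap-k6
  namespace: testkube
spec:
  content:
    files:
      - path: /data/test.js
        content: |
          import http from 'k6/http';
          export default function () {
            // Hypothetical endpoint; point this at the service affected by the ConfigMap.
            http.get('https://test-api.example.com/health');
          }
  steps:
    - name: run-k6
      container:
        image: grafana/k6:latest
      shell: k6 run /data/test.js --summary-export /data/summary.json
      artifacts:
        paths:
          - /data/summary.json
```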
We need an `EventSource` that will watch the resource like a ConfigMap for ADD, UPDATE, and DELETE events. Along with that, configure a Sensor that will trigger the Testkube workflow execution when the event happens.
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: configmap-eventsource
  namespace: argo-events
spec:
  template:
    serviceAccountName: argo-events-sa
    container:
      resources:
        requests:
          cpu: "100m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
  eventBusName: default
  resource:
    demo-configmap:
      namespace: default
      group: ""
      version: v1
      resource: configmaps
      eventTypes:
        - ADD
        - UPDATE
        - DELETE
      filter:
        labels:
          - key: watch
            value: "true"
Save it in a file configmap-eventsource.yaml and apply it to a cluster.
kubectl apply -f configmap-eventsource.yaml
eventsource.argoproj.io/configmap-eventsource created
In this EventSource, we have set a filter that watches ConfigMaps in the default namespace carrying the label `watch: "true"` for the events ADD, UPDATE, and DELETE.
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: configmap-webhook-sensor
  namespace: argo-events
spec:
  template:
    serviceAccountName: argo-events-sa
    container:
      resources:
        requests:
          cpu: "100m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
  eventBusName: default
  dependencies:
    - name: configmap-dep
      eventSourceName: configmap-eventsource
      eventName: demo-configmap
      filters:
        data:
          - path: body.metadata.labels.watch
            type: string
            value:
              - "true"
  triggers:
    - template:
        name: testkube-webhook-trigger
        conditions: "configmap-dep"
        http:
          url: https://api.testkube.io/organizations/tkcorg_xxxxxxxxx/environments/tkcenv_9xxxxxxx/agent/test-workflows/configmap-k6/executions
          payload:
            - src:
                dependencyName: configmap-dep
                dataKey: body.metadata.name
              dest: tags.ArgoEventConfigMapUpdate
          method: POST
          headers:
            Content-Type: application/json
          secureHeaders:
            - name: Authorization
              valueFrom:
                secretKeyRef:
                  name: testkube-auth-secret
                  key: TESTKUBE_API_TOKEN
      retryStrategy:
        steps: 3
        duration: 10s
        backoff:
          duration: 6s
          factor: 2
          jitter: 0.1
This will create a Sensor `configmap-webhook-sensor` which:

- Listens for `demo-configmap` events emitted by the `configmap-eventsource` EventSource.
- Filters events so that only ConfigMaps labeled `watch: "true"` trigger the workflow.
- Sends an HTTP POST request to the Testkube API to execute the `configmap-k6` Test Workflow, adding the ConfigMap name as the tag `ArgoEventConfigMapUpdate`.
- Authenticates using a Testkube API token stored in the `testkube-auth-secret` Secret, and retries failed requests with exponential backoff.

The Sensor expects the API token in a Secret, so create it first with your Testkube API token:
kubectl create secret generic testkube-auth-secret --from-literal=TESTKUBE_API_TOKEN="Bearer tkcapi_xxxxxxxxxxxxxxxxxxx" -n argo-events
secret/testkube-auth-secret created
Save the Sensor configuration in a file webhook-sensor.yaml and apply it to a cluster.
kubectl apply -f webhook-sensor.yaml
sensor.argoproj.io/configmap-webhook-sensor configured
With the Sensor configured and connected to EventSource and Testkube webhook, the setup is now ready to respond automatically to ConfigMap changes in our cluster.
Verify using the following command that the EventSource and Sensor are running:
kubectl get eventsource,sensor -n argo-events
NAME                                            AGE
eventsource.argoproj.io/configmap-eventsource   50m

NAME                                          AGE
sensor.argoproj.io/configmap-webhook-sensor   10m
Update the ConfigMap and check the Testkube Dashboard for the execution of the Test Workflow.
kubectl patch configmap demo-config -n default --type merge -p '{"data":{"dummyKey":"test-'"$(date +%s)"'"}}'
configmap/demo-config patched
The execution of the Test Workflow is triggered and completed successfully.
We added a tag in the Sensor configuration to verify that the request originates from Argo Events. Check the execution to see whether the tag was added.
Tag `ArgoEventConfigMapUpdate` is added to the execution along with the name of the ConfigMap, `demo-config`. Similarly, other values related to the resource, such as the event type or the event timestamp, can be added.
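Based on the `payload` mapping in the Sensor, the request body sent to the Testkube execution endpoint would look roughly like this (a sketch, assuming the default way Argo Events builds JSON payloads from `dest` paths):

```json
{
  "tags": {
    "ArgoEventConfigMapUpdate": "demo-config"
  }
}
```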
For troubleshooting, you can check the Sensor logs to verify if the Testkube webhook was triggered, as shown below:
kubectl logs -n argo-events -l sensor-name=configmap-webhook-sensor --tail=2
{"level":"info","ts":"2025-06-18T11:25:41.968855251Z","logger":"argo-events.sensor","caller":"http/http.go:193","msg":"Making a http request...","sensorName":"configmap-webhook-sensor","triggerName":"testkube-webhook-trigger","triggerType":"HTTP","url":"https://api.testkube.io/organizations/tkcorg_b8ddc820d4919590/environments/tkcenv_94cb6305570f69bd/agent/test-workflows/configmap-k6/executions"}
{"level":"info","ts":"2025-06-18T11:25:43.512504542Z","logger":"argo-events.sensor","caller":"sensors/listener.go:449","msg":"Successfully processed trigger 'testkube-webhook-trigger'","sensorName":"configmap-webhook-sensor","triggerName":"testkube-webhook-trigger","triggerType":"HTTP","triggeredBy":["configmap-dep"],"triggeredByEvents":["e09792772c454a038a7b02efeadc86b2"]}
The logs above show that the Testkube webhook was triggered for the Test Workflow `configmap-k6` and processed successfully.
Testkube provides detailed execution logs that allow you to track the webhook execution status and generate reports. Let's look at how to monitor the workflow status of Test Workflow using the Dashboard:
Select the Test Workflows tab, open `configmap-k6`, and view the Log Output:
Testkube Dashboard gathers all the information related to the Test Workflow execution. It also provides you with the capability to process artifacts, track the resource usage for each test execution, compare the duration of execution of each test, and much more.
Integrating Argo Events with Testkube offers a powerful way to make your Kubernetes environment reactive, allowing you to trigger automated tests whenever critical resources like ConfigMaps are updated.
With Testkube, you can monitor detailed test execution metrics, including CPU, memory, network traffic and disk usage across each test run. This visibility helps optimize test performance, troubleshoot bottlenecks, and right-size your testing infrastructure in Kubernetes environments.
In this setup, Argo Events listens for changes (add/update/delete) on specific Kubernetes resources using precise label filters. When a matching event occurs, a webhook is triggered to Testkube, which then runs tests to validate whether the change introduces any regressions or issues.
Get started with Testkube now to implement this approach that provides a GitOps-friendly, event-driven testing pipeline. It bridges the gap between infrastructure changes and application reliability, making it easier to catch issues early, automate responses to changes, and ensure confidence in every deployment. By adopting this approach, teams can shift left on reliability, catch regressions earlier, and scale testing without scaling complexity.
Join the Testkube Slack community to start a conversation, or read the Testkube documentation to start building fault-tolerant, automated test pipelines tailored to your organization's infrastructure.
Testkube is a test execution and orchestration framework for Kubernetes that works with any CI/CD system and testing tool you need. It empowers teams to deliver on the promise of agile, efficient, and comprehensive testing programs by leveraging all the capabilities of K8s to eliminate CI/CD bottlenecks, perfecting your testing workflow. Get started with Testkube's free trial today.