With the advent of CI/CD tooling and workflows, it felt natural to use CI/CD for running tests as well. Testing is part of the software delivery life cycle after all, and automating test executions as part of builds and deployments makes sense at a conceptual level. Unfortunately, many CI/CD tools put very little emphasis on the specific needs of testing and QA. To them, testing is just another task to run in the pipeline, which often makes testing support in CI/CD tools feel more like an afterthought than a primary objective.
Add in the common scenario where multiple CI/CD tools are used within the same organization: Jenkins for building your Java microservices backend, GitHub Actions for building (and deploying?) your frontend applications, and maybe even something like Argo for adopting a GitOps approach to deploying your applications to Kubernetes. Not only is testing often an afterthought, that afterthought is now spread across multiple tools! What could possibly go wrong?
Let’s drill down into six specific needs of a successful test-automation strategy and how relying on CI/CD tooling will often send you into the testing-swamp-of-no-return (TSONR, you read it here first!).
One of the last things you want to hear at the end of the day is “Our CI/CD tool doesn’t support your testing framework” or “We can’t run multiple versions of your testing tool.”
Many CI/CD tools rely on plugins to support a specific testing tool or version, which is no guarantee of consistency. Their fallback is usually some kind of scripting environment, which might do the job but adds complexity and maintenance overhead, making it hard to scale and diversify your testing efforts.
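To make that concrete, here is a hedged sketch of the kind of wrapper script such a fallback typically grows into. The tool choice (k6), the Docker-based execution and the version numbers are illustrative assumptions, not part of any CI/CD tool; the point is that none of this is provided for you, so your team owns every line of it.

```python
import os
import subprocess
import sys

# Versions your pipelines are allowed to use (purely illustrative).
SUPPORTED_VERSIONS = {"0.49.0", "0.50.0"}

def run_load_test(tool_version: str, script_path: str) -> int:
    """Run a k6 script in a pinned container image and return its exit code."""
    if tool_version not in SUPPORTED_VERSIONS:
        raise ValueError(f"Unsupported k6 version: {tool_version}")
    # In practice this grows quickly: proxy handling, per-CI output
    # normalization, artifact upload, retries, secrets handling, and so on.
    image = f"grafana/k6:{tool_version}"
    mount = f"{os.path.abspath(script_path)}:/test.js"
    cmd = ["docker", "run", "--rm", "-v", mount, image, "run", "/test.js"]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(run_load_test(sys.argv[1], sys.argv[2]))
```

Multiply this by every testing tool and every CI/CD system in your organization and the maintenance overhead becomes obvious.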
Running the same set of tests should give consistent results, obviously. Unfortunately, running tests in a multi-CI/CD-tooling environment often produces results that vary depending on where (and how) you run them. Different CI/CD tools have different runtimes, environments and infrastructure, making it hard to guarantee the consistency of your testing efforts, especially when it comes to non-functional tests such as performance, security and compliance testing. Add to this that tests run locally during development are often executed “manually” with the corresponding testing tool, in an environment that is usually far from a testing or production environment.
The ability to run tests outside your CI/CD pipelines, both manually (for example, load tests) and in response to other system events (such as a Kubernetes event), is a must in a distributed and diversified infrastructure, ensuring that both DevOps and QA teams can (re)run tests whenever needed.
CI/CD tools will rarely have dedicated functionality that caters to either of these needs in the context of test execution. They might allow you to launch different “workers,” but any logic beyond that, specific to the testing tool at hand, has to be managed by custom scripts and/or third-party solutions.
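As an illustration of what “in response to other system events” looks like when you build it yourself, here is a minimal, hedged sketch using the official Kubernetes Python client. The namespace, the deployment-watching logic and the run_smoke_tests helper are all assumptions for the example, not part of any CI/CD or Testkube API.

```python
import subprocess

from kubernetes import client, config, watch  # pip install kubernetes

def run_smoke_tests(deployment_name: str) -> None:
    # Hypothetical helper: invoke whatever testing tool you use for smoke tests.
    print(f"Deployment {deployment_name} changed, running smoke tests...")
    subprocess.run(["k6", "run", "smoke.js"], check=True)

def main() -> None:
    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    apps = client.AppsV1Api()
    watcher = watch.Watch()
    # Re-run smoke tests whenever a deployment in the staging namespace changes.
    for event in watcher.stream(apps.list_namespaced_deployment, namespace="staging"):
        if event["type"] == "MODIFIED":
            run_smoke_tests(event["object"].metadata.name)

if __name__ == "__main__":
    main()
```

This is exactly the kind of glue code that tends to live in one team’s repository, invisible to everyone else, and exactly the kind of logic a dedicated test-orchestration layer takes off your plate.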
Unfortunately, most CI/CD tools have little inherent knowledge of test results at a higher level. They might make it easy to see the log/artifact output of each individual test, but aggregating quality metrics such as pass/fail ratios and execution counts across all your testing tools is not their concern. Getting easy access to specific test-execution results and artifacts for in-depth troubleshooting of failed tests will often require a fair amount of scripting on your part, or exporting these to external tooling for deeper analysis.
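That “fair amount of scripting” usually ends up looking something like the following hedged sketch, which sums pass/fail numbers across JUnit-style XML reports. The reports/ directory layout and the report format are assumptions that will differ per testing tool and pipeline.

```python
import glob
import xml.etree.ElementTree as ET

def aggregate(report_glob: str = "reports/**/*.xml") -> dict:
    """Sum pass/fail metrics across all JUnit-style XML reports found."""
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for path in glob.glob(report_glob, recursive=True):
        root = ET.parse(path).getroot()
        # JUnit reports use either a <testsuite> root or a <testsuites> wrapper.
        suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
        for suite in suites:
            for key in totals:
                totals[key] += int(suite.get(key, 0))
    passed = totals["tests"] - totals["failures"] - totals["errors"] - totals["skipped"]
    totals["pass_ratio"] = passed / totals["tests"] if totals["tests"] else 0.0
    return totals

if __name__ == "__main__":
    print(aggregate())
```

Even then, you still have to get every pipeline in every CI/CD tool to produce reports in a compatible format and ship them to a common location before a script like this can tell you anything useful.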
Once test automation is handed over to the team(s) managing CI/CD pipelines, QA often has little control or insight into that automation, which can considerably slow down the evolution of testing in your CI/CD pipelines.
CI/CD tooling rarely has the role-based access control granularity required to give testers access to just the testing aspects of build pipelines, so QA-initiated improvements or changes related to test execution often have to go through a tedious process before they are implemented, causing everything from frustration within teams to gaps in test coverage.
Alternatively, QA is given access to areas of the build infrastructure they should not have access to, which can introduce security concerns in a more regulated organization.
But how do you address all of these challenges and decouple test execution from your CI/CD pipelines without sacrificing the value of testing in CI/CD itself?
Testkube is a test-orchestration platform for CI/CD specifically built to solve the above problems (and more):
Testkube always runs tests in your own infrastructure, helping you manage both the costs and the security aspects of test executions. The Testkube Dashboard can either be hosted in the cloud or run on-prem (air-gapped if needed), giving you both an easy-to-start and a security-compliant alternative as you go from an evaluation to a production setup.
If you’re using at least one CI/CD tool in your organization, you could look into creating micro-pipelines specific to testing and call/reuse those from your existing build pipelines. This may help you with points 3, 5 and 6 above.
Unfortunately, the level of support for the other points will vary greatly depending on the CI/CD tool you are using and how much effort/time you are willing to put into custom script authoring and maintenance.
Testkube is a test execution and orchestration framework for Kubernetes that works with any CI/CD system and testing tool you need. It empowers teams to deliver on the promise of agile, efficient and comprehensive testing by leveraging the capabilities of Kubernetes to eliminate CI/CD bottlenecks and perfect your testing workflow. Get started with Testkube's free trial today!