How to Run Tests as Kubernetes Jobs

Oct 1, 2025
Katie Petriella
Senior Growth Manager
Testkube

Running tests as Kubernetes jobs gives you real environment parity, elastic scaling, and centralized results. Learn how to move test execution natively into your cluster.

Executive Summary

Your tests are already running as containers. They just do not know it yet.

If your applications run in Kubernetes, every deployment, every service, every background task is a container being scheduled, managed, and observed by the cluster. The cluster handles resource allocation, retries, networking, secrets, and logging. That is exactly what a test execution environment needs.

Most teams are not using it that way. Tests still run as steps inside CI pipelines, on dedicated runners, or on engineer laptops before a push. The cluster is right there. The tests run somewhere else. This post is about closing that gap.

What a Kubernetes job actually is

A Kubernetes Job is a workload resource that runs a container to completion. Unlike a Deployment, which runs continuously, a Job runs once, finishes, and reports success or failure. Kubernetes handles scheduling the pod, retrying on failure up to a configured limit, and cleaning up after completion.
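The run-to-completion behavior described above maps onto a few fields of the Job spec. A minimal sketch (the image name and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: smoke-tests                 # hypothetical name
spec:
  backoffLimit: 2                   # retry a failed pod up to 2 times
  ttlSecondsAfterFinished: 600      # garbage-collect the Job 10 minutes after it finishes
  template:
    spec:
      restartPolicy: Never          # let the Job controller own retries, not the kubelet
      containers:
        - name: tests
          image: registry.example.com/smoke-tests:latest   # placeholder test image
          command: ["npm", "test"]
```

The container's exit code is the test result: zero marks the Job succeeded, non-zero triggers a retry until `backoffLimit` is exhausted.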

CronJobs extend this with a schedule: run this Job every hour, every night, every Monday morning.
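Wrapping the same pod template in a CronJob adds only the schedule. A sketch of a nightly run, again with a placeholder image:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-tests
spec:
  schedule: "0 2 * * *"             # every night at 02:00
  concurrencyPolicy: Forbid         # skip a run if the previous one is still going
  jobTemplate:
    spec:
      backoffLimit: 2
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: tests
              image: registry.example.com/smoke-tests:latest   # placeholder
              command: ["npm", "test"]
```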

That model maps directly onto how tests should work. A test suite runs to completion, produces a result, and exits. Kubernetes already has the primitive. The question is whether your testing layer knows how to use it.

Why running tests inside CI pipelines creates environment mismatches — and what decoupled test execution changes.

Read: Tests outside CI →

Why running tests as jobs matters

When tests run inside CI pipeline steps, they inherit the constraints of the pipeline runner: a single execution context, shared compute, limited parallelism, and no native access to the cluster's networking or secrets.

Running tests as Kubernetes jobs changes all of that.

| Capability | CI pipeline step | Kubernetes job |
|---|---|---|
| Networking | External: requires tunnels or exposed endpoints | Internal: same namespace as your services |
| Secrets | Managed separately in pipeline YAML | Native: ConfigMaps and Secrets work as-is |
| Scaling | Linear: add more runners | Elastic: bin-packed across cluster nodes |
| Output | Scattered across pipeline logs | Structured: logs, artifacts, exit codes via standard K8s patterns |

A test pod in the same namespace as your application can reach it directly, no port forwarding needed. Secrets inject the same way they do for any other workload. Ten suites in parallel means ten jobs. Kubernetes handles placement. There is no separate runner fleet to provision or maintain.
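As a sketch, a test Job in the application's namespace can target the service by its in-cluster DNS name and consume the same Secret the application uses (the namespace, service, and secret names here are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: api-integration-tests
  namespace: staging                       # same namespace as the service under test
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tests
          image: registry.example.com/api-tests:latest   # placeholder
          env:
            - name: API_URL
              value: http://orders-api:8080    # in-cluster service DNS; no port forwarding
            - name: API_TOKEN
              valueFrom:
                secretKeyRef:                  # the Secret the app already mounts
                  name: orders-api-credentials
                  key: token
```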

The DIY path and where it breaks

Some teams get here by building it themselves. The typical path: write a container for the test suite, create a Job manifest, apply it with kubectl, check the logs manually, delete the pod when done.

That works for one test. It does not scale.

The problems start when you need to:

  • Run multiple suites across environments
  • Parameterize tests with environment-specific config
  • Collect artifacts reliably after each run
  • Trigger tests from CI, schedules, or events
  • Surface pass/fail results somewhere a human can act on them

At that point you are building a test orchestration system. You are writing controllers, managing cleanup, handling retries, aggregating logs, and maintaining all of it as the cluster evolves.

The underlying primitive, the Kubernetes Job, is right. The layer on top is what takes real engineering effort to build and maintain.

How platform teams are using test orchestration to standardize execution across every team, environment, and tool.

Read: Test unification →

What Testkube adds

Testkube is a test orchestration platform that runs as a native Kubernetes operator inside your own cluster. It uses Kubernetes Jobs as its execution primitive, which means tests run exactly as described above: as pods, inside the cluster, with full access to the cluster's networking, secrets, and compute.

What Testkube adds on top is the orchestration layer teams would otherwise build themselves.

Test workflows are version-controlled and reusable across environments. A single workflow definition can run against dev, staging, and production with environment-specific configuration injected at runtime. No need to duplicate manifests per environment.
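In Testkube this is expressed as a TestWorkflow resource. The sketch below assumes a k6 script in a Git repository and an `apiUrl` config value injected per environment; the repository URL is a placeholder, and the exact field names should be checked against the TestWorkflow reference for your Testkube version:

```yaml
apiVersion: testworkflows.testkube.io/v1
kind: TestWorkflow
metadata:
  name: k6-load-test
spec:
  config:
    apiUrl:                                  # overridden per run: dev, staging, prod
      type: string
      default: "http://orders-api.dev:8080"
  content:
    git:
      uri: https://github.com/example-org/tests   # hypothetical repo
      paths:
        - k6/load.js
  steps:
    - name: run-k6
      run:
        image: grafana/k6:latest
        args: ["run", "/data/repo/k6/load.js"]
        env:
          - name: API_URL
            value: "{{ config.apiUrl }}"     # environment-specific value, same definition
```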

Triggers come from anywhere. Testkube integrates with GitHub Actions, GitLab CI, Jenkins, Argo, and other CI/CD tools so that pipelines can trigger test runs via API or CLI. Tests can also be scheduled directly, triggered by Kubernetes events, or run on demand. The trigger and the execution are separate concerns.
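As an illustration of the CI side, a GitHub Actions job can authenticate the Testkube CLI and trigger the workflow by name. The secret names are placeholders, and the action inputs should be verified against the current `kubeshop/setup-testkube` documentation:

```yaml
name: e2e
on: [push]
jobs:
  run-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: kubeshop/setup-testkube@v1     # installs and authenticates the testkube CLI
        with:
          organization: ${{ secrets.TK_ORG_ID }}     # hypothetical secret names
          environment: ${{ secrets.TK_ENV_ID }}
          token: ${{ secrets.TK_API_TOKEN }}
      - run: testkube run testworkflow k6-load-test --watch   # execution happens in-cluster
```

The pipeline only triggers and watches; the test itself runs as a Job inside the cluster, not on the CI runner.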

Results are centralized. Every run produces logs, artifacts, and a structured result in the Testkube dashboard. Pass/fail history, execution trends, and failure details are visible in one place across all environments and all test types, regardless of which tool ran them.

The test suite itself does not change. Playwright tests, k6 scripts, JMeter configurations, Postman collections: all run as-is. Testkube wraps them in a Kubernetes Job and manages the rest.

Run your first test as a Kubernetes job

Start a free trial to explore how teams run, schedule, and observe tests natively inside their containerized environments.

Start free trial

What this looks like for a platform team

A platform team running 80+ microservices across Kubernetes was triggering most of its tests manually. QA engineers ran suites before releases, SRE ran health checks during incidents, and developers ran integration tests ad hoc. There was no shared execution layer, no central results view, and no way to run tests without the right person available.

After adopting Testkube, test workflows were defined once and stored in version control. Any team member could trigger a run from the dashboard or CLI without needing to know which tool the test used or how to configure the environment. Tests ran as Kubernetes jobs inside the cluster, using the same Argo and Spinnaker integrations the team already operated. During a weekend P1 incident, an SRE pulled up a saved test workflow, executed it, and had a quality signal within minutes, without waiting for a QA engineer.

The underlying change was not the tests. It was where and how they ran.

Practical starting point

If you are already running applications in Kubernetes and want to start running tests the same way, the path is straightforward:

  1. Install the Testkube agent into your cluster via Helm
  2. Connect it to the Testkube control plane
  3. Point it at an existing test — a k6 script, a Playwright suite, a Postman collection
  4. Create a test workflow and run it once from the dashboard to confirm it executes correctly as a Kubernetes job

Adding schedules, environment parameters, and CI triggers is incremental from there. The test does not change. The execution model does.


About Testkube

Testkube is a cloud-native continuous testing platform for Kubernetes. It runs tests directly in your clusters, works with any CI/CD system, and supports every testing tool your team uses. By removing CI/CD bottlenecks, Testkube helps teams ship faster with confidence.
Explore the sandbox to see Testkube in action.