What's New in Testkube - January 2026

Feb 9, 2026
Ole Lensmar
CTO
Testkube

Introducing Testkube AI Agents: native AI agents that analyze failures, correlate data across systems, and propose fixes — all without leaving Testkube.


Executive Summary

Hey everyone! Today we’re excited to announce the January 2026 Testkube release, introducing Testkube AI Agents. This new capability lets you create, manage, and run AI agents directly inside Testkube, with native access to your test workflows, execution data, and historical results.

For those of you who are new here or need a quick refresher, Testkube is a test orchestration platform that runs your automated tests across modern cloud and container environments. Instead of hardcoding tests into CI/CD pipelines, Testkube gives you a central place to run, manage, and observe all your tests, whether you’re using k6, Playwright, Cypress, or any other tool. This makes pipelines simpler and testing more flexible, which is why Testkube is the foundation for the feature we’re introducing today.

Here are the highlights:

  • Native Agent Framework: Host and manage AI agents with direct access to your test workflows, execution data, and historical results
  • Three ready-to-use workflows: Advanced failure analysis, automated remediation, and enhanced external context via MCP
  • Free during preview: Full access to the Testkube AI Agents at no additional cost while we refine the feature
  • Extensible foundation: Build your own custom agents or use our sample templates

The problem we're solving

CI/CD pipelines run tests and surface pass/fail status, but they don't tell you why something failed. That gap eats up engineering time. Think scrolling logs, cross-referencing runs, checking commits, piecing together root causes.

Agentic test orchestration can help, but it requires infrastructure: agent communication (MCP server), a workflow catalog, results collection, an execution engine, and analytics. Testkube already has these pieces. What's been missing is an AI-oriented framework to actually create and run agents that leverage all of this without the complexity of external tools like N8N.io or the scalability limits of desktop IDE frameworks.

Testkube AI Agents

This release introduces Testkube AI Agents, a native capability for hosting, managing, and executing AI agents on Testkube. It connects prompts to major LLM providers and gives agents access to Testkube's workflow catalog, execution data, logs, artifacts, and insights analytics, all via the Testkube MCP server.

Agents don't start from scratch. They have native access to your test workflows, can query execution history, understand which tests are relevant based on code changes, and integrate with external systems (GitHub, Kubernetes, observability tools) through a vast ecosystem of publicly available MCP servers.
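To make this concrete, here's a minimal sketch of an agent-style client talking to a Testkube MCP server, using the Python MCP SDK. The server command, the "list_executions" tool name, and its arguments are illustrative placeholders rather than the actual tool catalog, so treat this as a shape, not a recipe:

```python
# Illustrative sketch only: the server command, tool name, and arguments are
# placeholders; the real Testkube MCP server exposes its own tool catalog.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def recent_executions() -> None:
    # Hypothetical: launch the Testkube MCP server as a stdio subprocess.
    server = StdioServerParameters(command="testkube-mcp-server", args=[])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover which tools the server exposes (workflows, executions, logs, ...).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool call: fetch the latest runs of one workflow.
            result = await session.call_tool(
                "list_executions", {"workflow": "checkout-e2e", "limit": 5}
            )
            print(result.content)

asyncio.run(recent_executions())
```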

Testkube AI agents operate at cluster scale, not IDE scale. They reason over test suites, distributed pipelines, and multi-environment executions where the real complexity and time sinks live.

What you can do with it today

You can build agents for any workflow that requires reasoning about test results. Here are three we've already built that you can start using today. These agents run interactively, so you can ask follow-up questions, request deeper analysis on specific time ranges, or correlate failures with external factors.

Advanced troubleshooting and failure analysis

When a test fails, manual investigation is slow: open logs, scroll hundreds of lines, check previous runs, correlate with environment state, figure out if it's a real bug or flakiness.

A Testkube AI Agent built for Failure Analysis can handle this automatically. It detects patterns across environments and delivers a plain-language summary of what went wrong.
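At its core, an agent like this performs a summarization step over data Testkube already holds. Here's a hedged sketch of that step, assuming the log excerpt and recent execution history have already been fetched (for example via MCP tool calls like the one sketched earlier) and using the OpenAI Python SDK as one example provider; the prompt wording and model choice are placeholders, not Testkube's internals:

```python
# Sketch of the summarization step a failure-analysis agent performs; the
# prompt wording and model are illustrative, not Testkube's implementation.
from openai import OpenAI

def summarize_failure(log_excerpt: str, history: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "You are analyzing a failed test execution.\n"
        "Recent execution history:\n" + history + "\n\n"
        "Log excerpt:\n" + log_excerpt + "\n\n"
        "Explain the likely root cause in plain language and say whether the "
        "failure pattern looks like flakiness or a real regression."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```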

Outcome: Investigation time drops from 30 minutes to under 5. You stay in Testkube instead of context-switching, and catch flaky tests faster.

Automated remediation

Finding the root cause is half the battle. Fixing it means switching to your IDE, pulling code, making changes, creating branches, writing commits, opening PRs. That's 10-30 minutes per fix.

A Testkube AI Agent built for Remediation closes the loop: it analyzes failures, correlates them with recent code changes, generates fixes, and opens pull requests. The best part? It all happens from Testkube.

How it works: agents combine the Testkube MCP Server (for test data) with GitHub or GitLab MCP Servers (for repository access). Link workflows to repositories via annotations, trigger remediation from a failed execution, and review the proposed PR.
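As a rough illustration of the annotation step, this sketch uses the Kubernetes Python client to patch a repository link onto a TestWorkflow resource. The annotation key is a made-up placeholder (check the Testkube docs for the supported key), and the CRD coordinates assume the testworkflows.testkube.io/v1 TestWorkflow resource:

```python
# Hypothetical sketch: the annotation key and value format are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# TestWorkflow CRD coordinates, assuming the testworkflows.testkube.io/v1 resource.
api.patch_namespaced_custom_object(
    group="testworkflows.testkube.io",
    version="v1",
    namespace="testkube",
    plural="testworkflows",
    name="checkout-e2e",
    body={
        "metadata": {
            "annotations": {
                # Illustrative key: points the remediation agent at the repo under test.
                "example.testkube.io/repository": "https://github.com/acme/checkout",
            }
        }
    },
)
```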

The agent doesn't merge anything. It proposes, explains, and hands off. You review and approve. Built-in guardrails keep this safe: scoped tokens, no file deletion, no auto-merge, optional manual approval for sensitive actions.

Outcome: Remediation time drops from 30+ minutes to under 5. Release cycles accelerate.

Cross-System Root Cause Analysis

Test failures don't happen in isolation. A failing API test might trace back to a code change. A flaky test might stem from infrastructure instability. The data you need lives across tools like GitHub, Kubernetes, Datadog, and your test platform.

Testkube AI Agents can pull context from external systems via MCP servers. Troubleshooting agents can access source code changes, infrastructure state, and observability data while analyzing failures: correlating a failure with recent commits, checking cluster health, or pulling monitoring metrics, all in one pass.

Example: a flakiness agent examines test logs, checks GitHub for recent changes to the test code, and surfaces the correlation if a commit modified the failing test. It could also query Kubernetes to see whether any cluster events correlate with unexpected test failures.

Configure by connecting external MCP servers (GitHub, Kubernetes, observability tools), then run agents against failed executions. Optionally create Jira tickets or post to Slack automatically.
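For the GitHub side of that correlation, the query an agent issues looks roughly like the sketch below, which calls the public GitHub commits API directly; the repository, file path, timestamp, and token are placeholders:

```python
# Sketch of the GitHub correlation step: list commits that touched the failing
# test file since the last passing execution. All values here are placeholders.
import requests

def commits_touching(repo: str, path: str, since_iso: str, token: str) -> list[str]:
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/commits",
        params={"path": path, "since": since_iso},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Return short SHAs plus the first line of each commit message.
    return [
        c["sha"][:7] + " " + c["commit"]["message"].splitlines()[0]
        for c in resp.json()
    ]

# Example: suspicious commits since the last green run of the checkout suite.
print(commits_touching("acme/checkout", "tests/e2e/checkout.spec.ts",
                       "2026-01-10T00:00:00Z", token="<github-token>"))
```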

Outcome: Faster root cause discovery by auto-correlating failures with code and infrastructure changes. No tool-hopping.

Free during preview, extensible for the future

Testkube AI Agents are available at no additional cost while we refine the feature based on real-world usage. We'll introduce pricing in the future, but for now you have full access to everything the feature offers.

The example agents mentioned in this announcement are there to help you get started, but the feature is designed for extensibility: you can build custom agents, connect different LLMs, tailor prompts to your workflows, and combine data sources to match your infrastructure.
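To be clear, the snippet below is not Testkube's agent configuration format. Purely as a mental model, a custom agent boils down to three ingredients (a prompt, an LLM provider, and a set of MCP data sources), which this illustrative sketch captures as plain data:

```python
# Mental-model sketch only: this is NOT Testkube's agent configuration format,
# just the three ingredients a custom agent combines.
from dataclasses import dataclass, field

@dataclass
class CustomAgentSketch:
    name: str
    llm_provider: str            # e.g. "openai", "anthropic", ...
    system_prompt: str           # tailored to your workflows
    mcp_servers: list[str] = field(default_factory=list)  # Testkube + external data sources

flakiness_agent = CustomAgentSketch(
    name="flakiness-triage",
    llm_provider="anthropic",
    system_prompt="Classify failing executions as flaky or regressions, citing evidence.",
    mcp_servers=["testkube", "github", "kubernetes"],
)
```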

Why does agentic orchestration matter? Read the deep dive →

Also in this release

Beyond Testkube AI Agents, we've made several platform improvements:

Silent Workflows: Going beyond the Silent Executions introduced in the previous release, Testkube now allows you to silence Workflows entirely. Silenced Workflows are excluded from standard reporting, analytics, and alerting while remaining executable.

Bug Fixes and Stability Improvements:

  • General
    • Fixed environment variable expression preservation in services to allow env var overrides
    • Fixed test trigger selector to accept lowercase values
    • Fixed self-registration error handling with improved messaging
  • Global Template Support
    • Fixed support for global templates in OSS Control Plane for unresolved workflows
    • Improved inline global template handling
  • Scheduler Improvements
    • Added scheduler metrics for better observability
    • Fixed cron scheduler execution context to avoid canceled watcher contexts

Security Updates: Upgraded core images to be in line with the latest security fixes.

MCP Server Enhancements: Improved tool performance and reduced token usage for tool responses.

Want the full details? Explore the technical docs →

Get Started

The January Release is available now. If you're using the Cloud Control Plane of Testkube, you can start configuring agents and connecting MCP servers right away. If you’re using the On-Prem version of Testkube, you can update to this release as described in our documentation. And if you're new to Testkube, this is a great time to try it out.

We're also hosting a webinar to walk through Testkube AI Agents, demo the three workflows, and answer your questions live. Sign up here!

As always, we'd love to hear what you think. If you run into issues, have feedback, or want to share how you're using AI Agents in your workflows, reach out. This is just the beginning.

About Testkube

Testkube is a cloud-native continuous testing platform for Kubernetes. It runs tests directly in your clusters, works with any CI/CD system, and supports every testing tool your team uses. By removing CI/CD bottlenecks, Testkube helps teams ship faster with confidence.
Explore the sandbox to see Testkube in action.