Executive Summary
Three days, thousands of attendees, an agenda that had clearly moved on from where it was even a year ago.
If you were there, you probably felt it: the questions have gotten more specific. Teams are running things in production. The "are we ready for this?" conversations have mostly been replaced by "how do we do this well?" We spent those three days at our booth talking to engineers, architects, platform leads, and a few executives who had very precise problems they needed solved.
Here's what stood out:
1. AI on Kubernetes stopped being aspirational
The keynotes made it clear: this wasn't a "here's what's coming" conversation anymore. NVIDIA's team walked through how they're making Kubernetes AI-optimized and reproducible. Wayve's engineers showed how they handle GPU scheduling for AI inference at scale. Amazon EKS presented on engineering an invisible Kubernetes, abstracting away the infrastructure complexity that's gotten in the way of teams moving fast. And Cloud Native AI + Kubeflow Day ran as a full co-located event alongside the main conference.
Organizations have made the call on AI workloads. KubeCon Amsterdam confirmed that the conversations now are about infrastructure that can actually support them. Sessions like "Route, Serve, Adapt, Repeat: Adaptive Routing for AI Inference Workloads in Kubernetes" and "Make GenAI Production-Ready With Kubernetes Patterns" weren't drawing curious attendees. They were drawing teams with specific implementation problems.
At our booth, I heard this reflected in a very specific way: people weren't asking whether AI belonged in their testing workflow. They were asking how Testkube's AI agents work, whether they could try them, and what it would take to connect them to their own infrastructure. The curiosity was more practical and pointed than we expected.
2. MCP went from buzzword to engineering problem
If there was a single topic that felt genuinely new at this KubeCon, it was MCP. The Model Context Protocol showed up not as a futurist topic but as an active implementation challenge.
Christian Posta from Solo.io ran a session called "Enterprise Challenges with MCP Adoption." Tommy Nguyen from Liftoff presented on "Driving Adoption and Automation With MCP in Production." And one of the keynotes looked at where open source AI is headed, with agents as a central thread.
Teams are no longer asking "should we explore agentic workflows?" They're asking "how do we connect agents to our systems safely, at scale, without building brittle custom integrations for every tool?" That question has real engineering weight behind it.
The pattern I kept seeing at our booth was engineers who had already started experimenting with AI agents on their own infrastructure and were now thinking about what it would look like to give those agents real, native access to test data: execution context, logs, artifacts, historical results. That is exactly what Testkube AI Agents is built around, and the conversations consistently went deeper once that clicked.
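To make the idea concrete, here is a minimal sketch of what exposing test data to an agent over MCP can look like. It follows the Model Context Protocol's tools/list shape (a JSON-RPC 2.0 response listing tools with a name, description, and inputSchema); the tool names and schemas themselves are hypothetical illustrations, not Testkube's actual MCP surface.

```python
# Hypothetical MCP tools an agent could use to reach test data natively.
# Shapes follow the MCP tools/list response; names are illustrative only.
import json

TOOLS = [
    {
        "name": "get_test_execution",  # hypothetical tool name
        "description": "Fetch status, logs, and artifacts for one test run.",
        "inputSchema": {
            "type": "object",
            "properties": {"execution_id": {"type": "string"}},
            "required": ["execution_id"],
        },
    },
    {
        "name": "list_recent_failures",  # hypothetical tool name
        "description": "List failed executions over a time window.",
        "inputSchema": {
            "type": "object",
            "properties": {"since_hours": {"type": "integer"}},
        },
    },
]

def tools_list_response(request_id: int) -> str:
    """Build a JSON-RPC 2.0 response to an MCP tools/list request."""
    return json.dumps(
        {"jsonrpc": "2.0", "id": request_id, "result": {"tools": TOOLS}}
    )

print(tools_list_response(1))
```

The point of the protocol is exactly what the hallway conversations were circling: one well-defined surface like this replaces a brittle custom integration per tool.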
3. Platform Engineering is past the introductory phase
Platform Engineering Day has been on the KubeCon agenda for a couple of years now, but there was a noticeable maturity shift in Amsterdam. The sessions weren't defining the concept or making the case for why it matters. "Learning Lounge: What Platform Engineers Need to Know About Developer Experience" took it as read that you're already building an internal platform, and focused on what comes next.
The early Platform Engineering conversation was about persuasion: convincing leadership that a dedicated platform team was worth the investment. The 2026 conversation is about operational depth: how do you make the platform something developers actually want to use, and how do you measure whether it's working?
We saw that reflected in who showed up at our booth and how quickly the conversation gained momentum. The people who resonated most weren't in a discovery mindset. They were architects and platform engineers who had already established testing as part of their platform's responsibility, and were trying to figure out the right infrastructure layer to run it on.
4. Kubernetes infrastructure itself has become a thing you need to test
The signal was scattered across the agenda, but once you spotted the pattern, it kept showing up. "From Idle to Ideal: Cross-Cluster GPU Sharing with CoHDI." The panel on "How Will Customized Kubernetes Distributions Work for You?" The edge-focused sessions under Kubernetes on Edge Day. The throughline: Kubernetes infrastructure is no longer a stable background assumption. It's dynamic, complex, and consequential enough to need deliberate validation.
Teams aren't just deploying applications on Kubernetes anymore. They're managing multi-cluster environments, customized distributions, GPU-enabled nodes, edge deployments, and configurations that need to behave correctly across all of it. When infrastructure is that complex, "it deploys" is not a sufficient validation signal.
This was one of the most consistent conversations we had across all three days. Engineers from companies in energy, retail, and financial services came asking not just about testing their applications, but about validating the infrastructure itself: whether their clusters are actually healthy, whether their environments behave the way they expect. These weren't exploratory questions. They already understood the problem.
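As a small illustration of what "validating the infrastructure itself" can mean in practice, the sketch below checks parsed node conditions (the shape you get from `kubectl get nodes -o json`) for readiness and resource-pressure signals. The sample data is fabricated for illustration; real checks would cover far more than node conditions.

```python
# Flag nodes that are not Ready or that report resource pressure,
# given node objects in the Kubernetes API's JSON shape.

def unhealthy_nodes(nodes: list) -> list:
    """Return names of nodes that are not Ready or report pressure."""
    bad = []
    for node in nodes:
        conditions = {c["type"]: c["status"]
                      for c in node["status"]["conditions"]}
        ready = conditions.get("Ready") == "True"
        pressure = any(
            conditions.get(t) == "True"
            for t in ("MemoryPressure", "DiskPressure", "PIDPressure")
        )
        if not ready or pressure:
            bad.append(node["metadata"]["name"])
    return bad

# Fabricated sample: one healthy node, one under disk pressure.
sample = [
    {"metadata": {"name": "node-a"},
     "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "node-b"},
     "status": {"conditions": [{"type": "Ready", "status": "True"},
                               {"type": "DiskPressure", "status": "True"}]}},
]
print(unhealthy_nodes(sample))  # → ['node-b']
```

The check is trivial on purpose: the shift teams described is treating even checks like this as tests that run deliberately and repeatedly, not as one-off debugging commands.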
5. Where tests run is becoming a trust and compliance question
This one was quieter at the session level but showed up constantly in hallway conversations. Observability Day and Open Source SecurityCon both ran as full-day co-located events. And in regulated industries, the recurring question wasn't "can we test?" It was "can we prove what ran, where, and with what data?"
The "run tests inside your own cluster" architecture is shifting from a performance optimization to a compliance signal. For teams in financial services, energy, and other regulated verticals, the fact that test execution happens natively inside their infrastructure has become a real decision factor.
What landed hardest in those conversations was showing that Testkube runs tests as native Kubernetes jobs inside your existing cluster. Not an external service calling into your environment, but execution that lives where your applications live. For teams dealing with audit requirements, data sovereignty concerns, or air-gapped environments, that's a prerequisite, not a differentiator.
What this means for the rest of 2026
Across all five of these conversations, one thing came up consistently: the environment your tests run in matters. Not just for performance, but for correctness, for confidence, for the ability to prove that what you shipped is what you tested. That's true whether you're running AI workloads, building an internal platform, managing complex Kubernetes infrastructure, or navigating compliance requirements.
The cloud-native teams we talked to in Amsterdam aren't in an exploratory phase anymore. They're making deliberate tooling decisions, and testing infrastructure is part of that conversation.
Thanks to everyone who stopped by the booth. See you in Salt Lake City.


About Testkube
Testkube is a cloud-native continuous testing platform for Kubernetes. It runs tests directly in your clusters, works with any CI/CD system, and supports every testing tool your team uses. By removing CI/CD bottlenecks, Testkube helps teams ship faster with confidence.
Explore the sandbox to see Testkube in action.




