
Testing Cloud-Native Applications in Regulated Environments with Cherif Sahraoui

June 20, 2025 · 25:35
Ole Lensmar, CTO, Testkube
Cherif Sahraoui, QA Lead, 50Hertz



Transcript

Ole Lensmar:
Hello and welcome to today’s episode of the Cloud Native Testing Podcast. I’m super pleased to be joined by Cherif Sahraoui—hope I got that right—who is a QA Lead at 50Hertz. Cherif Sahraoui, how are you?

Cherif Sahraoui:
Good, thanks! Hello everyone.

Ole Lensmar:
Great to have you here. Can you tell us a bit about your role as a QA Lead—what does that mean in the context of cloud-native testing?

Cherif Sahraoui:
Sure! I’m Cherif Sahraoui, currently working as a QA Lead. I have over five years of experience in testing and automation, and I also worked as a programmer for a year. In my current role—at an energy provider in Germany—we’re working on a private cloud infrastructure, testing various tools. I lead a small team of two other QA engineers, and together we make sure every tool running on that infrastructure meets a high quality standard.

Ole Lensmar:
Nice! So was your project always cloud native, or has it shifted in that direction? I’m curious how your experience with testing has evolved in a cloud-native context.

Cherif Sahraoui:
The project actually started out cloud native. We use providers like GCP for Kubernetes. But over time, we’ve been shifting toward on-prem environments, mainly because of regulations. In the energy sector in Germany, there are strict compliance requirements, so that’s been a major driver.

This shift introduces a lot of testing challenges. For example, you can’t always test things locally—the behavior differs significantly between environments. Connectivity and performance vary, and that can lead to performance issues that need to be accounted for in test automation.

Ole Lensmar:
That makes sense. And with energy providers, I imagine there’s a hardware component too. Do you also test hardware, or is your focus purely software?

Cherif Sahraoui:
It's mostly software. The company is focused on digital transformation—so we have many plants across Germany and Belgium, and they’re all running applications using cloud hyperscalers like AWS, Azure, and GCP.

The goal is to eventually move from relying on these providers to running everything on their own infrastructure. So our work is more like bootstrapping the tools that will later run at these plants. My role is to ensure those tools work properly in all current environments and can be deployed seamlessly across future ones using automation.

Ole Lensmar:
So you’re testing both infrastructure and applications?

Cherif Sahraoui:
Not quite—our product line focuses on application testing, not infrastructure itself. We work with a variety of tools that developers across the company rely on—build tools, deployment tools, collaboration tools like Jira and Confluence.

All of these run in Kubernetes, and we need to ensure they’re configured correctly within our restricted environments. These are often air-gapped, without internet access, so we have to manage dependencies like Helm charts and images through local registries.
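
To make that dependency handling concrete, here is a minimal sketch, assuming a hypothetical internal OCI registry and placeholder chart names and versions, of how a public Helm chart could be mirrored into a registry that an air-gapped cluster can reach:

    #!/usr/bin/env python3
    """Sketch: mirror public Helm charts into an internal registry for an
    air-gapped cluster. Chart names, versions, and the registry URL are
    hypothetical placeholders."""
    import subprocess
    import sys

    INTERNAL_REGISTRY = "oci://registry.internal.example.com/charts"  # placeholder

    # (repo URL, chart name, pinned version) -- examples only
    CHARTS = [
        ("https://charts.bitnami.com/bitnami", "postgresql", "15.5.0"),
    ]

    def mirror_chart(repo_url: str, chart: str, version: str) -> None:
        # Pull the chart archive from the public repository into the working directory...
        subprocess.run(
            ["helm", "pull", chart, "--repo", repo_url, "--version", version],
            check=True,
        )
        # ...then push the resulting .tgz into the internal OCI registry the
        # air-gapped cluster can reach. Container images referenced by the chart
        # would be mirrored separately with a registry copy tool.
        subprocess.run(
            ["helm", "push", f"{chart}-{version}.tgz", INTERNAL_REGISTRY],
            check=True,
        )

    if __name__ == "__main__":
        for repo_url, chart, version in CHARTS:
            try:
                mirror_chart(repo_url, chart, version)
            except subprocess.CalledProcessError as err:
                sys.exit(f"mirroring {chart} {version} failed: {err}")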

Ole Lensmar:
Very interesting. I want to shift back to your role as a QA Lead. Sometimes that title can sound a bit "old school"—like there’s a clear handoff between developers and QA. How does your team collaborate with engineering? Is that different in a cloud-native setup?

Cherif Sahraoui:
It’s more about culture than title. We follow Scrum, so our dailies include both developers and QA engineers—there’s no separation in how we work. We all use Jira, and we collaborate closely.

We’re also very focused on test automation, which means a lot of programming. Even tools like Testkube are deployed via Terraform. We maintain centralized Terraform modules to manage consistent Testkube deployments across multiple environments. Without that, we'd be stuck copying and pasting, which is error-prone and hard to manage.
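
Cherif's team does this with shared Terraform modules; as a loose illustration of the same avoid-copy-paste idea, here is a hypothetical sketch that renders per-environment variable files from a single source of truth, so only environment-specific values ever differ:

    #!/usr/bin/env python3
    """Sketch: render per-environment Terraform variable files from one source
    of truth, so the inputs fed to a shared module never get copy-pasted by
    hand. Environment names and values are hypothetical."""
    import json
    from pathlib import Path

    # Single place where environment-specific inputs live (placeholders).
    ENVIRONMENTS = {
        "dev":  {"cluster": "dev-cluster",  "testkube_namespace": "testkube", "replicas": 1},
        "prod": {"cluster": "prod-cluster", "testkube_namespace": "testkube", "replicas": 2},
    }

    def render(env: str, values: dict) -> None:
        # Terraform reads *.tfvars.json natively, so plain JSON is enough here.
        out = Path(f"environments/{env}/terraform.tfvars.json")
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(json.dumps(values, indent=2) + "\n")
        print(f"wrote {out}")

    if __name__ == "__main__":
        for env, values in ENVIRONMENTS.items():
            render(env, values)

Terraform loads .tfvars.json files either automatically from the working directory or via -var-file, so the shared module itself never needs per-environment edits.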

Ole Lensmar:
Absolutely—consistency is key. Going back to dev and QA: once QA engineers are involved, some developers might think they no longer need to test. How do you ensure developers still take ownership of their testing responsibilities?

Cherif Sahraoui:
That’s a good question. Again, it’s about team culture. Each Jira user story must go through QA, but that starts with the developer. They write unit tests and sometimes provide screenshots or documentation of their work.

If further testing is needed, we create a follow-up QA ticket and link it to the original story. It’s a shared responsibility—testing isn’t only done by the QA team.

Ole Lensmar:
That makes sense. Now, let’s talk about automated testing. What’s your take on balancing that with exploratory testing?

Cherif Sahraoui:
Sometimes automation isn’t feasible. For example, we use Backstage as a developer portal. Its cards trigger complex workflows that create resources across multiple tools like Jira and Confluence. These are hard to automate because they require deep integration and sometimes even lack the needed authentication.

In those cases, exploratory testing is more effective. You can simulate a real user and look for bugs manually. Automation is better for repeatable, predictable scenarios—like our regression test suite, which we run after each release with Testkube.

Ole Lensmar:
Even with a large automation suite, exploratory testing still has value, right? Especially with UIs?

Cherif Sahraoui:
Absolutely. You can’t automate everything. UI behavior, performance, and user experience still need manual checks—especially after releases.

It’s similar to testing in production. While it has a bad reputation, it’s necessary. Smoke testing in production should always be manual to avoid breaking or deleting data unintentionally.

Ole Lensmar:
Totally agree. That topic comes up a lot on this podcast. Many people dismiss testing in production, but synthetic monitoring or manual smoke checks can be critical safety nets. Especially in cloud-native environments where things are always changing.

Cherif Sahraoui:
Exactly. The key is that it’s additional testing—not your only testing. Production validation ensures a smooth user experience. I’ve seen cases where releases went to production without QA visibility, and bugs slipped through. A quick manual smoke test could’ve caught those.
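
As a rough illustration of the read-only safety net mentioned above, here is a minimal synthetic smoke check, assuming hypothetical endpoint URLs; because it only issues GET requests, it cannot modify or delete production data, which is exactly the risk Cherif raises about automated checks in production:

    #!/usr/bin/env python3
    """Sketch: a read-only synthetic smoke check suitable for production.
    The endpoint URLs are hypothetical; the point is that it only issues
    GET requests and therefore cannot change production data."""
    import sys
    import urllib.request

    # Hypothetical endpoints that should respond quickly when the release is healthy.
    CHECKS = [
        "https://app.example.com/healthz",
        "https://app.example.com/api/version",
    ]

    def check(url: str, timeout: float = 5.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                ok = 200 <= resp.status < 300
                print(f"{url} -> {resp.status}")
                return ok
        except OSError as err:  # covers URLError, timeouts, connection resets
            print(f"{url} -> {err}")
            return False

    if __name__ == "__main__":
        results = [check(url) for url in CHECKS]
        sys.exit(0 if all(results) else 1)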

Ole Lensmar:
Let’s talk about how cloud-native delivery changes testing. With modern CI/CD, testing happens throughout the pipeline—GitOps, progressive delivery, etc. Does that align with your experience?

Cherif Sahraoui:
Yes, definitely. We do static testing early on, using tools like SonarQube with pre-commit hooks. Then we have CI testing and post-deployment testing.

For deployments, we use ArgoCD. When an application version is bumped, Testkube waits until ArgoCD finishes, checks the health status, then runs tests. Since everything is Kubernetes-native, it’s easy to integrate.
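
Testkube ships its own ArgoCD integration, but as a rough sketch of the ordering Cherif describes, here is a hypothetical script that polls the ArgoCD Application's health and sync status with kubectl and only then kicks off a test run; the application name, namespace, and the exact Testkube CLI invocation are assumptions that depend on the versions in use:

    #!/usr/bin/env python3
    """Sketch: wait for an ArgoCD Application to be Synced/Healthy, then run
    post-deployment tests. App name, namespace, and the Testkube CLI
    invocation are assumptions; adjust to the versions actually deployed."""
    import subprocess
    import sys
    import time

    APP = "backstage"            # hypothetical ArgoCD Application name
    ARGOCD_NAMESPACE = "argocd"
    TIMEOUT_SECONDS = 600

    def app_status(field: str) -> str:
        # Reads a field from the Application custom resource, e.g.
        # .status.health.status ("Healthy") or .status.sync.status ("Synced").
        out = subprocess.run(
            ["kubectl", "get", "application", APP, "-n", ARGOCD_NAMESPACE,
             "-o", f"jsonpath={{.status.{field}.status}}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    def wait_until_healthy() -> None:
        deadline = time.time() + TIMEOUT_SECONDS
        while time.time() < deadline:
            health, sync = app_status("health"), app_status("sync")
            print(f"health={health} sync={sync}")
            if health == "Healthy" and sync == "Synced":
                return
            time.sleep(10)
        sys.exit(f"{APP} did not become Healthy/Synced within {TIMEOUT_SECONDS}s")

    if __name__ == "__main__":
        wait_until_healthy()
        # Trigger the post-deployment suite. The exact subcommand differs
        # between Testkube versions (tests vs. test workflows), so treat this
        # as a placeholder rather than the team's actual pipeline step.
        subprocess.run(["testkube", "run", "testworkflow", "post-deploy-smoke"], check=True)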

Ole Lensmar:
If those tests fail post-deployment, do you roll back automatically?

Cherif Sahraoui:
Not yet. The pipeline fails, and the developer is notified. From there, it’s a manual decision. Sometimes it’s a bug in the app, or maybe the config needs updating—we check documentation, GitHub issues, etc., and decide next steps manually.

Ole Lensmar:
You mentioned earlier that you work in a highly regulated industry. How does that affect your testing processes?

Cherif Sahraoui:
There’s a separate compliance and security team that focuses on those areas. But from a QA perspective, we use SonarQube with specific rules to help maintain coding standards and compliance.

Ole Lensmar:
How about metrics and reporting? What do you track as a team?

Cherif Sahraoui:
The key metric is the percentage of passed test cases. We use TestRail as our central test management tool. Testkube gives us visibility into test runs, but with multiple teams involved, we need centralized reporting.

If a test fails, a bug ticket is created and linked to the failed case. For now, we don’t do trend analysis over time—but that’s something we may add in the future.
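
TestRail exposes a REST API for exactly this kind of centralized reporting; as a hedged sketch (host, credentials, run ID, and case ID are placeholders), a pipeline step could record a Testkube outcome against a TestRail case like this:

    #!/usr/bin/env python3
    """Sketch: push a single test outcome into TestRail via its REST API.
    Host, credentials, run ID, and case ID are placeholders; in TestRail's
    API, status_id 1 means Passed and 5 means Failed."""
    import base64
    import json
    import urllib.request

    TESTRAIL_HOST = "https://example.testrail.io"    # placeholder
    USER, API_KEY = "qa-bot@example.com", "token"    # placeholder credentials
    RUN_ID, CASE_ID = 42, 1337                       # placeholder IDs

    def add_result_for_case(passed: bool, comment: str) -> None:
        url = f"{TESTRAIL_HOST}/index.php?/api/v2/add_result_for_case/{RUN_ID}/{CASE_ID}"
        body = json.dumps({"status_id": 1 if passed else 5, "comment": comment}).encode()
        auth = base64.b64encode(f"{USER}:{API_KEY}".encode()).decode()
        req = urllib.request.Request(
            url, data=body, method="POST",
            headers={"Content-Type": "application/json", "Authorization": f"Basic {auth}"},
        )
        with urllib.request.urlopen(req) as resp:
            print(f"TestRail responded with HTTP {resp.status}")

    if __name__ == "__main__":
        add_result_for_case(passed=False, comment="Regression suite failed in Testkube; bug ticket linked.")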

Ole Lensmar:
Got it. Last question—what about AI? Are you using AI for testing?

Cherif Sahraoui:
Not directly for testing yet, but we do use large language models for code generation. We’ve explored tools like Browserless, which uses LLMs to identify UI elements and perform tests using natural language prompts. It’s similar to Selenium, but more intuitive.

Ole Lensmar:
That’s fascinating. It reminds me of BDD—except with AI, you could just say “click this” and “expect that,” and the model figures it out. But of course, tests still need to be deterministic to be valuable.

Cherif Sahraoui:
Exactly. Right now, we’re just exploring it. It’s not production-ready yet, but we’re keeping an eye on the space. Eventually, I’d like to introduce some of these tools, but we’ll need buy-in from management.

Ole Lensmar:
Makes sense. Cherif Sahraoui, thank you so much—this was incredibly insightful. I really appreciate you sharing your experience, and I’m sure our listeners will take away a lot from this. Thanks also to Evan for producing. See you next time!

Cherif Sahraoui:
Thank you—it was a pleasure. Goodbye!
