Shift Left, Ship Fast: Building a Modern Testing Culture

October 31, 2025 · 20:49
Ole Lensmar, CTO, Testkube
Thomas To, Product Manager, Axon


Transcript

Ole Lensmar: Hello, everybody. Welcome to another episode of the Cloud Native Testing podcast. I'm in Vilnius today at a testing conference, which is very appropriate. And I'm delighted to be joined by Thomas To from Axon. Thomas, how are you?

Thomas To: I'm doing great. Thank you, Ole.

Ole Lensmar: Thanks for joining. Yeah, so I think you have a fascinating background and you're doing some fascinating stuff. So please tell us a little bit about yourself and what you're working on.

Thomas To: Yeah, so right now I'm a product manager at Axon. We're in the public safety domain. I actually started my career as a software engineer at a test automation company, Katalon, where I worked on their flagship product. Then I transitioned into product management, still doing some work on a cloud testing infrastructure product. After that I spent a little bit of time at a business intelligence B2B SaaS startup, and now I'm at Axon. So most of my experience has been with technical products for a technical audience. It's been an interesting journey.

Ole Lensmar: That's interesting. So you started with Katalon. How has that formed your view of the testing space and how you've approached testing?

Thomas To: Yeah. My perspective when I was at Katalon was that quality is a very important aspect of development, but especially for very big companies. We had very big customers who were undergoing digital transformation, so they had a lot of legacy systems, and they started primarily with end-to-end tests. It was very difficult for them to do unit testing, integration testing, and things like that. In many cases they just wanted to add a bunch of end-to-end tests to their web applications to make sure the core functionality worked well. That shaped my perspective about testing, because later on I learned about the testing pyramid, which says you shouldn't add too many end-to-end tests; you should start at the lower layers instead. That's the lesson I took away: if you want to approach testing in an efficient manner, you should really start from the lower layers of testing before eventually adding end-to-end tests.

Ole Lensmar: Totally agree. I'm just wondering how much the concept of shift left, which I'm sure you're familiar with, drives or plays a part in that testing pyramid and in focusing on unit tests versus end-to-end tests?

Thomas To: Yeah, I think that's really interesting, because when you're talking about shift left, we're talking about moving quality upstream. It's about enabling developers to own quality. So you start to think about a whole different category of testing tools that are designed for developers. This is very different from tools that are designed for end-to-end testing, because a lot of the users of those end-to-end testing platforms are QA. When you're doing shift left, QA are still using the platform, but they're no longer the primary target user. So you're facing a different set of constraints and serving a different segment.

Ole Lensmar: And so do you feel like there's a category of testers or QA who aren't able to embrace shift left because they're stuck in tools or processes that don't allow them to shift left in the same way? How do you see that?

Thomas To: Yeah, I think there's definitely that group of QA. One of the challenges is, of course, tooling, but it's also mindset. When you talk about shift-left testing, a lot of QA seem to think it's going to take away their job, because they spend a lot of time doing manual testing or automating end-to-end tests, and when that work gets shifted left, it leaves them with concerns about their own future. That's where I think not only tooling but best practices and the testing culture are really important, because when you shift left, QA actually get to do more interesting things: building quality tooling and enabling engineers and the organization to embrace the quality mindset. That's what I've seen happening.

Ole Lensmar: No, I totally agree. It feels to me like that's almost empowering, in the sense that you can maybe do more exploratory testing. And I'm also guessing that with shift left, the developers still have to write code. So if you want to create more tests upstream, those still have to be written as automated tests, and that's also maybe a path for many QA professionals: instead of doing manual tests and writing manual test scripts, they're now writing automated tests or writing code that will be integrated into the pipeline.

Thomas To: Yeah, I think that's definitely a stage many companies are in. What I've seen is that when developers embrace enough of the quality mindset, they actually want to own not only the development of manual test cases but also the automation of them. So it means QAs are now doing enablement more than actually writing and automating test cases. And some of the QAs who are more technical transition into more of a platform engineering role specifically dedicated to quality enablement and tooling.

Ole Lensmar: Interesting. I know there's sometimes some skepticism against automated testing in the QA domain. Is that something you've encountered? And if so, or even if not, what are your thoughts on how to handle it?

Thomas To: Yeah. One way to look at quality that I think people are skeptical about is that quality is just automated tests, or testing before release. But a broader perspective is that quality is a culture and a mindset. People can be skeptical about specific testing trends or practices, but overall I think everyone is aligned on the fact that quality is very important to organizations. And if you shift your perspective from "how do I write an automated test" to "how do I identify the risks, business or technical, that matter to the organization", I think that empowers a lot of people to deal with the skepticism people generally have about testing, because that skepticism comes from a very narrow perspective. If you shift your perspective to quality as risk management, it's much more holistic and comprehensive.

Ole Lensmar: I think that's an excellent point: quality is much more than just automated tests or unit tests in your pipeline. It's a mindset, and it has to be part of the process from the beginning. I think developers would probably agree. The challenge then is how you keep that mindset all the way up to the manager level. I'm sure everyone subscribes to the idea that quality is important, but how do you then mandate still having quality engineers across the entire process? To your point, even if you're no longer running manual tests, people with a quality mindset, a tester mindset, are somewhat unique. They can approach quality with new perspectives, and it's super important to have those people on the team, whether that's writing automated tests, assessing requirements, or doing more downstream quality activities.

Thomas To: Yeah, definitely.

Ole Lensmar: One thing that I've been encountering quite often is not just the concept of shift left but also the concept of shift right. I'm wondering if that's something you've seen in the context of what you're doing now, and what it means to you.

Thomas To: Yeah, shift-right testing. In my experience, if you want to test in production, it's got to be for very critical workflows, and you have to be very strategic. Otherwise you're going to end up with a big suite of end-to-end tests that you have to run in production. Another perspective, especially now that we're testing cloud services, is that end-to-end tests alone are not sufficient. You actually want to be able to hit your individual services in production with some synthetic monitoring, so that you can pinpoint exactly which services are failing. End-to-end tests cover an entire workflow that spans many services, and when an end-to-end test fails, it's very difficult to pinpoint exactly which service is causing the problem. So in addition to end-to-end tests, you want some sort of, I guess you can call it in-cluster tests, that hit your services in production. And you have to do it very carefully, because you don't want to run tests that modify data. You mostly want to do something safe like reads, and even if your tests do have to write some data, you have to be very careful, because it's production and it's going to affect people. That's what I've seen in my experience.
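To make this concrete, here is a minimal sketch of the kind of read-only, in-cluster synthetic probe Thomas describes. The service names, URLs, and health endpoints below are hypothetical placeholders; in a real cluster they would come from service discovery or configuration.

```python
# A minimal read-only synthetic probe: hits each service individually so a
# failure can be pinpointed to one service, unlike a cross-service
# end-to-end test. All names and URLs below are hypothetical.
import urllib.request
import urllib.error

# Hypothetical per-service read-only endpoints (GET only, no state changes).
SERVICES = {
    "orders":  "http://orders.internal/healthz",
    "billing": "http://billing.internal/healthz",
    "auth":    "http://auth.internal/healthz",
}

def probe(name: str, url: str, timeout: float = 2.0) -> bool:
    """Issue a GET and report success; GET keeps the probe side-effect free."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name}: {'ok' if probe(name, url) else 'FAILING'}")
```

In practice such a probe would run on a schedule inside the cluster and feed an alerting system; the point is that it only reads, so running it against production stays safe.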

Ole Lensmar: I totally agree. You don't want to change the state of your system just for the sake of testing. What's also interesting, when I've talked to others about shift right, is the quality metrics they look at. To your point, it could be running functional tests, but there are other quality metrics too: around performance and latency, of course, but also around the pipeline itself. How long does it take for a new version of a service to actually go into production? That's a quality metric you can measure more internally, to evaluate the efficiency of your pipelines, your infrastructure, and your team. Are those kinds of internally facing quality metrics something you've encountered in your platform or in your work?

Thomas To: Yeah, metrics like DORA are very important to track, especially if you care about quality not only from an end-user perspective but throughout your deployment pipeline. So we do track DORA metrics, and it's an important part of seeing which services are of high quality and which services need work, so that we can prioritize quality engineering efforts, both on the quality team and across the engineering organization overall.
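For readers unfamiliar with DORA, two of its four metrics (deployment frequency and lead time for changes) can be computed from simple deployment records. The sketch below uses invented data and an invented record shape; real numbers would come from your CI/CD system's API.

```python
# A toy DORA calculation over hypothetical deployment records.
from datetime import datetime, timedelta

# Each record: (commit merged, deployed to production) -- illustrative data.
deploys = [
    (datetime(2025, 10, 1, 9, 0),  datetime(2025, 10, 1, 15, 0)),
    (datetime(2025, 10, 3, 11, 0), datetime(2025, 10, 4, 10, 0)),
    (datetime(2025, 10, 7, 8, 0),  datetime(2025, 10, 7, 9, 30)),
]

def deployment_frequency(deploys, window_days: int = 30) -> float:
    """Deploys per day over the observation window."""
    return len(deploys) / window_days

def mean_lead_time(deploys) -> timedelta:
    """Average time from commit to production deployment."""
    total = sum((deployed - committed for committed, deployed in deploys),
                timedelta())
    return total / len(deploys)

print(f"deploys/day:    {deployment_frequency(deploys):.2f}")
print(f"mean lead time: {mean_lead_time(deploys)}")
```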

Ole Lensmar: Thank you. One thing I wanted to ask: you work at Axon, and I'm guessing there's a shift to cloud-native technologies going on, maybe Kubernetes specifically. What challenges around testing have come in the wake of that infrastructure transition?

Thomas To: Yeah. For cloud-native testing, I think the challenge a lot of people face is that you're dealing with a lot of environments. If you serve customers all over the world, you have to have different production instances for each customer segment. For example, in the US there are specific regulations, so you have to deploy your software across different environments, which also means you have to maintain quality across those environments. There are also differences between cloud infrastructure providers; for example, AWS has different dependencies than Azure. When you approach quality from a cloud-native testing perspective, there are a lot of constraints, not only in developing automated tests but in getting those tests into these environments, some of which may be restricted or difficult to manage and deploy your tests on. I think those are the challenges a lot of people face when it comes to testing on Kubernetes and in the cloud.

Ole Lensmar: Super interesting. So you're saying that if I have clusters in different geographic locations globally and I want to run tests in those, that might be affected by the regulatory or security aspects of those local regions?

Thomas To: Yeah, exactly. What I've seen is that in many cases, engineering teams and QA teams are in different countries. If you want to deploy your code to a production instance in the US, for example, and the engineering and QA teams are in Vietnam or other Southeast Asian countries, they cannot gain direct access to that environment to perform testing. So it gets a little bit tricky. You have to navigate all of the regulations and see what kind of data you can actually retrieve and what kind of actions you can perform, and that restricts the parameters of what you're able to test.

Ole Lensmar: Wow, that's super fascinating. It must also require a lot of planning. You don't want to be slowed down by these regulatory aspects of testing, but I'm guessing it slows down the overall release process if you have these kinds of hurdles to jump over.

Thomas To: Yeah, it requires more deliberate planning, but you also have to have a very strong quality culture to overcome these challenges, because it's very easy to give up and say it's too difficult, we're not going to do it. And that's not going to cut it if you want to deliver truly high-quality software. So I think it's actually enabling in a sense: yes, it limits what you're able to test, but it also forces you to be more strategic and more creative about how you deliver a high-quality experience to your customers.

Ole Lensmar: And if you manage to handle those obstacles, you're more competitive, I'm guessing, because you can deliver globally in a way that others, who to your point said "this is too hard, we're not going to do it", cannot. But it also goes back to culture, to what we said earlier: if there's a quality mindset in the organization, those hurdles are easier to overcome than if it's just a tactical concern.

Thomas To: Yeah, it's cultural, and it's mindset. It's very easy to say that you have a quality mindset and culture, but it's actually about how you act on a day-to-day basis: how you handle incidents, how you talk about quality, where you actually put quality on your to-do list. Those are the things that characterize culture and mindset. And that requires very strong leadership with strong opinions about delivering quality products. Without all of these characteristics, it's very difficult to even reach a stage where you serve customers across the globe, because the bar for quality is getting higher and higher around the world, especially now with LLMs and other technologies that enable even faster development cycles. People expect you to deliver software and products faster but still meet the same bar of quality.

Ole Lensmar: Well, thank you for saying LLM; I was going to segue over to that. I have to ask at least one AI question in these talks, so thanks for doing that. How is AI impacting your testing? Are you using AI for testing, and do you have any LLM-based applications yourselves that you're testing, where you need to validate that they behave responsibly and all of that?

Thomas To: Yeah, I think one of the areas where LLMs are most effective is facilitating shift left. If you just say "we're going to do shift left, engineers are going to have to do more work", it's very difficult for engineers to embrace the change. You have to actually make the entire testing workflow easier for them, and that's where LLMs come in. I've seen applications where people feed PRDs and documents into a knowledge base, a RAG (retrieval-augmented generation) system, that powers test case generation and also general Q&A on questions that require business domain knowledge. It makes it easier to onboard new hires to the team and helps engineers own quality better. That's where I've seen LLMs really become powerful.
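As a rough illustration of the pattern Thomas describes, here is a minimal retrieval-augmented sketch: PRD snippets are indexed with TF-IDF, the chunks most relevant to a feature are retrieved, and the result is assembled into a prompt for test case generation. The document contents are invented, and `generate()` is a hypothetical stand-in for whatever LLM client you actually use.

```python
# Minimal retrieval-augmented test case generation sketch.
# Requires scikit-learn; docs and generate() are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Knowledge base: chunks from PRDs and design docs (illustrative content).
docs = [
    "Checkout must reject expired credit cards with error CODE_410.",
    "Users can save up to 5 shipping addresses per account.",
    "Refunds over $500 require a second approver.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base chunks most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"[LLM-generated test cases for a {len(prompt)}-char prompt]"

context = "\n".join(retrieve("test cases for the checkout flow"))
prompt = f"Given these requirements:\n{context}\n\nWrite test cases for checkout."
print(generate(prompt))
```

A production version would swap TF-IDF for embedding-based retrieval and wire `generate()` to a real model, but the shape is the same: retrieve relevant domain knowledge, then ground the generation in it.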

Ole Lensmar: That's super interesting. So you've built your own database of policies, et cetera, and then you have a RAG system that uses that as context to help developers, or really the entire engineering team. What specific tasks are you using that RAG system for?

Thomas To: It can be used for test case generation, and, because we're a platform team building products for internal teams, the RAG system also powers our Q&A system. For example, if someone has a question about how to use our products, they can interact with a chatbot to get the answer instead of having to wait for a person to respond.

Ole Lensmar: Okay, that's fascinating. I recently did a talk on testing AI systems themselves, so I'm intrigued: that RAG system you've built, are you testing it in any way? Are you ensuring that the answers it gives are relevant and accurate as you evolve the underlying database?

Thomas To: Yeah, that's a good question. Actually, it's something we just started to prototype, and right now we're not treating it as a production system, so we don't have any testing for it. It's very much an early prototype. But I've seen other companies doing similar things run evals for their LLMs, and I think that has been effective for them. That's one technique we also want to look into for our own use case.

Ole Lensmar: It's a fascinating space, to your point. How do you test the relevancy, the accuracy, the toxicity, and everything else these systems can be prone to? It goes back to the quality mindset you talked about earlier: even when you build your RAG system, you have to have that mindset. Especially in your case, if you use it to generate test cases, you obviously don't want hallucinations or any kind of incorrect data coming out of it. Super interesting. Okay, great. Thank you so much, Thomas. It's been fascinating to hear about the things you've been working on and the challenges you've had. Thank you so much for joining the podcast.

Thomas To: Thank you, Ole, for the opportunity. I'm really glad to have this space to talk about my experience, and I look forward to what you and the Testkube team are going to deliver in the future.

Ole Lensmar: Thank you so much. Take care. Goodbye.