Webinar #27

November 2nd, 2022

Length: 46 min

Human vs machine: How to achieve testing balance without ruining your budget

Where should you automate tests versus testing via crowds? Identify a framework for how much testing costs you across both categories and learn how to build a synchronized manual and automation workflow for true testing excellence.

Who is this webinar for:

  • Software team quality leadership responsible for driving better and faster software releases in your organization
  • Developers looking to save time on their testing to refine their devops workflow and drive testing at speed

What you're going to learn:

  • Learn why businesses automate 20% fewer tests than they intend to every year and understand the research behind it
  • Learn a vital framework for understanding how much your tests are costing you in monetary and time terms
  • Understand how manual and automatic tests are valued
  • What would Paul Graham think about manual testing?
  • How “time is money” applies to QA, and how you can get more of both


00:00  Introduction

03:16  What is "Crowd Testing"?

08:25  Consolidated Convenience

09:20  The state of manual testing

13:48  Trouble with flaky tests

16:10  Deployment pain

17:20  True time cost of testing

21:15  Context switching

22:10  Demo!

31:40  Try Global App Testing

32:44  Actual cost of crowd testing

42:52  Q&A





Hello, everyone, welcome to yet another Buddy Webinar. Today's topic is human versus machine, how to achieve testing balance without ruining your budget. And with me, I have a true expert on not ruining our budget, Karol Topolski. Karol, tell us a few words about yourself.



Sure. So hello, everyone. My name is Karol, and I'm an engineering manager at Global App Testing (GAT). Today we're going to put two very different things together: we're going to talk about humans versus machines when it comes to software testing. So I hope you will have fun and learn something.



And my name is Maciek Palmowski. I work here at Buddy as a WordPress ambassador, and I am, I hope, your favorite webinar host. Today's topic is quite interesting, especially since we often had the chance to discuss how important testing is and to cover different methods. But every time, we were mostly talking about automated testing in its different forms, from unit testing through functional testing to end-to-end testing. Today we will talk about something different, because we will talk about how to automate manual testing. So I really can't wait for Karol to say a bit more about this. Before we go further: if you have any questions regarding today's webinar, don't hesitate to use the comment section on YouTube, LinkedIn or Facebook, and just ask them; we'll try to answer most of them, at least during the Q&A session at the end. And if you like what we are doing, don't forget to press the subscribe button on YouTube. So Karol, the scene is yours.

What is "Crowd Testing"?



Thank you very much, Maciek. Let's go. Okay, so I'm pretty sure all of you are familiar with the term automated testing; Maciek already mentioned at least a couple of different sides of it: unit testing, functional testing, end-to-end testing, all of these shades of testing. However, today I'm going to put automated testing in the ring with one other contender, and that's crowd testing. Crowd testing might sound like an exotic term, but the truth is that the name really tells the story. Here's how you can imagine it: maybe some of you already use Amazon's cloud services. Using any kind of cloud is pretty cool, right? You input your data, you get the result back. You don't have to worry about much of anything apart from configuring it, and once you have it running, there's pretty much no trouble. So imagine a cloud, but consisting not of machines but of testers, of humans. The model doesn't really change: you input your test requirements, and what you get back are the test results. So how do you actually leverage this crowd to get some money back? First things first, let me tell you about Global App Testing's services. I'm an engineering manager at Global App Testing, and what we do, well, you might guess: we're a crowd testing company. We serve all kinds of customers, from small SaaS companies to large enterprises. The way we want to stand out is by being integrated. I know that integration is a buzzword nowadays, everything is integrated, IoT, all of this stuff, but we try to put the word to actual use when it comes to crowd testing, to this crowd of testers. So I'm not going to go through the slides, because it's pointless. What I want you to remember is that not only is this already embedded in the tools that you use, it's also pretty scalable.
And what do I mean by scalable? Well, the word "crowd" already means more than a few people. In our case, it means more like 60,000 people worldwide, present in nearly 200 countries. This brings some benefits to the table, because you might as well run your tests at 3 AM: guess what, it's going to be 3 PM somewhere in the world, or some other business hour. So there are going to be testers who aren't sleeping, who are working, because these are their business hours, and they will be able to test your application. And it doesn't end there, because having testers in multiple countries also brings you some cultural value back. These testers will be able to evaluate your application based on the culture of their own country. Also, some countries aren't as advanced as the first-world countries, and they might use devices that look very outdated from our perspective but are very much in use elsewhere in the world. So by having this large crowd of 60,000 testers covering pretty much the whole world, we're able to scale your business, and pretty much scale with other businesses as well. With that out of the way, let's talk about the integrations. As I mentioned, we'd like to be in the tools that you already use, including, of course, Buddy. What you can see here are some standard ticket management systems like Jira and Trello; some communication tools like Slack; some CI systems like Buddy; but you can also see some logos that come from the testing community. These are the test management systems. And why on earth would we show what might be our competitors? Because they really aren't our competitors, they are partners. We try to follow the Unix philosophy, where a tool does one thing and does it right. So we do crowd testing, and we try to do crowd testing right.
So that means we are not building in test management, error monitoring, test infrastructure and all of that; our partners can take care of it, and we focus on what matters, we focus on the crowd testing. And obviously, there's no need to install any other software: we're already embedded inside those systems. That goes without saying. Okay, so now that you know a little bit more about Global App Testing and about the premise of crowd testing, let's actually talk about consolidated convenience.

Consolidated Convenience



So I'm pretty sure most of you are familiar with the Technology Radar from Thoughtworks. In one of its recent editions, the term "consolidated convenience" was coined; the full phrase is consolidated convenience over best-in-class, meaning a preference for an integrated toolset. That means that as much as I'd like to be Google or Meta and have these shiny tools that Google and Meta use, the truth is, you're likely not Google, and you're likely not Meta, and you likely don't need all of this complicated tooling; you're probably perfectly fine with the tools that you already have. And that means that the systems already embedded inside your tooling are much more valuable, in terms of time and effort saved, than all of this shiny tooling that you need to install and configure before you can feel like Google. Okay, so we covered a bit of development and testing, and a bit of consolidated convenience.

The state of manual testing



Let's talk about some statistics. Let's talk about the state of manual testing. I will be showing you some diagrams in this presentation; they come from the TestRail annual survey results from 2021 and 2022. TestRail is a test case management system, and year by year they run their survey and ask respondents some questions that are difficult to answer. This might be one of the most interesting: starting in 2018, they began asking respondents how much, in terms of percentage, let's say code coverage, they have covered with tests, and how much they would like to have covered in the next year. And as you can see, people have ambitions; another 20% sounds realistic. The unfortunate thing is that something doesn't work; there seems to be some kind of a problem. So what's the problem with automated testing? What's wrong? Here's another slide from this survey, with a simple question: what are your team's top three to five biggest challenges? It shouldn't be a problem, but let's see: you have "developing automated tests" in first position, and that's already a bit puzzling. In second position, you have "having enough time to complete QA tasks", which starts to make sense: how can you write automated tests if you don't even have time to complete manual testing? You're not a time wizard. And the further we go, the worse it gets: end-to-end testing across integrated systems, obviously a very hard thing; managing data and testing environments, another hard thing to get right. And I guess you could say: all right, sure, we have budget, we have humans, let's just hire more QAs. But the thing is, these people did have a budget. In 2019, the global automation market size was $12.6 billion, and it's estimated to reach $28 billion by 2024. So money is not the problem; these factors are the problem.
And it seems like hiring additional QAs doesn't really work either. You know the conversation where you explain to product managers that nine women cannot deliver a baby in one month? That seems to be the case with QA as well. Sure, you can hire additional QA engineers, but the time to onboard them, to understand the system, the tooling, the adoption, it all takes time. So here is the ideal ratio: you have mostly automated tests, plus some edge cases, maybe tests that are impossible to automate. And I know people say nothing is impossible, but try automating end-to-end tests across Jira, GitHub and a few other tools, relying basically on Selenium selectors. Good luck with that. It's not that simple; it will work once and then it won't, let's be honest. Sure, you can use data-test-id attributes, but data is exactly what can change as well. So it's not going to be that simple. But when you think about it: okay, so I can cover most of my code with automated tests, and then for the end-to-end testing across integrations I can have a dedicated QA engineer. That's the idea, the simulation. But reality is often disappointing. There's a gap, and the gap grows, because in order to maintain your test suite you need to keep writing new tests, and when you're focused on getting the product right and pushing it out to market in time, you might cut some corners, and some tests might be those corners. But it doesn't really end here.

Trouble with flaky tests



Because there's one more big trouble, which is called flaky tests. I'm sure most of you are aware of flaky tests, but a simple definition: a flaky test is a test that will pass once and then fail once; sometimes it passes, sometimes it fails, you never know. You know the Friday evening deploy (don't do that, by the way): you press "Merge", and the test that was passing for four days straight suddenly fails. Not a pleasant feeling. And you might think: all right, flaky tests, I have one, maybe two, they shouldn't make it hard to release. So let's pull up the statistics again, and now you'll see how big of a problem this is. Both Microsoft and Google estimate that over a third of failed builds are due to flaky tests. And to bring you even more numbers: at Google, they measured 115,000 test targets that both passed and failed at least once, and out of all those 115,000, 41 percent were flaky. Google has a ton of money; if they could simply buy less flakiness, they probably would. And at Microsoft, they measured a sample of 4,000 distinct builds from their new distributed build system, and out of those four thousand, 26 percent failed due to flakiness. So it's not one or two tests; the problem is real. It's not only inside your startup or your company; the big players also have this problem, and there's no clear answer. So it seems like you can't really automate everything, and there's always going to be some problem with an automated test: it will be flaky, or it won't be written yet, which is also very much a problem, the biggest one, let's be honest. And with end-to-end testing across integrated systems, something will change and the test will become flaky. It's hard. In general, it's hard.
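To make the definition above concrete, here is a minimal Python sketch of what flakiness looks like and how a simple rerun-based check can surface it. This is an illustrative toy, not anything Google or Microsoft actually run: the deterministic counter stands in for the timing, ordering, and network nondeterminism that causes flakiness in real suites.

```python
import itertools

def is_flaky(test_fn, runs=10):
    """Run a test repeatedly; mixed pass/fail outcomes indicate flakiness."""
    outcomes = set()
    for _ in range(runs):
        try:
            test_fn()
            outcomes.add("pass")
        except AssertionError:
            outcomes.add("fail")
    # A healthy test always yields the same outcome; a flaky one yields both.
    return len(outcomes) > 1

_counter = itertools.count()

def flaky_test():
    # Fails on every third run: a stand-in for a race condition or a
    # timing-dependent assertion.
    assert next(_counter) % 3 != 0

def stable_test():
    # Deterministic: passes on every run.
    assert 1 + 1 == 2
```

Rerunning is exactly how large CI systems detect flakiness in practice: a test that changes its verdict on an identical commit cannot be trusted as a release gate.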

Deployment pain



And because it's hard, DORA, the DevOps Research and Assessment group working side by side with Google, actually coined the term "deployment pain". Deployment pain is defined as a measure of the fear and anxiety that you, as an engineer, feel when you push code to production. So now it has its fancy name, and it has to be a real problem to get a fancy name. Imagine that it's 5 PM, you want to go home, and you want to deploy your stuff. Even if you have great test coverage, there's always this feeling of hesitation about merging at 5 PM. You'd rather merge tomorrow morning than this afternoon, wouldn't you? That's deployment pain, and we experience it daily. It's here. And unless you merge, then log into production and click through to ensure everything's right, and only then go home, you will feel this anxiety. So now that we have the problem defined, let's deal with some common myths about manual testing.

True time cost of testing



So the first myth, the most obvious one, at least for me, is that manual testing is slower than automated testing. I mean, it seems obvious: how long does your pipeline take at Buddy? Four minutes? Five? The testing probably runs in what, two minutes? Four? Ten? It's not four hours, as it can be with crowd testing. So what gives? But is this right, though? We're going to do a little bit of math here, but don't worry, there are some abbreviations and I will explain them. First things first, this is the formula for the true time cost of testing. The first factor, ET, is the execution time. We know automated is faster here, because no human can click as fast as your Selenium WebDriver; it's just not possible. But there are two other factors. The second one, ST, is the setup time: how difficult is your test environment to build? We're talking about setting up the testing environment, maybe loading some anonymized data from production to measure how things work at scale, maybe testing your database migrations; it's not simple stuff to test. So setting up the environment for the test run might be tricky. And the larger your test suite, the more likely you are to divide it into a few buckets of tests so you can run them in parallel, and then you pay the setup time again and again for each environment you're building. That costs time. Then we have the magic N, and N here is the number of times you're going to run this test. For example, if you push to production, I don't know, ten times daily, and this test is part of your test suite, you're going to run it ten times. Obvious. But how likely is the part of the product that this test covers to change? And the answer is: the bigger the organization and the bigger the engineering department, when suddenly you don't work in one team but in four or five teams, the higher this risk becomes.
And with time, the test is doomed to become flaky unless you spend some time dedicated to maintaining your test suite. So as you can see, the time cost of testing is not limited to how fast your runners go on CI. And, as you know, time equals money. We talked about the time, so now let's add some money into it. I think you're pretty much aware that developers are typically paid far more than QA engineers. So if you multiply that time by how much an hour of a developer costs versus how much an hour of a QA engineer costs, it becomes even more of a problem, because you're burning through your budget. And it's not only about the budget. Keep in mind that every hour, two hours, four hours your developer spends debugging a flaky test, they are not writing something that will get your product to the market. And then there's Murphy's law: the test may become flaky, I don't know, the day before the product demo to the investors. It's a real thing; Murphy's law is unfortunately very much real. So something you'd never think about, once you take the time and multiply it by the cost of that time, becomes expensive. But there are also secondary effects.
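To make the formula concrete, here is a minimal sketch in Python. The slide's exact formula isn't reproduced in the transcript, so both the structure (execution plus setup time per run, times the number of runs N, plus maintenance, all priced at an hourly rate) and every number below are illustrative assumptions, not Global App Testing's figures:

```python
def true_time_cost(execution_min, setup_min, runs, maintenance_hours, hourly_rate):
    """Rough lifetime money cost of one automated test.

    (ET + ST) * N captures the repeated runs; maintenance_hours covers
    debugging flakiness and updating selectors as the product changes.
    """
    runtime_hours = (execution_min + setup_min) * runs / 60
    return (runtime_hours + maintenance_hours) * hourly_rate

# Example (assumed numbers): a 2-minute test with 3 minutes of environment
# setup, run 10 times a day for a month (~200 runs), plus 4 hours of a
# developer's time debugging it when it turns flaky, at $80/hour.
cost = true_time_cost(execution_min=2, setup_min=3, runs=200,
                      maintenance_hours=4, hourly_rate=80)
```

Even with these modest assumptions the "fast" automated test costs well over a thousand dollars a month once setup repetition and maintenance are priced in, which is the point of the formula.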

Context switching



So there's this popular term, context switching, and by now it surprises no one that a meeting costs a developer more time than it costs a salesperson. But think about how much time it takes for a developer to switch context between working on a feature in one part of the code and being forced to abruptly start working on something else, for example debugging the flaky test that holds up a release. And then, once they're done with it, they switch context again, back to working on the product. There's a lot of time wasted, and also a lot of effort wasted, because let's be real, brainpower is also a thing, and after debugging a flaky test you're usually running very low on it.




So that's what I had in terms of theory; now it's demo time. Thank you very much, Maciek, magic touch. What you can see here are three sample pipelines I built on my account, under the very cryptic name of buddy-works-try. All of these pipelines will be shared with you after the Q&A, and the README contains all of the explanation. We're going to focus on the most feature-rich one, which is the GitHub integration testing. So let me switch. First things first, this Buddy account is connected to my repository called buddy-works-try. The code is nothing special, just some Ruby gem I had on my machine; it doesn't matter. What matters is that we have a pull request, a pull request for the Buddy webinar, and there is a change, a very simple one. So now I will show you how we can test this code on the PR using Buddy and Global App Testing. I'm going to go into my GitHub integration test, where I've defined actions; don't worry, they will be shared with you afterwards. The most important thing is that this action isn't really anything special: all it does is send a JSON request to our API. So, no biggie, pretty simple. And don't worry about the attributes of the JSON; they will all be provided in the documentation. Okay, so as you can see, I have this JSON file here, which I embedded inside the filesystem, and it's just very, very simple data. Nothing enlightening, just some fancy lorem ipsum from the testing world. All right, explanation behind us, so let's actually run the pipeline and hope that this thing called live testing actually works. Okay, so let's see the logs.
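For a sense of what such an action boils down to, here is a minimal Python sketch of building that JSON request. The endpoint URL, the auth header, and every payload field name below are placeholders, not Global App Testing's real API; the actual attributes are in their documentation:

```python
import json
import urllib.request

# Placeholder endpoint, NOT the real Global App Testing API URL.
API_URL = "https://api.example.com/v1/test-runs"

def build_test_run_request(token, payload):
    """Build the JSON POST request a pipeline action would send.

    The payload field names are illustrative; consult the vendor docs
    for the real attribute list.
    """
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# Hypothetical test-run description, echoing the demo's JSON file.
payload = {
    "test_name": "buddy-works-try PR check",
    "environment": "qa",
    "test_cases_file": "test_cases.json",
}
request = build_test_run_request("YOUR_API_TOKEN", payload)
```

The point of the demo stands regardless of the exact fields: the whole integration is one authenticated HTTP call from the CI pipeline, with the test cases supplied as data.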



And of course, this is the moment when we have to talk while looking at... oh, that's it. Yeah, Buddy is



pretty fast, isn't it?



This is the magic of cache.



Yeah, "prepare environment", zero seconds, right. Okay, so what changed in our pull request is that I got this message: "Your testing has started". As you can see, this is my avatar, mostly because it uses my GitHub access token; it doesn't matter. But what's cool is that when you go to the pull request list, there's going to be this yellow dot, and when I hover over it, we see that the testing has started. And of course, you can configure your pull requests to only be mergeable once all the checks are completed; the checks use the commit status API from GitHub. Okay, so the testing has started. Now it would be nice to actually have someone test it for us, so I'm going to magically switch to my tester account. I'm not logged out; fantastic. And there's our test: Buddy.works demo GitHub integration. I'm going to confirm. I'm going to use my desktop with an old OS X and, let's say, Safari for some reason, and we've started the test. First things first, I have some test cases here; these are all taken from the JSON file. I'm going to go through two of them just to illustrate my point. So let's say that now I'm a tester; I have 58 minutes to finish this test, and I should be able to do so in this time window. Okay, the first case: I get some company page and I expect it to load in under one second. Let's say it passed. I need to provide a proof of work, so I'm going to just drop in a sample attachment. Submit. Okay, next case. Here I'm not going to be that forgiving: although I could see the Slack button, when I click it, I'm going to say that the prompt does not open. I'm going to submit, and since overall I cannot proceed with integrating any Slack channel if the prompt doesn't open, I'm going to fail it with the result "couldn't see the prompt". Again, I need to provide an attachment, so I'm going to provide a simple PNG file from my computer and submit the test results. So let's say that during the testing, I noticed an issue.
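A quick aside on the commit status checks mentioned above: GitHub's commit status API (POST /repos/{owner}/{repo}/statuses/{sha}) accepts one of four states. Here is a minimal sketch of building such a status body; the context name is an illustrative placeholder, not the one Global App Testing actually reports:

```python
import json

# The four states GitHub's commit status API accepts.
VALID_STATES = {"error", "failure", "pending", "success"}

def commit_status_payload(state, description, context="crowd-testing-check"):
    """Build the JSON body for a commit status update: the yellow dot
    ('pending', testing has started), the red cross ('failure', testers
    found issues), or the green tick ('success')."""
    if state not in VALID_STATES:
        raise ValueError(f"invalid state: {state}")
    return json.dumps({
        "state": state,
        "description": description,
        "context": context,
    })

body = commit_status_payload("pending", "Your testing has started")
```

Because branch protection rules can require a given context to be "success" before merging, this one small API is enough to make crowd test results gate a pull request.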
It's an issue not really tied to any of these test cases, just something I noticed while browsing through the application, doing exploratory testing, if you will. So I'm going to add a new issue. I'm going to copy-paste my sample data, which I prepared beforehand. Let's say we're going to go into payments, scary stuff; these are going to be the steps to reproduce. And let's say the actual result is an error message stating that the vendor account doesn't have a complete email, while the expected result is that I should be able to send my money; I own my money, even if it's on your platform. And let's say it happens all the time, because I could never successfully transfer my money. Again, I'm going to add an attachment. It could very well be a video, a crash log, a .txt file, your choice; I'm just using a photo because it's simple. And then I'm going to add my issue. Okay, so what happened on GitHub right now is that two issues have been connected to my pull request, and I can already see the preview. So let's see the issue first. Okay, there's the summary, it's impossible to make transactions, sure; there's the actual result and the expected result; I can see the screenshot on our platform; I see the reproduction steps; and most importantly, I can see when this was reported and in which test environment. One more tiny detail that you might not have noticed is the label called "high". What "high" means is that we took this issue and ran it through our automated severity classifier. We have a machine learning model trained to classify the severity of issues, trained on hundreds of thousands of issues we've already reported. This one probably received the label "high" because I put money, payment and transaction inside the issue description; that's why we got "high", probably. And when you go back, there's also the test case report.
For my failed test case, when I go into the details, it's pretty much the same thing: summary, actual result, all kinds of stuff. So now what you have is a pull request with two issues connected to it. The status is going to stay pending, because I believe there's a one-hour window on the QA environment we're in, and on production it's more like a four-hour window, for testers to join and report results. We're not going to wait that long for the final outcome; I'm just going to show the final outcome based on some old pull requests I made. In this case, I reported some issues, so I know the pull request won't pass, and it will display something like this: a red cross with the message that testers have found issues in your app. And in my opinion at least, you should consider disabling the merge in such a case and work on resolving the bug. If everything went well and there were no bugs found, because that's also a possible outcome, you will see this beautiful green tick with a status saying the test has finished without any issues. So what we did right now, from inside GitHub and Buddy, is delegate the testing of our pull request to the crowd, in this humble case played by yours truly, and receive the results from the crowd without leaving Buddy or GitHub. That's how crowd testing, just like the cloud you're actually used to, can be embedded inside your tooling, and it can be totally transparent. The people sitting somewhere in the world testing your application become an implementation detail; most of your company doesn't even know they're there. So yeah, that's it. Maciek, I think we can go back to the presentation. And of course, after demo time comes a thank you, because that's everything I have prepared for you today. So thank you very much, everyone.
I hope you learned something and that it was at least a little bit interesting for you. And yeah, thanks for the questions.



Of course. Here you go.

Try Global App Testing



And behind me, I'm going to leave up this one sales slide, just in case you want to scan this beautiful QR code.



So before we move on, let's give people a moment to scan it or copy it; I think it will soon be available in the comments as well. In the meantime: this is something really interesting, because automated testing is often, let's call it, painted as superior, with human testing supposedly useful only for edge cases and things like that. And you mentioned two very interesting situations that automated testing just can't handle.

Actual cost of crowd testing



First of all, there was the fact of finding issues unrelated to the test suite itself. I mean, if we have, let's say, a unit test, it will do exactly what it has to do, and that's it. We as humans are capable of finding things we weren't looking for; we very often do things we shouldn't do, and thanks to our curiosity, we are sometimes able to find those things. And there was one more aspect that was very interesting, at least for me: you mentioned that because those testers are spread all over the world, they are using different devices. There's device diversity, but also, very often, the language they use means the layout is different: left-to-right versus right-to-left. We as developers and designers are rooted in a certain cultural area, let's call it that, and we just don't know, don't care. Thanks to the fact that someone from an Arabic-speaking country will check the Arabic version of the website, which is right-to-left, they will see that, okay, maybe at first sight it looks OK-ish, but some things aren't as clear as they should be. So this is the very interesting part about this manual testing, and there's also the financial aspect. So, maybe, because we are both from Poland, the question is: okay, let's imagine I am a small startup from Łódź, right? I want to test my small Facebook-clone startup that I've just started. So how much?



Sure. Okay. So you have one application to test; I assume it's going to be a medium-sized application, a usual-sized application. Now the question is, how often do you release? Let's say that you release weekly, because you're a small startup and you have these weekly demos. So if you have one medium-sized app and you release weekly, it's going to cost you around $3,000, maybe 3-4k, something like that. Monthly? Monthly.



That's a developer.



It's not a developer; a developer usually costs a lot more.



3k? I mean, there is a chance of finding one. Sure. Like I said, I'm from Łódź.



You will find a person. Alright.



I will find one, yeah, but still. So that's not much, because in theory I have access to, as you mentioned, 60,000 testers, right? Yep, right under your Enter button. And what are the default values? How many testers will test my application?



By default, it's going to be about five testers jumping in to test your application, for what we call express testing. The whole pricing is based on a credit model, and as usual with credits: the more tests you run and the more testers join, the higher the price, but also, with credit bundles, the more you buy, the cheaper everything gets, and so on. Now, the number of tests is only one factor; another one is how many applications you have. Your small startup might have just the one Facebook-clone app, but big brands, like, let's say, the real Facebook, have a lot of smaller applications, so for them the cost might be bigger. Likewise, if you release five times a day, if you're in this DevOps elite circle that releases multiple times a day, then you're going to need testing five times a day, so you're going to test more. And you might care only about the test suite, or you might also care about exploratory testing, about reporting issues that aren't directly connected to the test cases. It's up to you: you can tell us to stick to the test cases, or you can tell us, no, go around, explore a bit, see what you can find. We also get some weird requirements, like testing VR devices or augmented reality stuff; it gets cool pretty fast. And even testers from only one country, that's also achievable; if you want only one device, sure. Everything's manageable, the sky's the limit. But for a simple SaaS from Łódź, given, you know, some Polish discounts, yeah, you're going to be fine. Okay.



Okay, so yeah, this is the thing you mentioned, the myth about pricing; it's not so obvious. So, a slightly different case. Again, I am this small startup from Łódź, I'm still creating my Facebook clone, and I got some money, and I'm thinking: should I invest it in a new developer and push through automated testing? Or maybe I should start with manual testing, so that developers can focus only on adding new features, and the testing is offloaded to, for example, Global App Testing? Do you recommend such an approach?



Well, you know, first things first: hiring a developer only to drive testing adoption is going to be a tough recruitment, because developers don't like writing tests.



I think I only know two of them who do.



Alright, you know, there's an exception to every rule.



I mean, they created the test framework. Yeah. So, in short, you're saying that it's a much better approach, especially at the beginning, when you haven't found your market yet. Can you give an example?



Yeah. So we had a customer that started testing with us, and we helped them get to market a lot faster than they would have otherwise if they'd been dedicating resources to writing tests. Once they got to market and found product-market fit, they started earning good money and hired some engineers who, together with us, started covering the application with automated tests. And slowly, slowly, the automated coverage grew on their side, but it was only possible because we had their back while they were doing it. And sure, they got to the point where most of their application was covered by automated tests, but there were still some weird edge cases, and we still handle those for them. And even if you're 100% certain your test suite works, it won't break, there won't be a single flaky test, a pure miracle, we also offer some value-add services. Because when you think about our testers, it's 60,000 testers worldwide, but it's also 60,000 people worldwide. That enables some interesting stuff: not only functional testing, but user experience testing. Access to 60,000 people in different countries on different devices is great for user experience testing. And lately we've started this thing where we sit down with the company owner and some QA engineers from their side and help them create a test suite, because not every company has one. So we can even help you create one.



Yeah, we had one question. I see that Adam from your team already answered it, but I'll still read it out, because maybe not everyone saw it.




Can you contact the tester who reported an issue if you don't understand something in the issue, so you can get a hold of them? Yes. And we work hard to ensure the feedback is clear the first time; for example, testers submit evidence for the different steps, step by step, with pictures. So this is okay. So, this was the last question. I want to thank you again, it was a very interesting webinar. I really learned a lot about what is kind of a new world for me, because I'm also one of those people who say "let's automate everything, automation for life" and things like that.



I guess, to be honest, it's automate everything versus build versus buy. And when you start counting the time costs, you know, things look different.



They're not contenders, they can go hand in hand.



Exactly. Exactly. That's true. You showed so many examples of how they can go hand in hand: we have some automated tests, and we add some manual tests for various reasons, either we don't have the budget, or we don't have the time right now, or we want to go more global and we care more about the user experience. And it's impossible to automate experience.



Actually, if you want to hear some more about learning from experience, there are some nice theories in the Leading Quality book that our co-founders wrote. You can get a free copy by scanning this QR code that our marketing team prepared for you. I know, usually it's like, "Oh, a webinar, of course you've got to download an e-book, because e-books convert well," and so on. Let me tell you, this book was actually an Amazon bestseller. So it's not just an e-book, this is a legitimate book. And there are some very interesting scenarios about how, if it wasn't for crowd testing, if it wasn't for this human being sitting on the other side of the world giving you cultural feedback, your business wouldn't scale. So go through it. You've got to trust me on this one.