Quality, and the striving for it, is omnipresent, whether or not it is the explicit focus of an organization. Delivering high quality is one of the key aims when creating a product, and undermining it is a recipe for disaster. It is reported that in Europe alone, around 150 billion euros are wasted annually on bad software quality.
Here at Thomann, we decided to take the quality engineering challenge head-on this year, starting with a blueprint that focuses on continuous growth while targeting our quality vision. I, Basal, began by understanding the needs and processes of each team, bringing ideas to the table, and discussing them with the engineering leads. The idea is to put quality on a par with our development and integration process, with test automation at the center, gradually growing into a cross-team quality architecture with a standardized strategy and implementation.
The imperative need for test automation, or: ROI
Hyperautomation was listed as one of the top technology trends for this year by Gartner. We are heading toward an era in which everything is quick and autonomous. Software delivery timeframes will need to speed up to keep pace with this fast-moving ecosystem, but not at the expense of quality. This gives testing and quality assurance a significant seat at the table: test automation will take precedence to meet the demand for quicker time to market and superior quality.
For many stakeholders, a qualitative justification is enough to support investment in test automation: a rising number of manual regression tests every sprint eventually strangles the team's capacity to deliver. A quantitative justification helps persuade the rest, particularly those outside the technical domain.
Numerous case studies and research examples highlight the benefits of test automation for regression testing, and they tend to follow similar patterns. For instance, the case study "Embedded Software Testing in Research Environment" presents a cost model for test automation that illustrates the potential advantages of this approach. This model suggests that, while the initial investment in test automation may be higher than that of manual testing, the ongoing benefits of automation will eventually lead to a net gain for the organization.
The model uses the following variables, with the subscripts a and m denoting automated and manual testing, respectively:

V - expenditure for test specification and implementation
D - expenditure for a single test execution
n - number of test executions

The break-even point can be calculated by comparing the cost of automated testing (Aa = Va + n * Da) to the cost of manual testing (Am = Vm + n * Dm):

E(n) = Aa / Am = (Va + n * Da) / (Vm + n * Dm)

Automation pays off once E(n) drops below 1, i.e. after n > (Va - Vm) / (Dm - Da) executions.
As this model illustrates, the initial cost of implementing test automation (Va) may be significantly higher than that of manual test execution (Vm). After the break-even point, however, the ongoing benefits of automation far outweigh the initial investment, resulting in a net gain for the organization.
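To make the break-even point tangible, here is a tiny sketch that plugs illustrative numbers into the model; all cost figures are made up for illustration, not measured at Thomann:

```typescript
// Break-even sketch for the cost model above (all figures hypothetical).
// V = specification/implementation cost, D = cost per execution.
const Va = 4000; // building the automated suite is expensive up front
const Vm = 500;  // writing a manual test plan is cheap
const Da = 10;   // an automated run costs next to nothing
const Dm = 400;  // a manual regression pass eats tester hours

// E(n) = (Va + n*Da) / (Vm + n*Dm); automation pays off once E(n) < 1.
const E = (n: number): number => (Va + n * Da) / (Vm + n * Dm);

// Solving Va + n*Da < Vm + n*Dm gives n > (Va - Vm) / (Dm - Da).
const breakEven = (Va - Vm) / (Dm - Da);
console.log(`break-even after ${Math.ceil(breakEven)} executions`); // 9
console.log(E(8).toFixed(2), E(10).toFixed(2)); // 1.10 0.91
```

With these numbers, automation already pays for itself after nine regression runs, which a team easily reaches within a few sprints.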
In the long term, the lack of testing measures, or reliance on manual testing alone, can lead to costly production errors. Furthermore, as the code volume grows, a manual testing strategy becomes increasingly inefficient, with testing effort scaling linearly. In contrast, maintaining code supported by a comprehensive automated test suite is more cost-effective in the long run. Implementing automated testing practices ensures the lasting quality and stability of the codebase, mitigating the risk of costly production bugs and improving the overall efficiency of the development process.
So, where and how did I initiate the journey of Quality Engineering at Thomann?
That’s right! I collected data!
After several meetings with our beloved engineering teams, I had enough data to understand the present functional, performance-related, and architectural quality, and how to map it. Metrics and statistics are essential for quality: they are a vital indicator of whether your quality engineering program is taking hold and succeeding. But which measurements are we referring to?
To help us keep track of quality improvement as it happens, we use metrics like:
- The cost of quality
- Defect metrics
- Coverage
- Performance and release metrics
- Productivity, efficiency, and effectiveness metrics
Data and analytics should ultimately aid quality engineering folks in anticipating risk, reducing flaws and technological debt, and enhancing agility and time-to-market.
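As a small illustration of what tracking defect metrics can look like in practice, here is a minimal sketch; the field names and figures are hypothetical, not our actual reporting schema:

```typescript
// Minimal sketch of two common defect metrics (hypothetical names/values).

interface DefectStats {
  foundInTesting: number;    // defects caught before release
  foundInProduction: number; // defects that escaped to production
  kloc: number;              // thousands of lines of code under test
}

// Defect density: defects per 1,000 lines of code.
const defectDensity = (s: DefectStats): number =>
  (s.foundInTesting + s.foundInProduction) / s.kloc;

// Defect escape rate: share of defects that slipped past QA.
const defectEscapeRate = (s: DefectStats): number =>
  s.foundInProduction / (s.foundInTesting + s.foundInProduction);

const stats: DefectStats = { foundInTesting: 42, foundInProduction: 3, kloc: 120 };
console.log(defectDensity(stats).toFixed(2));    // 0.38 defects per KLOC
console.log(defectEscapeRate(stats).toFixed(2)); // 0.07, i.e. 7% escaped
```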
Furthermore, to map the collected data to the quality metrics mentioned above, and to build up testing from the base, we ask the following commonly pondered questions and try to answer them as best we can.
The first and very simple question:
Where to start?
But this question is just the tip of the iceberg; the more pressing thought is how to start with respect to what is already there. Was the application developed with no testing structure, say, a test pyramid? Are the team and environment adept with the concepts of behavior-driven and test-driven development? How do you convince your devs to abide by an extra layer of quality gateway? What tools do you need, and what do you want to work on as a QE?
These questions can be overwhelming, even for an established quality engineer. Here at Thomann, system-level end-to-end tests were still missing when I started my evaluation and began contributing. Naturally, this impacted the test coverage in a significant way. The structure of the existing unit and integration tests also needed to be refined. Furthermore, with four distinct development teams within Thomann's engineering, it was challenging to devise a plan to ramp up the quality of all of them simultaneously.
Kickstart into quality!
Writing about my journey with Thomann: we started big! That was the tagline - go big or go home! Aligning with this, I created a workshop to help the existing developers of all four teams set up a quality project. I also pushed to have a SPOC from each team to uphold all the quality measures I have accumulated in my testing career. But then we were hit by reality and had to accept that changing everything all at once is ultimately a bad idea. So the new mantra is: start small. Doing so lets you run an experiment at a small scale without fearing financial impact or revolutionizing the entire organization. With a measured and incremental approach, it is possible to improve the test coverage and testability of the application in a practical and financially sound manner.
Here's my game plan:
1. Keep it core. Keep it functional!
In prioritizing the implementation of automated testing measures, it is essential to focus on the core functionalities of the application that have the greatest impact on the organization's financial health. These key processes should be thoroughly tested, with as much attention to detail as possible, and incorporated into a suite of smoke tests. These tests can be run before and after a release to ensure that no critical processes have been compromised. By placing a high level of emphasis on the testing of core functionalities and process flows, it is possible to mitigate the risk of costly disruptions and ensure the overall stability and reliability of the application.
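To make this concrete, here is a minimal sketch of such a smoke test, written with Playwright as an example runner; the URLs, selectors, and product names are hypothetical, not our actual storefront:

```typescript
// smoke/checkout.spec.ts - a minimal smoke test for a core purchase flow.
// Hypothetical URLs and selectors; assumes @playwright/test is installed.
import { test, expect } from '@playwright/test';

test('@smoke a customer can add a product to the cart', async ({ page }) => {
  // The storefront must load at all - the most basic health signal.
  await page.goto('https://shop.example.com');
  await expect(page).toHaveTitle(/Shop/);

  // Search for a product and open its detail page.
  await page.getByPlaceholder('Search').fill('guitar strings');
  await page.getByRole('button', { name: 'Search' }).click();
  await page.getByRole('link', { name: 'Guitar Strings' }).first().click();

  // Add to cart - the bread-and-butter flow that must never break.
  await page.getByRole('button', { name: 'Add to cart' }).click();
  await expect(page.getByTestId('cart-count')).toHaveText('1');
});
```

Tagging the test title with @smoke lets the runner pick the package out later, which becomes useful for the quality gate in step 3.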
2. Target the system level (especially if there are absolutely no tests)
When working with a preexisting application, it is often more feasible to begin implementing automated testing at the higher levels of the testing pyramid. This may involve creating acceptance or end-to-end tests that verify the functionality of critical features. While unit tests are a valuable component of a comprehensive testing strategy, they may require significant refactoring to be implemented effectively. As such, it is often more practical to start with higher-level tests, which can be implemented more efficiently and provide immediate value in ensuring the overall integrity of the application.
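As a sketch of how little internal refactoring such a high-level test requires, here is a system-level check against a running environment, using Node's built-in test runner; the endpoint and response shape are hypothetical:

```typescript
// system/search-api.test.ts - a system-level acceptance test.
// Runs against a deployed environment; hypothetical endpoint and payload.
import { test } from 'node:test';
import assert from 'node:assert/strict';

const BASE_URL = process.env.BASE_URL ?? 'https://shop.example.com';

test('search API returns results for a known product', async () => {
  const res = await fetch(`${BASE_URL}/api/search?q=guitar`);
  assert.equal(res.status, 200);

  const body = await res.json() as { results: Array<{ name: string }> };
  // No knowledge of the application's internals is required - we only
  // assert on the observable behavior of the running system.
  assert.ok(body.results.length > 0, 'expected at least one search result');
});
```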
3. Define your Quality gates
Slow and steady wins the testing race! But… we do need a clean start. My approach here was to create a sanity-like package, or smoke test package, out of the few but crucial tests we created for our core functionalities in the first step. As a rule of thumb, this package is executed every time code is integrated. If it is green, you have a go; otherwise, you fix the failing tests first, as we cannot afford our bread-and-butter functionalities to be messy.
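One lightweight way to wire such a gate, assuming Playwright and the @smoke tag from the earlier sketch, is a dedicated project in the runner configuration that CI executes on every integration; this is a sketch, not our exact setup:

```typescript
// playwright.config.ts - a "smoke" project that CI can run on every
// integration. Sketch only: a real setup would add reporters, retries,
// and environment handling.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    {
      // The quality gate: only tests titled with @smoke run here.
      name: 'smoke',
      grep: /@smoke/,
    },
    {
      // The full suite, run on a schedule or before releases.
      name: 'full',
    },
  ],
});
```

On each integration, CI runs `npx playwright test --project=smoke` and blocks the merge while the package is red.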
4. Focus on reviews and regression from the get-go
I am a massive believer in automating as much as you can when it comes to testing. Hence, my test cases are automated on top of the most essential and proven coding patterns, for example, the page-object model. It is equally important to pay attention to your PRs; at Thomann, we have a four-eyes policy that extends to tests. Regression is such an essential term in the world of software quality that it pops up every day! Therefore, a regression suite must be built from the very beginning of testing an application. This regression suite is executed each time before a PR is merged, validating the work of the QA.
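Here is a minimal sketch of the page-object pattern, again in Playwright terms; the page, selectors, and flows are hypothetical:

```typescript
// pages/cart.page.ts - a page object encapsulating cart interactions,
// so tests read as intent and selector changes stay in one place.
import { type Page, type Locator, expect } from '@playwright/test';

export class CartPage {
  private readonly page: Page;
  private readonly checkoutButton: Locator;
  private readonly itemRows: Locator;

  constructor(page: Page) {
    this.page = page;
    // Hypothetical selectors - adjust to the real markup.
    this.checkoutButton = page.getByRole('button', { name: 'Checkout' });
    this.itemRows = page.getByTestId('cart-item');
  }

  async open(): Promise<void> {
    await this.page.goto('/cart');
  }

  async expectItemCount(count: number): Promise<void> {
    await expect(this.itemRows).toHaveCount(count);
  }

  async startCheckout(): Promise<void> {
    await this.checkoutButton.click();
  }
}
```

A test then reads as a sequence of domain actions such as cart.open() and cart.startCheckout() rather than raw selectors, which is what keeps a growing regression suite reviewable and maintainable.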
5. BDD Approach
Moreover, I like my tests to be easily comprehensible, not only to developers and testers but also to any stakeholder. To achieve this, I introduced a BDD approach, which also aids in documenting the tests, serving multiple purposes.
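As an illustration of the readability BDD buys, here is a sketch of cucumber-js-style step definitions; the scenario and the in-memory cart are hypothetical stand-ins for real application steps:

```typescript
// steps/cart.steps.ts - step definitions for a Gherkin scenario such as:
//
//   Scenario: Adding a product to the cart
//     Given an empty cart
//     When the customer adds "guitar strings" to the cart
//     Then the cart contains 1 item
//
// Assumes @cucumber/cucumber is installed; real World/page wiring omitted.
import { Given, When, Then } from '@cucumber/cucumber';
import assert from 'node:assert/strict';

const cart: string[] = [];

Given('an empty cart', function () {
  cart.length = 0;
});

When('the customer adds {string} to the cart', function (product: string) {
  cart.push(product);
});

Then('the cart contains {int} item', function (count: number) {
  assert.equal(cart.length, count);
});
```

The feature file doubles as living documentation: any stakeholder can read the scenario without touching the code underneath.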
6. Clean Code Check
Once you reach a viable test suite with high test coverage and a significant number of tests, it is time to rethink a few things to keep your maintenance easy yet intelligent. Especially if you have a lot of contributors, the tendency for a test automation project to go haywire is as high as for any code-based project. During this phase, you can stick to the old and tested formulae of clean coding to reset your project.
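A typical clean-up at this stage is pulling repeated setup out of individual tests into a shared fixture. A Playwright-flavored sketch, reusing the hypothetical CartPage from above:

```typescript
// fixtures/shop.fixture.ts - shared setup extracted from individual tests,
// one of the clean-code moves that keeps a large suite maintainable.
import { test as base } from '@playwright/test';
import { CartPage } from '../pages/cart.page';

// Every test that needs the cart gets a ready-made page object instead of
// repeating the same construction and navigation boilerplate.
export const test = base.extend<{ cart: CartPage }>({
  cart: async ({ page }, use) => {
    const cart = new CartPage(page);
    await cart.open();
    await use(cart);
  },
});

export { expect } from '@playwright/test';
```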
Conclusion
Testing is a continuously evolving process; many proven, industry-established principles can be followed to get started, for example, focusing on test coverage and maintaining a defect triage. Still, one also needs to look closely at the system under test. Working on an application like Thomann's has given me a platform to put the most dependable techniques of the quality domain in place. As we go forward, there is also the opportunity to try out innovative approaches that accord with the application, and there has been so much happening in the quality world recently that invites exactly that. So, let me get this out: I do not want to conclude; we are only getting started.
Nuff said — Quality engineering or die trying!
P.S.: If you have an insatiable thirst for knowledge about quality engineering and just can't get enough, then keep on reading!