Automated SLO-based testing can transform your delivery pipeline

By Andreas Grabner, DevOps Activist, Dynatrace.

More and more organisations are embracing SLIs (Service Level Indicators) and SLOs (Service Level Objectives) as part of their testing strategy and delivery pipeline. Not only does automating SLO-based testing make the process easier, it enables software testers to deliver feedback faster and highlight issues earlier in the delivery cycle. Furthermore, it arguably makes for more structured, higher-quality testing.

What are SLIs and SLOs?

An SLI refers to a metric or indicator that is of interest to the tester, such as response time or error rate. Meanwhile, an SLO is the criterion that measures how good or bad an application is, based on the indicators specified.
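To make the distinction concrete, here is a minimal sketch in which the metric, threshold and values are invented purely for illustration: the SLI is a measured 95th-percentile response time, and the SLO is the rule that it must stay below 500 ms.

```python
# A minimal sketch of the SLI/SLO relationship.
# The SLI is the measurement; the SLO is the pass/fail rule applied to it.

def evaluate_slo(sli_value_ms: float, threshold_ms: float = 500.0) -> bool:
    """Return True if the SLI meets the SLO (p95 response time below the threshold)."""
    return sli_value_ms < threshold_ms

# Hypothetical p95 response times measured during two test runs
assert evaluate_slo(420.0) is True    # build meets the objective
assert evaluate_slo(630.0) is False   # build violates the objective
```

The point is simply that the SLI on its own carries no judgement; the SLO is what turns a raw measurement into a good/bad verdict that a pipeline can act on.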

Why use these metrics?

As well as making the testing process more efficient – helping to control the cost of identifying and fixing an issue at a later stage in development – this approach can eradicate a common problem within the classic delivery pipeline for an application build.

This problem concerns the deployment and analysis of the tests that are conducted at different stages throughout a project. Typically, there is a stage of manual approval which involves someone looking at the testing results and manually comparing the information to determine if there is a problem and/or whether the build is better or not. This time-consuming task is made even more complicated when it involves unstructured data.

While organisations are monitoring and collecting more data nowadays, it is often unstructured and lacks context, making it difficult for software testers to analyse and identify the source of the data. Did it come from a test and if so, which test and at what stage? In turn, this makes it very hard to determine whether the build is performing as it should be or has improved.

This is where the benefit of SLIs and SLOs comes in. By using SLI- and SLO-based quality gates (milestones located at phases throughout the process, each dependent on the outcome of the previous phase), it is possible to automate this approval process.

The software can also incorporate new objectives or adapt current ones throughout the life cycle of a project. If a change is intentional, you can update the SLOs within your build. Alternatively, if it is unintentional – for example, you did not expect a particular outcome – it will make you aware of it. You can also track where the change happened, because the updated SLOs are automatically applied to the testing of subsequent builds.
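One way such unintentional changes can surface is by evaluating each build relative to the previous one, not only against an absolute threshold. A minimal sketch, assuming a 10% tolerance chosen purely for illustration:

```python
def regressed(current: float, previous: float, tolerance: float = 0.10) -> bool:
    """Flag a metric that has degraded by more than `tolerance` relative to the
    previous build. Assumes lower values are better (e.g. response time)."""
    return current > previous * (1.0 + tolerance)

# Hypothetical p95 response times (ms) for two consecutive builds
assert not regressed(current=440.0, previous=420.0)  # within 10%: acceptable drift
assert regressed(current=510.0, previous=420.0)      # more than 10% worse: flag it
```

Evaluating relative to the previous result is what lets the gate catch a regression even when the absolute SLO threshold has not yet been breached.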


What does this approach involve?

When you run functional, performance or API tests, you need to attach additional contextual information so that the monitoring tool knows that these results are coming from a particular test.

Using SLI- and SLO-based quality gates, you can specify the metrics that are important to you – what you are looking for and which thresholds tell you that a build is good or bad. Open source projects such as Keptn (an event-driven control plane that can orchestrate the execution of any type of test and evaluate SLOs against multiple SLI data sources as a self-service) not only automate this process, they also visualise the results.

By comparing the metrics against the thresholds you have set, it can show which of these elements are green (good), yellow (needs attention) and red (bad). It can also produce an overall score for the build, showing how good or bad it is. This visualisation can highlight the problematic areas within the architecture of your application, showing where they are located – the front end, back end or both.
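Such an evaluation can be sketched as follows. The thresholds, metric names and scoring weights here are invented for the example; a tool like Keptn reads objectives from a declarative SLO file rather than from hard-coded logic:

```python
def grade(value: float, pass_limit: float, warn_limit: float) -> str:
    """Classify one SLI against its SLO thresholds (lower values are better)."""
    if value <= pass_limit:
        return "green"
    if value <= warn_limit:
        return "yellow"
    return "red"

def overall_score(results: dict) -> float:
    """Score the build: green earns full credit, yellow half, red none."""
    credit = {"green": 1.0, "yellow": 0.5, "red": 0.0}
    return 100.0 * sum(credit[g] for g in results.values()) / len(results)

# Hypothetical SLIs for one build
results = {
    "p95_response_time": grade(480, pass_limit=500, warn_limit=800),  # green
    "error_rate_pct":    grade(2.5, pass_limit=1.0, warn_limit=3.0),  # yellow
    "db_query_time":     grade(40,  pass_limit=50,  warn_limit=100),  # green
}
```

The per-metric colours show where the problem sits (front end, back end or both), while the single aggregate score gives the gate one number to pass or fail the build on.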

How to make the method effective?

There are a lot of factors to consider when incorporating this approach within your delivery pipeline. When do we run it? How much hardware is needed and who will run the infrastructure? How do we enable different workloads? Where do we stream the metrics, and how do we analyse the results?

Thankfully, there are a lot of great do-it-yourself approaches which involve open source tools, as well as cloud-based solutions. The key thing to remember is that it will take time to get it right and it is important to test often.

It could be argued that deploying performance testing as early as possible within the process should be more of a priority for organisations – usually, it tends to be an afterthought. With the help of SLIs and SLOs, it is possible to embed it in the approach from the start and arguably increase the quality of your deliverables.

By looking at indicators early on, this approach enables you to work towards set goals, which can be validated with every single build, reinforcing a quality mindset and ultimately streamlining the way you test.

If you are interested in software testing and want to engage with like-minded individuals from across the globe, check out this year's Quest for Quality conference. For more information and to reserve a spot, click here!

Have amazing topics worth sharing? Drop them on our Q4Q Knowledge hub, click here!

…and don’t be left out!

The Science of Software Testing

By Thomas Haver, Test Automation Architect at AEP.

Software testing is similar to science, and here's what we can learn from the subject.

There is a marked difference between how people view science and how they view software testing – with many placing science on a pedestal – despite the fact that the two are quite similar, especially when you consider the methodology involved and the value they deliver.

Not only do both involve adapting quickly to new systems or environments, they both require a critical perspective which relies on looking at issues from multiple angles and on taking a step-by-step approach when it comes to identifying and addressing problems.

So, how exactly are science and software testing similar?

James Bach, software tester and author, said software testing relates to “questioning a product in order to evaluate it”, while software engineer and professor Cem Kaner described it as the “empirical technical investigation of the product done on behalf of stakeholders, intended to reveal quality-related information of the kind that they seek”.

Elisabeth Hendrickson, agile consultant and trainer, took this notion one step further, stating that software testing involves “designing an experiment to gather empirical evidence to answer a question about risk”.

Throughout these descriptions, two common themes arise in relation to software testing: observing information and conducting experiments. Similarly, science is defined as the intellectual and practical activity encompassing the study of the physical and natural world through observation and experiment.

Because the foundations of software testing and science are similar, it is not surprising that the skills and personality traits required of individuals within these industries also overlap. People working in these fields need to be analytical and technical in order to fulfil the tasks associated with such roles.

Furthermore, both testers and scientists need to be naturally inquisitive and passionate. Moreover, in order to then be able to draw conclusions from experiments and findings, and transform these into actionable steps, testers and scientists must be reflective and communicative.

What processes are involved?

Both software testing and science involve the close inspection and solution of problems. Within software testing, this means not only thinking about the different ways in which users will interact with the given product but also finding unique ways in which the product can fail. Meanwhile, the scientific mindset is underpinned by a pursuit of knowledge which aims to illuminate a situation and obtain information.

Therefore, the process involved in both can be broken down into three stages: collecting empirical evidence via observation; proposing a hypothesis and making predictions using that evidence; and running tests or experiments to refute or corroborate the hypothesis.

Through constant learning and continual assessment, testers can thus observe the behaviour of the product, identify risks, predict failures and test for them. Similarly, through constant learning and continual assessment, scientists can review how the world works, make new discoveries and identify solutions to problems.

Can software testers add a more scientific structure to their methodology?

There are a couple of options in this regard, the first of which is peer review, whereby the work performed is reviewed by peers to ensure it meets specific criteria and standards. Having a peer check the test strategy, or review the tests designed and executed by others, adds rigour and structure to the testing process.

Another methodology that could offer an added layer of analysis is deduction and/or induction. The former involves drawing specific conclusions from general knowledge, while the latter draws general conclusions from specific knowledge.

Hypothesis Driven Development (HDD) is another concept worth considering within the world of software testing when thinking about the development of new ideas, products and services. Within HDD, one makes observations about the customer or their usage of the given product, from which a hypothesis is formulated. An experiment is then designed to test this hypothesis, with empirical indicators confirming or refuting whether it has been successful, after which the product or feature can be modified.

During the testing process, it is also important for testers to balance the expectations of both producers and customers. This means keeping all aspects in mind and trying to close the gap between the expectations of the producer, including the timing and budget of a project, and the expectations of the customer, including whether a product is fit-for-purpose and how enjoyable the user experience is.

In conclusion…

Science and software testing share parallels in premise and process, both relying on the concepts of observation and experimentation. Of course, there are ways software testing can be enhanced, with practices such as peer review and HDD. By adopting the more rigorous, scientific approach, the quality of software testing can be improved, alongside how such work and its value is viewed on a broader scale.

To learn more about software testing and related challenges, get early bird tickets to attend Quest for Quality 2020 Online Conference today. Find out more here!

Don’t be left out!