Product 05 May 2014

Scenario Testing: Four Tips on How to Manage Effectively

Robin

This article was originally written for Software Testing Professionals.

Testing software has always been complex. The minute you add more than a handful of features to any system, theoretical complexity skyrockets.

All the buttons to click, links to follow, client browser versions, client bandwidth and what have you soon add up to a near-infinite number of things you’d need to test. At the same time, actual users will only engage with a small fraction of those features.

But how does one bring some order to this complexity when, barring a few exceptions, 100% test coverage is basically impossible to achieve?

One approach is to turn to scenario testing. This is where you use a real or hypothetical story that describes how a user actually uses the application. It may sound very similar to a test case, but a test case typically covers a single step, whereas a scenario test covers a number of interconnected steps.

A good scenario is one that is based on a credible story of a user performing a complex task. The scenario should be:

  • Critical to all stakeholders (e.g. sales, marketing, management, customer support)
  • Obviously required to work as expected
  • Easy to evaluate

Scenario testing is primarily thought of as a tool to facilitate feature testing, but performance is often part of the picture.

Since application performance is very often a non-functional requirement, satisfactory performance is often assumed and lack thereof is considered a bug - even if it was never mentioned in the requirements.

Therefore, scenario testing can be used to uncover important performance bottlenecks as well as test features.

Consider this test case for an e-commerce site.

Test case #1: Add valid coupon worth X%

Steps:

  1. Add one or more products to the cart
  2. Go to the checkout page
  3. Add coupon code ‘TEST123’ and click ‘Add coupon’

Expected result:

  • Page refreshes. Message "Coupon successfully applied" is visible
  • Discount clearly indicated in the cart summary section
  • Cart total is reduced by X%
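As a rough illustration, a test case like this can be automated end to end. Below is a minimal sketch in Python using Playwright’s sync API; the URL, the CSS selectors and the 15% discount value are all hypothetical placeholders for your own application, not part of the original test case.

```python
# A browser-level automation of test case #1 with Playwright's sync API.
# The URL, the CSS selectors (#add-to-cart, #coupon-code, ...) and the 15%
# discount are hypothetical placeholders - adapt them to your application.
from playwright.sync_api import sync_playwright

DISCOUNT_PERCENT = 15  # the X% the coupon is worth


def cart_total(page):
    # Parse a figure like "$900.00" from the hypothetical cart-total element.
    return float(page.text_content("#cart-total").strip().lstrip("$"))


with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Step 1: add one or more products to the cart
    page.goto("https://shop.example.com/product/123")
    page.click("#add-to-cart")

    # Step 2: go to the checkout page
    page.goto("https://shop.example.com/checkout")
    total_before = cart_total(page)

    # Step 3: add coupon code 'TEST123' and click 'Add coupon'
    page.fill("#coupon-code", "TEST123")
    page.click("#apply-coupon")

    # Expected results: confirmation message, visible discount, reduced total
    assert "coupon successfully applied" in page.text_content("#messages").lower()
    assert page.is_visible("#discount-row")
    expected = total_before * (1 - DISCOUNT_PERCENT / 100)
    assert abs(cart_total(page) - expected) < 0.01

    browser.close()
```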

Now, imagine that the test case is performed and confirmed to work during development. One of the testers makes a note saying that sometimes the page refresh takes 4-5 seconds when there are more than 10 products in the cart, but it’s not considered a major issue since it affects a very small number of users.

Now, consider an actual user, Lisa, as she uses the e-commerce site:

Lisa gets a 15% coupon code for an e-commerce site she’s used before and really likes. She decides to order a few things she needs and also asks her mother and sister if they need anything. While she’s shopping, she talks once with her mother and three times with her sister to double-check she gets the correct amounts, sizes and colors of all items.

After about 20 minutes, Lisa has 20 items worth $900 in her cart. She hits the checkout page, where she enters the discount code. The page seems to be doing ‘something’, but after 5 seconds with no visual feedback, Lisa decides that it’s most likely expected behaviour and hits ‘Pay now’ to proceed. She’s a little worried that she can’t see her discount on the screen, but assumes that it will be presented on the emailed receipt.

Five minutes after completing checkout, she receives the receipt and realizes that she didn’t get the discount. At this point, Lisa feels let down and decides to try to cancel the order. Maybe she will try again later, maybe not.

The story of Lisa’s real-world shopping experience makes a great base for a test scenario: a credible story of a user performing a complex task. It highlights to relevant stakeholders - like sales, marketing, management and customer support - that this is important functionality that really needs to work.

It is, of course, possible to write a few test cases that would capture the same performance issue, but by putting the steps into a realistic and credible context, the coupon code response time suddenly stands out as an important issue.

It suddenly becomes easier to spot, and it becomes apparent that even if it affects only a small fraction of all HTTP requests to the server, it will likely seriously affect a customer who wishes to make a rather large transaction - which, I would like to point out, was the main reason the marketing/sales team wanted to distribute the coupon code in the first place.

Finally, since scenarios are much easier to understand for people outside R&D, it’s easier to involve everyone with an interest in the project. In most organizations, stakeholders such as sales and marketing, customer support and management will find scenarios much easier to grasp than a long (and sometimes boring) list of small test cases.

The challenge is, of course, to find correct and credible stories that both motivate the important stakeholders to participate and, at the same time, cover as much of the application as possible.

Performance testing can benefit from a scenario approach in many ways. One of the most obvious benefits is that creating scenarios helps to highlight the important application flows that must perform well - just as the coupon code scenario above shows.

Test configurations can then be more focused when we know what the most important areas are. And since scenarios are stories that are easier to understand, it’s also easier for non-technical people to be part of the prioritization work, making sure that first things come first.

Another great benefit that can come specifically from performance testing multiple complex scenarios at the same time is that it can unveil dependencies.

Let’s say that one problem area in an e-commerce web application is slow internal search. While that’s a problem on its own, it’s not unlikely that it affects overall database performance. That, in turn, can affect more important functionality that also uses the database - like registration or checkout.
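One practical way to surface that kind of dependency is to run both scenarios against the system at the same time. Here’s a minimal load test sketch using Locust; the endpoints, payloads and wait times are hypothetical placeholders.

```python
# Two scenarios run side by side against the same system: heavy internal
# search traffic next to the business-critical checkout flow. If search load
# degrades the shared database, the checkout response times reported by
# Locust will rise too. Endpoints and payloads are hypothetical.
from locust import HttpUser, task, between


class SearchUser(HttpUser):
    """Visitors hammering the (known slow) internal search."""
    wait_time = between(1, 3)

    @task
    def search(self):
        self.client.get("/search?q=shoes", name="internal search")


class CheckoutUser(HttpUser):
    """Shoppers running the coupon-and-checkout scenario."""
    wait_time = between(5, 15)

    @task
    def checkout_with_coupon(self):
        self.client.post("/cart/add", json={"product_id": 123, "qty": 1})
        self.client.post("/checkout/coupon", json={"code": "TEST123"},
                         name="apply coupon")
        self.client.post("/checkout/pay", name="pay")
```

Running this (e.g. `locust -f scenarios.py --host https://shop.example.com`) and watching the ‘apply coupon’ timings climb as search load ramps up makes the dependency visible in a way a single-scenario test never would.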

When applying the concept of scenario testing to your performance testing efforts, here are a few things to keep in mind:

  1. Consider using scenarios in your performance testing. Use tools such as Google Analytics to analyze what paths users take through your site to help you come up with credible and critical scenarios.
  2. Prioritize possible scenarios by thinking about how valuable each one is. A user browsing your products is good; a user that checks out and pays is better. Make sure you cover the most critical scenarios first by ordering them according to how valuable they are to you.
  3. Consider using Continuous Integration tools such as Jenkins or TeamCity to automate performance scenario testing. An automated test that gives you pass/fail results based on response time is very easy to evaluate (see the pytest sketch after this list).
  4. When the number of scenarios grows, group different ones together based on what part of the system they test. Or group them based on complexity, making sure that all low-complexity tests pass before you run the high-complexity ones.
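To illustrate tips 3 and 4, here’s a minimal pytest sketch of scenario steps that pass or fail on response time, grouped by complexity with markers so a CI server can run the groups in order. The base URL, endpoints, marker names and the 2-second budget are assumptions, not recommendations.

```python
# Minimal pytest sketch: scenario steps that pass or fail on response time,
# grouped by complexity with markers so a CI server (Jenkins, TeamCity, ...)
# can run `pytest -m low_complexity` before `pytest -m high_complexity`.
# (Register the markers in pytest.ini to silence the unknown-mark warning.)
# The base URL, endpoints and the 2-second budget are hypothetical.
import time

import pytest
import requests

BASE_URL = "https://shop.example.com"
MAX_SECONDS = 2.0  # agreed response time budget for a pass


@pytest.mark.low_complexity
def test_product_page_response_time():
    start = time.perf_counter()
    response = requests.get(BASE_URL + "/product/123", timeout=10)
    elapsed = time.perf_counter() - start
    assert response.status_code == 200
    assert elapsed < MAX_SECONDS, f"product page took {elapsed:.1f}s"


@pytest.mark.high_complexity
def test_apply_coupon_response_time():
    # The step that burned Lisa: applying a coupon to a large cart.
    start = time.perf_counter()
    response = requests.post(BASE_URL + "/checkout/coupon",
                             json={"code": "TEST123"}, timeout=10)
    elapsed = time.perf_counter() - start
    assert response.status_code == 200
    assert elapsed < MAX_SECONDS, f"coupon apply took {elapsed:.1f}s"
```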