
Exploratory testing for Launchpad

The Launchpad team is introducing exploratory testing into our development process. While our extensive test suite gives us an assurance of the quality of our code, we use exploratory testing to gauge the quality of the human experience of a feature.

Objectives of Exploratory Testing

In using exploratory testing, we hope to learn about a feature’s:

When to request Exploratory Testing

If your feature has a LEP, it requires exploratory testing.

Requesting Exploratory Testing

In general, if there's a LEP that used the user research process, then it needs an exploratory testing session. As a reminder, this means that one of the following applies:

Initial discussion

You should speak to the Product team once user-visible changes are about to be available for testing on qastaging/staging, to ensure they can schedule the work you need.

Please include:

Process

As the tester, you want to put yourself in the place of the target user for that feature. Read the LEP and try to use the feature as described by the user stories in it. While exploring the feature, make notes of everything that catches your attention, no matter how silly it might seem. Note down how to reproduce issues as soon as possible.

It's good practice to time box the exploratory testing session. Time boxing helps you focus on exploring the feature and keeps you from sinking too much time into any one new thing. Start with 25 minutes, as suggested by the Pomodoro Technique, and add more pomodoros as you see fit.
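
Purely as an illustration of the time box (not part of the process itself), here is a minimal sketch of a session timer in Python; the 25-minute default simply mirrors the pomodoro length suggested above, and the messages are placeholders:

{{{#!python
import time

POMODORO_MINUTES = 25  # one pomodoro, as suggested above


def run_session(minutes=POMODORO_MINUTES):
    """Wait out one time box, then remind the tester to stop and review notes."""
    print("Exploratory testing time box started: %d minutes." % minutes)
    time.sleep(minutes * 60)
    print("Time box over: stop exploring, tidy up your notes, "
          "and decide whether another pomodoro is worthwhile.")


if __name__ == "__main__":
    run_session()
}}}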

Keep an eye out for UI inconsistencies. If something is done in two different ways, check whether either meets our guidelines. If something is difficult to understand, it might need better documentation or UI text.

Use this template as a starting point to write the report and scribble down your notes.

In the recommendations section of the exploratory testing report, consider patterns that would systematically improve the quality of the project, and recommend adopting those patterns or adding the necessary infrastructure. For example, a fix for bug 758976 would help with the specific translation page issue, but the same problem happens in all AJAX notifications. Also consider whether the bug lies in the feature itself or in the widgets used to build it, and how the developer came to make the mistake in the first place. Is there a way to make that mistake harder to make?

When writing bug reports, describe the expected behaviour, the observed behaviour (use and abuse screenshots!), how to reproduce the problem and, most importantly, why you think it’s a problem, as that might not be obvious to the developer. Tag all bugs found during the exploratory testing session with the ‘exploratory-testing’ tag and the feature tag. See the bug triage guidelines and set the importance accordingly.
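
If you have several bugs to tag, the Launchpad API can save some clicking. Below is a minimal sketch using launchpadlib, assuming you have API access; the application name and the ‘my-feature’ tag are hypothetical placeholders, and the bug number is simply the one cited above:

{{{#!python
from launchpadlib.launchpad import Launchpad

# Authenticate against production Launchpad; the application name is arbitrary.
launchpad = Launchpad.login_with('exploratory-testing-notes', 'production')

# Fetch a bug found during the session (758976 is just the bug cited above).
bug = launchpad.bugs[758976]

# Add the exploratory-testing tag and a hypothetical feature tag,
# keeping any tags the bug already carries.
tags = set(bug.tags)
tags.update(['exploratory-testing', 'my-feature'])
bug.tags = sorted(tags)
bug.lp_save()
}}}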

Schedule a call with the team lead and go through the list of issues found during the exploratory testing session. Call the feature ready for deployment or iterate again, as per the feature development checkpoint.

Supporting tools

Supporting Documentation
