Exploratory testing for Launchpad
The Launchpad team is introducing exploratory testing into our development process. While our extensive test suite gives us confidence in the quality of our code, we use exploratory testing to gauge the quality of the human experience of a feature.
Objectives of Exploratory Testing
In using exploratory testing, we hope to learn about a feature’s:
- ease of use and discoverability
- completeness
- quality of implementation
- suitability to the problem it is solving
- conformity to Launchpad principles and UI guidelines
When to request Exploratory Testing
If your feature has a LEP then it requires exploratory testing.
Requesting Exploratory Testing
In general, if there's a LEP that used the user research process, then it needs an exploratory testing session. As a reminder, this means that one of the following applies:
- you are adding a new feature
- you are reworking an existing feature
- you are extending or changing the workflows of an existing feature
- you have spent more than thirty minutes talking about the change without doing it.
Initial discussion
You should speak to the Product team once user-visible changes are about to be available for testing on qastaging/staging, to ensure they can schedule the work you need.
Please include:
- User stories in the LEP
- Information about required permissions to use the feature
Process
As the tester, you want to put yourself in the place of the target user for that feature. Read the LEP and try to use the feature as described by the user stories in it. While exploring the feature, make notes of everything that catches your attention, no matter how silly it might seem. Make notes on how to reproduce issues as soon as possible.
It's good practice to time box the exploratory testing session. Time boxing gives you the focus to explore the feature while limiting how much time you spend exploring something new. Start with 25 minutes, as suggested by the Pomodoro Technique, and add more pomodoros as you see fit.
Keep an eye out for UI inconsistencies. If something is done in two different ways, check whether either meets our guidelines. If something is difficult to understand, it might need better documentation or UI text.
Use this template as a starting point to write the report and scribble down your notes.
In the recommendations section of the exploratory testing report, consider patterns that would systematically improve the quality of the project, and recommend adopting those patterns or adding the supporting infrastructure. For example, a fix for bug 758976 would help with the specific translation page issue, but the same problem occurs in all AJAX notifications. Also consider whether the bug is in the feature itself or in the widgets used to build it, and how the developer came to make the mistake in the first place. Is there a way to make that mistake harder to make?
When writing bug reports, describe the expected behaviour, the observed behaviour (use and abuse screenshots!), how to reproduce the problem and, most importantly, why you think it is a problem, as this may not be obvious to the developer. Tag all bugs found during the exploratory testing session with the ‘exploratory-testing’ tag and the feature tag. See the bug triage guidelines and set the importance accordingly.
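After the session, it can help to pull the tagged bugs into one list for the report and the review call. Here is a minimal sketch using launchpadlib; the application name and the feature tag ('my-feature') are placeholders, and the project is assumed to be 'launchpad' (adjust to suit):

```python
# Sketch: list bugs carrying both the 'exploratory-testing' tag and the
# feature tag, so they can be pasted into the report or walked through
# on the review call.
from launchpadlib.launchpad import Launchpad

# 'exploratory-testing-report' is just an application name for the login.
lp = Launchpad.login_with('exploratory-testing-report', 'production')
project = lp.projects['launchpad']

# Bugs that carry both tags ('All' requires every tag to match).
tasks = project.searchTasks(
    tags=['exploratory-testing', 'my-feature'], tags_combinator='All')

for task in tasks:
    bug = task.bug
    print(f'Bug #{bug.id} [{task.importance}/{task.status}]: {bug.title}')
```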
Schedule a call with the team lead and go through the list of issues found during the exploratory testing session. Call the feature ready for deployment, or iterate again as per the feature development checkpoint.
Supporting tools
Having more than one person testing at the same time and talking to each other seems to work well. Consider running the session using Mumble, and record the call.
Record the exploratory testing session and edit it afterwards for use in bug reports or to show how to reproduce bugs, or at least use it as a substitute for memory.
Supporting Documentation
James Bach's exploratory testing article
Martin's blog post about Rusty Russell's interface design talk.