Are you guerilla testing or dogfooding? A guide to software testing for mHealth services

May 8, 2018 | mHealth | Global | Tobias Wacker

This entry is part of a series of blog posts. In a previous entry, we explored the fundamentals of effective product management processes, including team composition and iterative development, and discussed how mHealth services can adopt similar processes.

Software testing and quality assurance are important components of developing digital services. Large technology companies employ specialized testing engineers, and many have dedicated testing departments. One of the most popular modern software development processes, so-called test-driven development, is based on the premise of building tests before building the actual software. There is even an international competition for software testers, the TestingCup. Yet, in our experience, few mHealth services dedicate much thought to testing their services on a regular basis. Over the years, we have seen many live mHealth services that suffer from bugs and usability challenges.

There are several reasons for this, most notably a lack of resources and of experience with rigorous testing protocols. Some VAS providers test their services using scripts. Here, an employee completes several scripted tasks, and the provider assesses whether each task could be completed successfully. In our experience, this approach does not sufficiently test mHealth services. The tasks tend to be too specific to catch most bugs, the employee is already an expert in using the service, and the tests often occur under optimal conditions.

In this blog post, we discuss two approaches to software testing, guerilla testing and dogfooding, that mHealth services can implement with few resources and little or no prior experience. Furthermore, both approaches focus on testing services under real-life conditions.

Guerilla testing forgoes expensive testing labs and lengthy participant recruitment. Instead, the researcher approaches strangers and asks them to use the service. The researcher observes how the stranger uses the service, takes notes of struggles and unexpected behaviors, and asks a few relevant questions at the end. Afterwards, the researcher reports back to the team, which discusses the findings and makes changes to the service according to the findings. Importantly, the researcher can be anyone working on the service.

Some questions to consider when planning guerilla testing:
  • What should we test? In most cases, focus on a specific element of the service, such as registration, updating a user profile, payment etc.
  • Do we need a discussion guide? Not really. Simply ask participants to use the element of the service that you want to test. If they struggle, ask why they struggled. If they do something unexpected, ask why.
  • How many participants do we need? Testing with five participants per round will usually suffice. It is more effective to test with fewer participants, act on the learnings, and then run another round.
  • Who should we test with? People who represent your intended target users as closely as possible. For example, an mHealth service that focuses on pregnancy should be tested with pregnant women.
  • Where should we conduct the test? Wherever you can meet your intended target users. For example, you can meet pregnant women at clinics and at hair salons to conduct guerilla tests.
  • What device should we test on? Ideally, the participant’s own device. If that is not possible, a company device will do. A service that targets mostly basic phone users should be tested on a basic phone. If the test requires airtime usage, always reimburse the participant for the airtime used – and add a little extra as a thank you.
  • When should we conduct guerilla tests? Regularly, to ensure that the service is performing as it should. Always conduct guerilla tests if data shows unexpected usage patterns. Always conduct tests after changes were made to the service.

UX Booth has a great guide to guerilla testing here. We especially recommend the section on “Employing the technique”. Raghav Haran shares a fun example of guerilla testing here.

The second approach is even easier. Dogfooding, short for “eating your own dog food,” is a common slang term for the practice of all team members using their own service. While this might sound obvious, few mHealth services implement this practice. The rules are simple: Everyone who works on a given service must use the service on a regular basis, including managers. If a team member encounters a bug, usability issue or any other problem, the team member reports the issue so that it can get fixed.

Dogfooding has two primary benefits:
  • If there is a bug in the system, someone on the team will likely encounter the bug and squash it quickly. It is good practice for some team members to dogfood beta versions, so that bugs can be discovered and squashed before the final version of a service is released to the public.
  • It can be easy to dismiss usability issues as minor inconveniences. However, when the team uses its own service regularly, these minor inconveniences quickly become major annoyances that get fixed. Hence, dogfooding often improves the usability of a service.
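The reporting step need not be heavyweight. As a minimal sketch of what a dogfooding reporting channel could look like, a team might append issue reports to a shared CSV log; the function name, file name, and fields below are all hypothetical, not part of any specific tool:

```python
import csv
from datetime import date
from pathlib import Path

def report_issue(log_path, reporter, area, description):
    """Append a dogfooding issue report to a shared CSV log.

    All field names here are illustrative; teams should adapt them
    to whatever their bug-fixing workflow needs.
    """
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            # Write the header row the first time the log is created.
            writer.writerow(["date", "reporter", "area", "description"])
        writer.writerow([date.today().isoformat(), reporter, area, description])

# Example: a team member logs a bug they hit while using the service.
report_issue("issues.csv", "amina", "registration", "OTP SMS arrived twice")
```

Even a simple shared log like this makes issues visible to the whole team, which is the point of dogfooding: problems get recorded the moment someone on the team runs into them.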

This blog post provides a quick overview of two testing methods that mHealth services with few resources can deploy. There are many online resources and books available that allow mHealth services to further explore the subject and develop a testing process that makes sense for them:
  • Erika Hall’s book Just Enough Research provides a practical guide to developing everyday testing capabilities in an organization that does not have the resources to hire dedicated researchers.
  • The podcast Mixed Methods shares insightful interviews with user researchers working for some of the most successful technology companies in the world:  https://www.mixed-methods.org/episodes/
  • Medium offers a wealth of useful articles around the topic of software and usability testing: https://medium.com/tag/usability-testing

The GSMA mHealth programme in partnership with frog developed the mHealth design toolkit – a collection of ten principles to launch, develop and scale mobile health services.


This project was funded with UK aid from the British people.
