Maximizing the effectiveness of your software requires several elements to come together to deliver the best possible UX. While insights from platform analytics are incredibly useful, nothing is quite as impactful as direct user feedback gathered through effective user testing. When leveraged correctly, your users’ sentiments and desires become tools for refining your product around their real needs. As part of a successful iterative design process, it’s important to test your product before each release.
To get the most out of user testing, design labs rely on a structured methodology to uncover valuable feedback from test groups. In what follows, we’ll start by touching on the importance of user testing, provide a general overview of how to conduct it, and then cover some finer points of effective testing.
While user feedback is important throughout an app’s lifecycle, it’s perhaps most important when developing your minimum viable product (MVP). The first step is to deliver a product with just enough functionality to attract and keep users’ interest as it’s tested against any hypotheses you’ve formed.
Like any product or idea, apps are subject to the same rules of the scientific method most of us were first exposed to in grade school. Nike applies this to its shoes – in 2012, Matthew Walzer, who has cerebral palsy, wrote to the company about making a shoe that would both offer the ankle support he needed and be something he could put on by himself. Over the following years, Nike sent Matthew several prototypes, and he provided feedback until the company perfected the design that’s now part of the Nike GO FlyEase line, which caters to individuals with mobility issues.
In Nike’s case, such a request doesn’t seem like it would be that complex to fulfill, especially for a business that has made shoes since the 1970s. Yet it took almost nine years to solidify the design of this hands-free shoe. Without walking in a disabled person’s shoes – quite literally, in this case – the company could only build on assumptions. Building on conjecture is dangerous because even though your product may make perfect sense in your mind, it might not resonate with the masses.
A good example of this is the Quibi failure: its founders poured nearly two billion dollars into the streaming platform, only for it to fold shortly after launch. Quibi’s founders figured that people enjoy streaming platforms like Netflix as well as micro-content like the once-popular Vine, so why not put the two together? On paper, it still sounds like a great idea, but as we can see, it fell far short of the mark – largely because they went all-in on their idea rather than starting with an MVP.
Mark Rober, a former NASA and Apple engineer, often details the prototype phase for the things he builds on his YouTube channel, like the extremely satisfying Jello pool experiment. Feedback from your prototype will tell you a lot about how your final product is likely to perform, and it’s crucial to use this knowledge to ensure the solution you provide actually matches your theory of how it will be used. Skip to the end of the Jello pool video and think about what a disaster it would be if the ultimate goal of the experiment had been to sell diving toys – nothing sinks!
Your first time testing your product also happens to be one of the most important discovery phases. During a design sprint, we produce a prototype that is tested with a small group of users who generate all kinds of direct and indirect feedback throughout the session. By the end of the event, businesses have a solid understanding of how to build an optimal MVP and what results to expect.
User testing, whether you’re running a formal design sprint or not, should transpire in roughly the same manner. From a bird’s-eye view, a good testing strategy follows the same broad steps: build a prototype, recruit testers who match your target personas, observe them completing realistic tasks, gather their feedback one-on-one and as a group, and then prioritize what you learn before the next iteration.
It’s important to understand that not everything needs to be perfect for a first (or next) iteration – only the most pressing matters do, and some of those are found by reading between the lines. For example, while a tester might report overall satisfaction with a certain process, smaller details like their facial expressions and body language while engaging with a feature, or the time it takes them to complete an action, might reveal opportunities for improvement.
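Quantifying “the time it takes to complete an action” doesn’t require heavy tooling. Below is a minimal sketch of how a moderator or a lightweight in-app hook might log task timings and observation notes during a session – the types and method names are illustrative, not tied to any particular analytics library.

```typescript
// Minimal sketch of capturing task timings and moderator observations
// during a test session. Names are illustrative, not from any library.

interface TaskObservation {
  taskId: string;            // e.g. "checkout-flow"
  startedAt: number;         // epoch milliseconds
  completedAt?: number;      // undefined if the tester gave up
  notes: string[];           // hesitations, facial expressions, asides
}

class SessionLog {
  private observations = new Map<string, TaskObservation>();

  startTask(taskId: string): void {
    this.observations.set(taskId, { taskId, startedAt: Date.now(), notes: [] });
  }

  completeTask(taskId: string): void {
    const obs = this.observations.get(taskId);
    if (obs) obs.completedAt = Date.now();
  }

  addNote(taskId: string, note: string): void {
    this.observations.get(taskId)?.notes.push(note);
  }

  // Tasks that ran past a threshold are flagged for review, even when
  // the tester reported overall satisfaction.
  slowTasks(thresholdMs: number): TaskObservation[] {
    return [...this.observations.values()].filter(
      (o) => o.completedAt !== undefined && o.completedAt - o.startedAt > thresholdMs
    );
  }
}
```

Flagging tasks that run past a threshold gives you a concrete list of moments to revisit in the session recordings, even when testers say everything went fine.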
Now that we’ve covered the gist of how to test a product, we want to look at some of the finer points of the process. By leveraging each tip below, you’ll be equipped to get the most out of a testing session.
How a test user is feeling at certain points while using an app is a good indicator of how someone might behave in the real world. Let’s say you offer luxury furniture, so you anticipate some apprehension around checkout time – for some, it might be the cost, while others might worry about delivery and setup. These are the kinds of things you should plot out in a customer journey map (CJM).
You also partner with a financing company but choose to deemphasize this payment option since you expect the majority of your customer base to be more affluent. You initially assume target customers won’t take advantage of such a program, figuring it’s just a nice option for those who would otherwise struggle to pay by credit card or upfront. As it turns out, when higher-income earners are aware of this option, they tend to purchase additional items. As such, you decide to increase the visibility of this feature to improve the average transaction value (ATV).
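ATV itself is simple arithmetic – total revenue divided by the number of orders – and comparing it across cohorts is a quick way to validate a change like this. The sketch below uses invented order data and field names purely for illustration.

```typescript
// Illustrative sketch: comparing average transaction value (ATV)
// between sessions where the financing option was visible and those
// where it was not. The data shapes and numbers are hypothetical.

interface Order {
  total: number;              // order value in dollars
  sawFinancingOption: boolean;
}

function averageTransactionValue(orders: Order[]): number {
  if (orders.length === 0) return 0;
  const revenue = orders.reduce((sum, o) => sum + o.total, 0);
  return revenue / orders.length;
}

const orders: Order[] = [
  { total: 2400, sawFinancingOption: true },
  { total: 3100, sawFinancingOption: true },
  { total: 1800, sawFinancingOption: false },
  { total: 1650, sawFinancingOption: false },
];

const withFinancing = averageTransactionValue(orders.filter((o) => o.sawFinancingOption));
const withoutFinancing = averageTransactionValue(orders.filter((o) => !o.sawFinancingOption));
console.log({ withFinancing, withoutFinancing }); // { withFinancing: 2750, withoutFinancing: 1725 }
```

In a real product you’d pull these figures from your analytics or order data rather than hard-coding them.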
In the above example, it can be difficult to determine exactly how a test user would behave unless they were spending their own money. To understand how they’re thinking, make sure to ask them to discuss their thoughts as they’re using the product. When testing features like checking out, use open-ended questions to get them to describe how they feel without explicitly asking them how much they have in their checking or savings account.
By reviewing their verbal expressions against whatever process they were engaged with at the time, you’ll gain deeper insights into their experience. Even small asides like “that’s neat” or “what the hell” can provide valuable insight into certain elements of your product.
Matching your test group as closely as possible to your target personas is key. Going back to the Nike example, the feedback would have been much different had the company relied on individuals without mobility issues. The same holds for the product we developed for Sugarmate: feedback from people without diabetes or outside the medical profession wouldn’t have been nearly as useful. Requiring test users to engage with a product under the pretense of some condition with which they have no direct experience only adds another layer of assumption.
When selecting test users, limit your selection to no more than five people. Usability research suggests that’s the sweet spot for collecting sufficient feedback – any more tends to produce mostly redundant results and thus minimal additional value.
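The “five users” guidance traces back to research commonly attributed to Jakob Nielsen and Tom Landauer, who modeled the share of usability problems uncovered by n testers as roughly 1 - (1 - L)^n, where L – the proportion of issues a single tester surfaces – averages around 31%. A quick sketch of those diminishing returns:

```typescript
// Diminishing returns of adding testers, using the commonly cited
// model: problemsFound(n) ≈ 1 - (1 - L)^n, with L ≈ 0.31 per tester.
// Treat L as a rough average, not a constant for every product.

const L = 0.31;

function shareOfProblemsFound(testers: number): number {
  return 1 - Math.pow(1 - L, testers);
}

for (let n = 1; n <= 8; n++) {
  console.log(`${n} tester(s): ~${Math.round(shareOfProblemsFound(n) * 100)}% of issues surfaced`);
}
// The curve flattens around five testers (~84%), which is why extra
// participants mostly replay what you have already heard.
```

By the fifth tester you’ve typically surfaced around 84% of the issues, and the budget for additional participants is usually better spent on another round of testing after you’ve iterated.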
Further, avoid professional testers, or at least refrain from building your entire test group from such individuals. Your ideal group of users should be remarkably average, since the idea is to get feedback from those who will naturally engage with the product. The greatest strength of veteran testers is also their biggest weakness – while they often produce the most detailed reports, the stereotype that many are too mechanical about the process often holds true.
Understanding usability requires seeing how a user approaches an assignment without specific instruction. An individual will inherently approach the process using intuition derived from their knowledge of how other apps function, along with any prompts you build into the product.
Should a user execute a task quickly without going off on any tangents, the overall design likely needs minimal revision. However, when a user strays from how you expected them to engage with your product, it’s a good indicator that there’s room for improvement.
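One lightweight way to make “strays from how you expected” concrete is to compare the screen sequence the design anticipated against what the tester actually did. The sketch below uses made-up screen names and a hypothetical navigation log:

```typescript
// Hypothetical sketch: flag where a tester's actual navigation path
// diverges from the path the design assumed. Screen names are invented.

const expectedPath = ["home", "product", "cart", "checkout", "confirmation"];
const observedPath = ["home", "product", "search", "product", "cart", "checkout", "confirmation"];

function firstDeviation(expected: string[], observed: string[]): number | null {
  const limit = Math.min(expected.length, observed.length);
  for (let i = 0; i < limit; i++) {
    if (expected[i] !== observed[i]) return i;
  }
  return expected.length === observed.length ? null : limit;
}

const step = firstDeviation(expectedPath, observedPath);
if (step !== null) {
  console.log(`Tester left the expected flow at step ${step}: saw "${observedPath[step]}" instead of "${expectedPath[step]}"`);
  // Here: step 2, "search" instead of "cart" – a hint that the path to
  // the cart was not as obvious as the design assumed.
}
```

A single deviation isn’t damning on its own, but the same detour showing up across several testers usually is.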
Some of the feedback you collect will be passive, such as recordings of a user interacting with your product or analytics from their session, but direct feedback needs to come through other channels. It’s vital to first hear unadulterated feedback in a one-on-one setting, as some testers might hold back or change their response after listening to others in a group. Next, gather feedback in a group environment, where conversation can spur discussions that lead to insights that might otherwise never come up.
Test users will reveal a substantial amount of information about your product, so staying organized is paramount to ensuring none of it goes to waste. It can be exciting to learn about new opportunities and validate certain assumptions, but this information can stifle progress if it’s not approached methodically.
Your most pressing matters should be prioritized so that designers and developers can tackle the most severe issues first before moving on to other fixes, redesigns, or feature rollouts. If this is a user testing phase just before the release of an MVP, resist the inclination to work on anything but your core solution. Work quickly to get the product to market, where you can build an audience and move on to your next phase of testing in the real world.
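A simple way to enforce that discipline is to score each finding and sort the backlog before anyone starts building. The fields and severity scale in the sketch below are assumptions for illustration, not a prescribed format:

```typescript
// Illustrative sketch: ranking findings so the most severe issues are
// addressed first and nice-to-haves don't delay the MVP. Fields and
// the severity scale are assumptions, not a prescribed format.

type Severity = "blocker" | "major" | "minor" | "enhancement";

interface Finding {
  summary: string;
  severity: Severity;
  affectedTesters: number;   // how many of the test group hit it
  coreToMvp: boolean;        // does it touch the core solution?
}

const severityRank: Record<Severity, number> = {
  blocker: 0,
  major: 1,
  minor: 2,
  enhancement: 3,
};

function prioritize(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) =>
      Number(b.coreToMvp) - Number(a.coreToMvp) ||      // core solution first
      severityRank[a.severity] - severityRank[b.severity] ||
      b.affectedTesters - a.affectedTesters
  );
}

const backlog: Finding[] = [
  { summary: "Testers asked for a dark mode", severity: "enhancement", affectedTesters: 2, coreToMvp: false },
  { summary: "Checkout button unresponsive on older devices", severity: "blocker", affectedTesters: 3, coreToMvp: true },
];
console.log(prioritize(backlog).map((f) => f.summary));
```

Sorting on whether an item touches the core solution first keeps nice-to-haves from crowding out the work that actually gets the product to market.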
Our focus is aligning everything from usability to security to ensure our apps offer a great UX and optimal functionality. We also celebrate “smart failures” because we know we’re not always right the first time – feedback gained from dedicated testing allows us to reevaluate our process by challenging our assumptions. Through listening and empathy, we adapt what we learn to ultimately build the best possible product. To learn more about how we approach the development of cutting-edge software, get in touch with us.