R&Y Blog

MVP Sprint: Day Three

This is the fourth of five articles on our process of planning an MVP Product Launch. Make sure to start your reading with the MVP Design Sprint Overview first.

We ended day two by creating a storyboard for our concepts, including detailed descriptions of what each screen would look like. We also took the user flow and sketched out the screens we wanted to test.

In this phase, we will build the prototype, test it with users, and analyze the results.

Prepare

By now, you know your user flow and have gathered what you need to create the prototype: branding assets, content, copy, etc.

The next thing we do, even before building the prototype, is recruit test users. You want to identify, recruit, prep, and schedule testers as early in the process as possible so you can do what you set out to do: get feedback.

Your testers should experience the problem that your app is seeking to solve, but they should be uninitiated and unaware of the design concept you’re proposing. We want unbiased, impartial feedback from our target users.

As for numbers, five is the ideal number of test users per round. This article brilliantly explains that while it takes roughly fifteen users to uncover nearly all usability problems, an iterative design with multiple rounds of testing, five users in each, works better. The premise is that, given likely budget constraints, you reach a point of diminishing returns as you add more test users to a single round.
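To make the diminishing returns concrete, here is a quick back-of-the-envelope sketch in Python. It assumes the problem-discovery model commonly cited in usability research, where each tester surfaces roughly 31% of a product’s usability problems; the exact rate varies by product, so treat the numbers as illustrative:

```python
# Back-of-the-envelope: share of usability problems found vs. number of testers.
# Assumes the commonly cited discovery model found(n) = 1 - (1 - L)^n,
# with L ~ 0.31 (the chance that one tester surfaces a given problem).

L = 0.31  # assumed per-tester discovery rate; varies by product

for n in range(1, 16):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} testers -> ~{found:.0%} of problems found")
```

By five testers you have surfaced roughly 85% of the problems; testers six through fifteen mostly rediscover the same issues, which is why iterating in groups of five beats one large fifteen-user test.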

The final step is to schedule these test users’ sessions. If we start our design sprint on a Monday, we schedule the tests toward the end of the week, so we have a few days to prep, design, and build.

Prototype

In this step, UX designers take the storyboard and create a design in their tool of choice (e.g., Figma). We recommend using real copy, brand colors, content, etc. (no Lorem Ipsum).

You want the prototype to be as realistic as possible. It doesn’t need to be high-fidelity and fully branded, but it should be close (unless you’re specifically testing the branding).

The goal is to rapidly create something that demonstrates what you’re trying to test. That said, it needs to be a working, functional design. Users should be able to click on it and progress down the “happy path” that you’ve set for them.

Test

By now, you’ve built out an interactive prototype that’s ready for testing. Before you release the prototype to your testers, you’ll need to do some basic housekeeping:

  • Since we run our tests remotely, we need testers to have strong Wi-Fi, join from an actual computer, and be in a quiet place
  • Make sure their security settings allow them to share their screens
  • Ask their permission to record the session
  • Share the link to the interactive prototype and ask your tester to share their screen

To make the testing more productive, we assign specific tasks based on the objective we want to accomplish. The user test flow governs how testers interact with the prototype.

When you share it with your test users, you want to emphasize that you are testing the app, not them. To that end, we encourage them to think out loud and ask them to describe their experience without leading them to a specific answer. It can be helpful to let them know that any thought is useful: if they love the colors, tell us; if they hate a button, let us know; if they can’t find something, whatever it is, share it. We assure them that they won’t hurt our feelings by pointing out something they don’t like. It’s common to say, “If you hate it, tell me.”

Our tasks are based on a sprint question, for example: “Can we create a simple ‘Pause Subscription’ feature?” The task would then be for testers to pause their subscription, without any guidance from us. Users may go to the Profile link instead of the Settings link; you’ll want to take note of this. If everyone goes to Profile to pause their subscription, that’s a strong indicator that you should place the button there.
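As a hypothetical illustration of what those observations look like once tallied, here is a small Python sketch; the tester IDs and screen names are made up for the example:

```python
# Hypothetical sketch: tally the first place each tester navigated
# when asked to pause their subscription. Names are made-up examples.
from collections import Counter

first_clicks = {
    "tester_1": "Profile",
    "tester_2": "Profile",
    "tester_3": "Settings",
    "tester_4": "Profile",
    "tester_5": "Profile",
}

tally = Counter(first_clicks.values())
print(tally.most_common())  # [('Profile', 4), ('Settings', 1)]
```

If four out of five testers head to Profile, that is the kind of signal that tells you where the button belongs.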

Testing takes only a few hours and reveals insights we can analyze and incorporate into our assessment.

Analyze

While users work through the test and complete the tasks, we observe and document feedback. We also note where they went to click to perform each task; if they had to click around, we take note of that. We take notes across different users and identify commonalities. In an ideal world, one person would lead the test and another would take notes, but that’s why we record the sessions.

Very similar to how we organize “How Might We” statements (explained in our article MVP Design Sprint: Day One), we use sticky notes to document and arrange the feedback into categories, e.g. “User couldn’t find the ‘Pause Subscription’ button.” If multiple notes say the same thing, we can see the commonalities between users and answer the questions we had for the sprint.
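For a larger batch of notes, the same sticky-note grouping can be sketched in code. This is a hypothetical illustration, with made-up note text, of counting how many distinct testers raised each theme:

```python
# Hypothetical sketch: group session notes by theme and count how many
# distinct testers raised each one. The notes below are invented.
from collections import defaultdict

# (tester, theme) pairs transcribed from sticky notes
notes = [
    ("tester_1", "couldn't find 'Pause Subscription' button"),
    ("tester_2", "couldn't find 'Pause Subscription' button"),
    ("tester_2", "liked the confirmation screen"),
    ("tester_3", "expected pause option under Profile"),
    ("tester_4", "couldn't find 'Pause Subscription' button"),
    ("tester_5", "couldn't find 'Pause Subscription' button"),
]

testers_by_theme = defaultdict(set)
for tester, theme in notes:
    testers_by_theme[theme].add(tester)

# Sort themes by how many distinct testers mentioned them
for theme, testers in sorted(testers_by_theme.items(),
                             key=lambda kv: len(kv[1]), reverse=True):
    print(f"{len(testers)}/5 testers: {theme}")
```

Themes raised by four or five testers are the ones worth turning into recommendations; a one-off comment may just be personal taste.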

We pull out the major themes that show up and give a recommendation. “Four out of five users couldn’t find the pause subscription button. They said they would have gone to the Dashboard to perform that task. Our recommendation is to …..”

We want to analyze, report on, and make recommendations for all of the sprint questions that we set out to validate: “Here’s what worked, what didn’t work, what your users said about it, and here’s what we recommend as a next step.”

At this point, the traditional design sprint is complete, and we deliver an executive summary.

Most clients would then look for an engineering firm to develop the MVP we came up with. In our case, we recommend a shorter feature prioritization workshop followed by development planning.

Conclusion

Everything we’ve done so far has brought us to this point.

We designed and prototyped a concept that the team felt could solve the initial problem. We gave our target users something realistic and functional to interact with. We collected data, garnered feedback, and analyzed patterns. And we mapped out the next steps based on our testing. As a result, we have data that tells us what to build first, what to roll into the product later, and how to present it to users so that they want to use it.

The end deliverable is a design document that we can hand over to engineers so they can bring it to life. In our case, we begin bridging the gap between design and development, starting with a feature prioritization workshop. That’s in the next article, so keep an eye out.

Want help breaking these steps down? Sprints are always more fun together; let us know how we can help!

[This is a high-level overview of the steps that we take. For more information on the principles and exercises based on our process, refer to the book “Sprint” by Jake Knapp.]