Driving product decisions with user research interviews

Wednesday 1 Dec 2021 14.15

How do you spell hummus?

User research is a growing topic in product management, and every year Product Managers and Product Owners seem to dive deeper into the world of user research. The overlap between the two fields keeps growing.

It makes sense that it’s growing. Good user research can save your company money by ensuring you build only the solution that’s perfect for the customer. But there is a dark side: if you overdo it, user research can cost you more money than it saves, because it can be a lengthy and expensive process. Just because literally everything can be validated doesn’t mean that literally everything should be validated. Sometimes it’s better to just build it and see how customers respond.

So when do you skip the entree and go straight for the mains?

I can’t give you those answers, because like the spelling of the word hummus (houmus? hoummos? hummus? humus? hommus? homus?), it’s always different. What I can do is walk through an example of my own, elaborating on the decisions my team made along the way, and why we made them.

Scheduled payments

During my time at Europe’s biggest mid-term rental marketplace, HousingAnywhere, my team had been working on an embedded finance product which we named HousingAnywhere Payments.

HousingAnywhere Payments relieves property managers of the burden of collecting rental-related expenses from tenants, leaving those tasks to HousingAnywhere instead. It automates things like calculating the payment amounts, determining the due dates, pulling the money from the tenant when it’s due, notifying all parties if there’s a late payment, and more.
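To give a rough sense of what that automation involves, here’s a minimal sketch in TypeScript of how a payment schedule might be derived from a booking. It’s purely illustrative: the types, field names, and schedule-building function are my own simplification, not how HousingAnywhere actually models any of this.

    // Illustrative only: a simplified model of what a scheduled-payments
    // engine could track. None of these names come from HousingAnywhere.
    interface Booking {
      tenantId: string;
      monthlyRentCents: number;
      startDate: Date;    // first day of the tenancy
      months: number;     // length of the tenancy in months
      rentDueDay: number; // e.g. 1 = rent is due on the 1st of each month
    }

    interface ScheduledPayment {
      tenantId: string;
      amountCents: number;
      dueDate: Date;
      status: "scheduled" | "paid" | "late";
    }

    // Derive one scheduled rent payment per month of the tenancy.
    function buildSchedule(booking: Booking): ScheduledPayment[] {
      const payments: ScheduledPayment[] = [];
      for (let i = 0; i < booking.months; i++) {
        payments.push({
          tenantId: booking.tenantId,
          amountCents: booking.monthlyRentCents,
          dueDate: new Date(
            booking.startDate.getFullYear(),
            booking.startDate.getMonth() + i,
            booking.rentDueDay
          ),
          status: "scheduled",
        });
      }
      return payments;
    }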

We had been experiencing a lower-than-expected success rate for our payments, and we had a hypothesis that it was due to the tenant needing to be on-session to make payments.

Essentially, we didn’t have feature parity with a regular old bank account. Tenants couldn’t schedule their payments with us, but with a regular bank account it was possible to schedule standard bank transfers directly to landlords. There was plenty of evidence to suggest that the limitation of on-session payments was the issue, and we’ll both save time here if you just take my word for it; that evidence could be its own blog post.

Our assumption was that tenants want to schedule all their rental expense payments in one go, instead of visiting the platform for every due payment. Our hypothesis was that if tenants could schedule payments to happen off-session, the payment success rate would go up.
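To make the on-session/off-session distinction concrete, here’s a rough sketch of what an off-session collection job could look like. The payment provider interface and every name in it are hypothetical, assuming a generic processor that can charge a saved payment method without the tenant being in the app; this is not HousingAnywhere’s actual stack.

    // Hypothetical provider interface: the key capability is charging a
    // previously saved payment method while the tenant is not present.
    interface PaymentProvider {
      charge(opts: {
        tenantId: string;
        amountCents: number;
        savedPaymentMethodId: string;
        offSession: boolean; // true = no tenant interaction required
      }): Promise<{ succeeded: boolean }>;
    }

    interface DuePayment {
      tenantId: string;
      amountCents: number;
      savedPaymentMethodId: string;
    }

    // Imagined daily job: collect every payment that has come due,
    // off-session, and flag failures for follow-up (late-payment emails, etc.).
    async function collectDuePayments(provider: PaymentProvider, due: DuePayment[]) {
      for (const payment of due) {
        const result = await provider.charge({ ...payment, offSession: true });
        if (!result.succeeded) {
          // notify tenant and landlord, retry later, mark as late, ...
        }
      }
    }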

Based on our data and what the customer solutions team was telling us, we had a high level of confidence that this scheduling feature was valuable. Not long after, we identified that the project carried a medium level of technical complexity risk. In other words, we had a good feeling the problem was real, but it would be moderately hard to solve.

Another factor was that we didn’t know how to solve the problem, and we had too many conflicting ideas about it. Not only that, but we struggled to find similar solutions to benchmark against.

Because we needed more information on the details of the problem, and because of the medium technical risk, a full user research process made sense.

In summary, we had:

  • High confidence the problem statement was true: we were pretty sure it was valuable
  • Low confidence in how the solution should be implemented: we had no viable examples to benchmark against, from competitors or otherwise
  • Medium technical risk: it would be relatively costly to build, so we should be sure of the feature before beginning

If we had low technical complexity, and medium or high confidence in how the solution should be implemented, we probably would have just validated our solution with the customer solutions team and then gone ahead and built it. But given the mix of confidence and risk, it made sense to us to go through a full user research process.

So during this process, what did we need to learn from customers?

  1. Were tenants already scheduling rental payments with other tooling, and what were their motivations for doing so?
  2. Did they also want to pre-authorise payments at the category, payee, or individual level?
  3. Did they also want to pre-authorise payments with different payment methods?
  4. Did they also want to pay some payments on-session, alongside the automatic ones?
  5. Did they also want to view previously paid payments?

Preparing the interview assets

The first step was to define all of the assumptions, and we did this by leaning on some of the other specifications we had created. We collated all the possible user stories for the feature, categorised them, then went down the list and tried to identify which ones we were the least sure about, highlighting them in red.

Considering we had high confidence that our problem was real, we decided to get straight into the design process. If we weren’t so sure, we would have done a preliminary round of interviews to be totally sure of the problem statement first.

So we went ahead and created the prototypes with Figma.

During the design process we ran into more questions; those questions, plus the already identified problems, were added to the user stories list. These user stories would then drive our interview script!

Finally, we created a structured way to collect notes during the interviews themselves. We made sure it aligned with the interview script and followed the same order, so it was easy to collect notes, and even easier to read them back.

Finding candidates

Then we decided on the criteria for the interview candidates, and began the hunt! We put together an email template to be re-used for all candidates.

If you struggle to find people, like we did, you may want to offer vouchers to the people who do the interview and help out.

Hosting the interviews

It can be difficult to run good interviews, so before we did any real interviews we first did a test interview with a staff member. The test interview gave us a bit of practice and helped us refine our process a little too.

We found it useful to have one person leading the interview, and up to two other staff members taking notes throughout. Between candidates, everyone should get a turn at taking notes and at leading the interview. In fact, I believe it’s vital that for at least one of these interviews, the note-taker’s role is taken by the designer who will be working on the feature: it makes it real for them.

First, we wanted to learn about the candidate, getting their background as it relates to our company’s mission, with questions like “What’s your rental history?”, “What motivated you to move from house A to house B?”, and so on.

Then, we wanted to validate the problem statement we were so confident about. It pays to be broad here, so we asked something like “How do you pay your rent?” instead of “How do you schedule rent payments?”. Being broad leaves more room for discovery, and if they mention your problem without you seeding it, it only further validates your problem statement.

If we did need to dig, though, we would try to speak only in questions and lead the candidate toward what we were looking for. For example, “How did you ensure your rent payments weren’t late?” gives us room to discover other solutions; maybe some tenants just want a reminder to pay, without automatic payments. We may never have found this out if we made the questions too specific.

After learning more about the candidate, and validating the problem statement, it was time for us to discover the details of the solution. We had them open the prototype on their computer while sharing their screen.

For each task, in the following order, we:

  1. Explained the limitations of the prototype for that specific screen
  2. Asked them to describe the screen they were on (before they interacted with it)
  3. If there were any new components, asked them to explain the purpose of each component
  4. Gave them a task tied to an example scenario, for example “You lost your credit card and need to add another payment method; how would you do this?”
  5. Gave them space to fail and kept our mouths shut (it’s harder than you think sometimes!)
  6. Asked them about the outcome after they took the action successfully, following up with what they expected to have happened in the background during their interaction (emails triggered, card details saved, etc.)
  7. Where applicable, asked them how they might undo the action, getting them to navigate to that specific area
  8. If they made any decisions along the way, asked why, for example “Why did you choose credit card as a payment method?”

In total we hosted 8 interviews, after which we looked over the notes and tried to find patterns.

The findings

We found variation in the types of tenants and how they managed their rental payments. For example, students preferred to pay manually, whereas young professionals preferred automation. This can be attributed to their differing financial situations; having less money usually means you’re more careful with it.

Tenants also expressed that scheduled payments are better suited to a long-term rental context. Scheduling only three monthly rent payments didn’t save enough effort to feel like there was a lot to be gained.

Our success screen was very unclear: we showed a timeline, which people didn’t understand at all.

Tenants wanted to schedule payments at the category level, with the focus mostly on rent. Other expenses like utilities were perceived as variable, and tenants weren’t comfortable pre-authorising unknown amounts of money.

Cheers,
Elliot...