Product Management

Expert Q&A: Identifying Assumptions in Product Development

By Michelle Gardner

Every product leader goes into a new project with untested, hidden assumptions. It’s important to identify and test these unconscious assumptions before you start validating (or invalidating) your product ideas—but how do you do this? Laura Klein, product leader and author of Build Better Products and Lean UX for Startups, recently answered this question and more in an on-demand webinar with Logi Analytics.


How do you define an assumption as it relates to product development?

Laura Klein: An assumption is something that we assume to be true. It is something that we take for granted. We make assumptions about a lot of things when we’re developing a product. We make assumptions about the problem we’re solving, the user we’re solving it for. We make assumptions that our solution is the best one. And some of those assumptions are perfectly fine. We might assume that everybody who is going to use our product has access to the Internet.

The important thing to understand is that assumptions aren’t all the same, and you deal with them very differently. I generally split them into three different types: problem, solution, and implementation. Is there a problem? Is my solution a good way to solve that problem? Can I actually implement that solution?

What’s an example of a problem assumption?

I want you to think back to the dark ages before Dropbox. What was the initial problem that they had to validate? They had to make sure that a large number of people had this very specific problem: that they had multiple devices where they wanted to access the same files. Now before you roll your eyes and say, "of course that was the problem," I want you to try to imagine a time or a place where that wouldn't be a problem.

Once upon a time people really only had one device (if they even had that), or they'd have their work computer and their home computer and they just did not share documents. The problem assumptions that Dropbox actually did need to go out and validate (because these were not obvious at that point) were that people had multiple devices, that they wanted to access the same files across those devices, and that this was hard for them to do at the time. All of those needed to be true to make sure that this particular problem existed.

What about an example of a solution assumption?

I’m thinking back to the beginning of Dropbox again. What was their solution risk? Once you validate that people do indeed have multiple devices, that they want to access the same files across all of those devices, what assumptions do you need to validate about how you’re going to solve that problem? The assumptions they had to validate were that people were willing to download something onto their computers, phones, tablets, etc. And then just trust that those files would be safe in an account in the cloud. It seems really obvious now that it’s done, but it was not originally.

How about an example of an implementation assumption?

In this case, Dropbox had a really interesting challenge. Namely, could they sync files across multiple devices and operating systems quickly and accurately enough to make customers happy? Unlike many products, where the implementation details might not be nearly this hard, this was a fairly non-trivial task, technically speaking. As with the other assumptions, if they hadn't been able to validate this, Dropbox would not have been nearly as successful.

The implementation assumptions that had to be validated were that the product could be built, and built in a way that was quick, accurate, and secure across multiple kinds of devices. And then there's one last assumption, the human one, which often gets ignored. Even if you can build the magical syncing software and you know people need it, can you get them to download the software and start using it? Can you get people to upgrade to the paid tier? Can you get people to adopt it? Can you get them to change their behavior?

How do you identify these assumptions?

Here’s a surprisingly easy way to identify assumptions. I want you to fill in this blank: “For my product to succeed, blank must happen.” Or, if you prefer more dire things: “If blank doesn’t happen, my product will fail.” Write down at least 10 assumptions about your product. Each assumption goes on a sticky note. Then you’re going to categorize your assumptions. Write a “P,” “S” or “I” on each of the sticky notes just to indicate what type of assumption it is. Make sure you’re not assuming the solution when you’re stating the problem.

After you've figured out what all of your assumptions are, I want you to divide them into a "Will kill" pile and an "Other" pile. "Will kill" means that if this assumption turns out to be false, it will literally kill your product. Next, I want you to take all of the "Will kill" items and rank them from most to least risky. You're going to have to make some hard decisions here. Here's the other trick. If you're resource-constrained, I want you to destroy the "Other" pile. If it will not kill your product, don't worry about it yet. It's not important yet. It will be, but it's not your riskiest assumption.
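For readers who think in code, the sticky-note exercise above can be sketched as a tiny triage script. This is a minimal illustration, not anything from the webinar itself: the class names, example assumptions, and risk scores are all invented for the sake of the sketch, and in practice the risk ranking is a judgment call, not a number.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    kind: str        # "P" (problem), "S" (solution), or "I" (implementation)
    will_kill: bool  # True if the product dies when this assumption is false
    risk: int        # higher = riskier; your own judgment call

def triage(stickies):
    """Keep only the 'Will kill' pile, ranked most to least risky."""
    will_kill = [a for a in stickies if a.will_kill]
    return sorted(will_kill, key=lambda a: a.risk, reverse=True)

# Hypothetical sticky notes, loosely inspired by the Dropbox examples.
stickies = [
    Assumption("People have multiple devices", "P", True, 9),
    Assumption("People will install a desktop client", "S", True, 7),
    Assumption("Files can sync quickly and accurately", "I", True, 8),
    Assumption("Users prefer our logo color", "S", False, 1),  # 'Other' pile
]

for a in triage(stickies):
    print(f"[{a.kind}] risk={a.risk}: {a.text}")
```

Running the sketch prints only the three "Will kill" assumptions, with the riskiest first, which mirrors the exercise: the "Other" pile is simply dropped until the killers are dealt with.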

What makes an assumption risky? And why does it matter?

The invalidation of one of these assumptions can kill your product or feature. That’s why it matters. If that is not a valid assumption, there is no use for this product. It is gone. Sometimes your riskiest assumption is going to be about the problem you’re solving. Sometimes it’s about your solution for it. Sometimes it’s about the actual implementation. I don’t know which one it is for you. That is something you need to answer for yourself. That is different for every single product, at every single different phase of its creation.

The goal here is to design some way to decrease the risk that you’re building on top of a shaky assumption. And that means testing just how shaky that assumption actually is.



Originally published November 8, 2019

About the Author

Michelle Gardner is the Content Marketing Manager at Logi Analytics. She has over a decade of experience writing and editing content, with a specialty in software and technology.