Data up and data down

By product management guru and Boye & Co group leader Otto de Graaf

Product managers have been called ‘the CEO of the product’ by Ben Horowitz. Whether you agree with that statement or not, product managers, like CEOs, make decisions that have a long-term impact. Even after you have moved on to another job, the decisions you made years ago are likely to still affect the product and its users today.

With that daunting thought in mind, no wonder product managers like to use whatever data they can get their hands on to substantiate their decisions!

In our most recent series of Product Manager peer group meetings in London and Zurich, we discussed how to gather and use bottom-up data like user metrics. The content in this post represents the collective thinking and brainpower of the people who attended these meetings!

Data galore!

As more and more applications are delivered from the cloud, it’s easier to get much more data from real usage of our products. Many tools are available to developers, data scientists and product managers to gather data and make sense of it.

Usage data can be invaluable for understanding where your product is great, and where users may be struggling to achieve the results they want. Good news, right? Well, yes and no…

Data governance

Actually getting valuable data requires discipline. This is the boring, tedious part of the job. Developers need to add code that records users’ actions, so that measuring becomes possible in the first place. And they need to do this for the user actions that matter to your product. Agreeing on what to measure needs to become standard practice with your development teams.

Additionally, you need to agree on how you measure. For example: multiple developers could log events with very similar names. To make sense out of the measurements you need to agree on a naming convention so you actually understand what you’re looking at when doing the analysis.
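One way to make such an agreement stick is to enforce it in code, so inconsistent names never reach your analytics store. The sketch below is a hypothetical Python helper, not a real analytics SDK; the `object.action` convention and every name in it (`track_event`, `checkout.completed`) are assumptions for illustration.

```python
# Hypothetical event-logging helper that enforces a shared naming
# convention, so "checkout.completed" and "CheckoutDone" can't coexist.
import re
import time
from typing import Optional

# Agreed convention: lowercase words, "object.action", e.g. "checkout.completed"
EVENT_NAME = re.compile(r"^[a-z]+(_[a-z]+)*\.[a-z]+(_[a-z]+)*$")

def track_event(name: str, properties: Optional[dict] = None) -> dict:
    """Reject names that break the convention, then build the event record."""
    if not EVENT_NAME.match(name):
        raise ValueError(f"Event name {name!r} does not follow the object.action convention")
    return {"event": name, "ts": time.time(), "properties": properties or {}}

track_event("checkout.completed", {"basket_value": 42.50})  # accepted
# track_event("CheckoutDone")  # rejected: raises ValueError
```

Pushing the check into a shared helper means the convention is applied at the source, rather than cleaned up later during analysis.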

The bigger your product, the more data it generates, and the greater the need for governance. A product manager of a very widely used productivity application once told me that while they enabled metrics everywhere in their app, they only switched them on in areas they were actively working on. Otherwise the amount of data would simply be too much to store or analyze.

While this may seem like a luxury problem, don’t underestimate the amount of time you will need for governance as well as transforming data from multiple systems into a format that is usable for different analytical purposes.

Say whaaat?

Having data is great. Interpreting it is a different ballgame though. How do you know that changes in metrics mean your users are now happier or less happy, or more or less successful? And how does your users’ success correlate to your own business success?

In some cases, this is easy. When your application handles some kind of transaction (like an ecommerce application), you can directly correlate user actions to your own business success. After all, a completed transaction means the customer found what they wanted and paid you for it. 

Most applications are non-transactional though, which makes business success harder to quantify. If users linger longer on a particular screen, does that mean they are engaged? Or do they not understand what they need to do?

In these scenarios you need to correlate user actions to some other measure of business success. For example, changes in user metrics can be related to an NPS score, a decrease in call-center issues, repeated usage of your product, or any other metric that measures your business’ success. A useful framework for this is the HEART framework developed by Google (Happiness, Engagement, Adoption, Retention, Task success).

Whatever metrics you choose, you need to know whether or not the changes you propose will positively impact your business’ results. While there will likely be a lag between making a product change and a shift in those business metrics, establishing a link between them is critical if you want to understand the impact of your decisions as a product manager. 

Error – data not found

With the Facebook-Cambridge Analytica data scandal still fresh in our minds, gathering data may not always be easy. In regulated industries, the gathering of data may be outright prohibited. Generic regulations like the GDPR influence what you are allowed to measure.

The characteristics of your product may influence what you can measure too. For example, customers of a cyber security product are less likely to share data than the users of a content management system. For on-premise products, or even physical products like industrial machinery, you may not be able to get data at all.

There are ways around some of these limitations. You could provide an add-on product that adds value to the customer, but also allows you to gather data. For example in B2C you could create an app to order coffee that is linked to your coffee machine. In B2B you could leverage the Internet of Things to provide an automated maintenance service to your customers based on sensor data.

When data sharing provides value to your customers, they will be much more likely to indulge you.

Qualitative or quantitative?

For software business models with high volume and relatively low value, like many consumer products, more data will be available. For enterprise B2B applications, which tend to be low volume and high value, less data will usually be available.

Whether you have little data to go by, or lots of it, it is good practice to combine quantitative analysis with qualitative research. ‘Old-fashioned’ input provided by UX research, customer panels, verbatim feedback, complaints, feature requests, bugs and so forth remain valuable as input and validation for quantitative data.

Lies, damn lies and statistics

Advances in machine learning and artificial intelligence enable product managers to get much more insight into large data sets, and to discover things that would otherwise have been impossible to detect. However, the data scientists who attended our sessions stressed that common business sense is still very much needed to interpret the results. One mentioned they used machine learning to generate a white-box algorithm rather than just relying on a black-box algorithm, so that discussions with the business would be easier.

As a bonus though: if your algorithm proves to be trustworthy, you can use it to test and predict the effectiveness of changes you are planning. You ‘simply’ feed the model simulations of what users would be doing, and judge the predicted outcomes by their effectiveness. While this may seem like rocket science to some, it shows the potential of data-driven product design.
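A toy sketch of that idea: a white-box scoring function (a weighted sum whose weights the business can inspect) is fed simulated sessions for the current flow and a proposed flow, and the predicted outcomes are compared. The weights, features, and sessions below are entirely made up; a real model would of course be fitted to your own data.

```python
# Hypothetical white-box model: hand-set weights stand in for a fitted model.
def conversion_score(session: dict) -> float:
    """Predicted conversion likelihood as an inspectable weighted sum."""
    weights = {"pages_viewed": 0.02, "search_used": 0.15, "errors_seen": -0.25}
    base = 0.10
    score = base + sum(weights[k] * session.get(k, 0) for k in weights)
    return max(0.0, min(1.0, score))  # clamp to a valid probability

# Simulated sessions: current flow vs. a proposed flow with fewer errors.
current  = {"pages_viewed": 6, "search_used": 1, "errors_seen": 1}
proposed = {"pages_viewed": 5, "search_used": 1, "errors_seen": 0}

print(conversion_score(current), conversion_score(proposed))
```

Because every weight is visible, the discussion with the business can focus on whether the model’s assumptions are sensible rather than on trusting an opaque prediction.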


In the end, using data and relying on it is very much part of an organization’s culture. Some organizations were born digital, and using data is an intrinsic part of how they work. For others, using data will need to be injected into their DNA as part of a larger and longer transformation process.

In our next post we’ll talk about gathering top-down input (strategy, macro trends, etc.) and about how you can bring top-down and bottom-up input together.