Geoffrey Charles

The Product Ownership Checklist

What does it mean to own a product, really?


Product Managers are often seen as responsible for building new products and features for customers. But the reality is that maintaining existing functionality is equally important, if not more so: you can’t build anything new or scale your company if your existing product isn’t working.


In fact, ensuring your product works may well be the bulk of your product strategy. Take Google Search, for example. From 1998 until now, it’s been largely the same product: a search bar. The challenge is ensuring that the product works correctly. And that is no small feat: the number of websites to index has skyrocketed from 2.4 million in 1998 to 1.6 billion today.


This post provides a checklist every great PM should use to ensure their product is working.


4 questions every PM should know the answers to

In order to ensure your product is working, you need to be able to answer the following questions:

  1. Understand: What is my product intended to do?

  2. Measure: How do I know if my product is doing what it’s intended to do?

  3. Assess: How well is my product doing what it is intended to do?

  4. Act: How well do I respond when my product is not doing what it is intended to do?


1. Understand: What is my product intended to do?


Without understanding what your product is intended to do, you cannot monitor whether it is working as intended, let alone improve it. Simple.


And yet many PMs do not understand their products as well as they should, usually because they inherited the product, never took the time to learn it, or delegated that understanding to others in the organization. Don’t make this mistake.


To fully understand the product, you need to:

  • Understand the customer experience. This means every acquisition channel, touch point, flow, state, lifecycle, and segment. To do so, dogfood your product and shadow customers. Hopefully this is already documented; if not, document it and leave it better than you found it.

  • Understand the employee experience. This means understanding what the support, engineering, operations, compliance, and QA teams are going through. To do so, spend time talking to and shadowing your colleagues.

  • Understand the systems. This means knowing which systems are responsible for each component, how they talk to each other, how they work, and what data they need. To do so, talk to an engineering lead and have her whiteboard it.

  • Understand 3rd party dependencies. This means any 3rd party software or data being used by your product. Engineering leads are a great resource here.

  • Understand the analytical model. This means knowing all the data generated by the system, how it is combined into different models, the definition of each field, and how together they paint a full picture of what is going on. Typically the data or analyst teams are the most knowledgeable.
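
For example, a data dictionary entry for a single analytics event might look like the minimal sketch below. The event name, fields, and model references are all hypothetical; your data or analyst team likely owns the real version.

```python
# Hypothetical data dictionary entry for one analytics event.
# Everything here (event name, fields, models) is illustrative.
SIGNUP_COMPLETED = {
    "event": "signup_completed",
    "fired_by": "web client, on successful account creation",
    "fields": {
        "user_id":   "stable identifier; joins to the users model",
        "channel":   "acquisition channel, e.g. 'paid_search' or 'referral'",
        "plan":      "plan selected at signup; feeds the revenue model",
        "timestamp": "UTC; the basis for cohort and lifecycle analyses",
    },
}
```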

2. Measure: How do I know if my product is doing what it’s intended to do?


Now that you understand what your product is intended to do, you need to be able to monitor whether it is actually doing it. There are several ways of doing this:

  • Metrics

  • Controls

  • Testing

  • Sessions

  • Feedback

Metrics

Metrics enable you to get a sense of how your product is being used. Typically, PMs are focused on defining, implementing and tracking business metrics. Business metrics can be split into two types:

  • Output: These are the actions users take on your site (e.g. # of clicks, # of people who balk, etc.)

  • Outcome: This is the impact those actions have on your business (e.g. revenue, leads, profitability, etc.)

Metrics can be implemented at various levels:

  • Analytics data → based on the events that fire as users use your product. Good tools: Mixpanel, Amplitude, Google Analytics

  • System data → based on the data stored in your system. Good tools: Looker, Chartio, Tableau

  • System logs → based on the logs your system generates. Good tools: Sumo Logic, Papertrail
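
As a concrete example of the first level, here is a minimal sketch of firing an analytics event with the Mixpanel Python SDK. The project token, user id, event name, and properties are all hypothetical.

```python
# Minimal sketch: firing a product analytics event with the Mixpanel
# Python SDK (pip install mixpanel). Token, ids, and names are hypothetical.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # hypothetical project token

# Track an output metric: the user completed checkout.
mp.track("user_123", "checkout_completed", {
    "cart_value_usd": 42.50,  # feeds outcome metrics like revenue
    "items": 3,
})
```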

Controls

Controls are checks that run against a system to ensure it is performing as intended. An alert gets triggered whenever a metric passes a specific threshold. That alert should have:

  • A distribution list to notify the right people

  • A description of the error

  • A runbook to follow to debug and resolve the error

Most metric tools have capabilities to implement controls, including Chartio, which recently came out with its ‘Alert’ feature.
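
To make the idea concrete, here is a minimal, tool-agnostic sketch of such a check. In practice this logic usually lives in your metric tool’s built-in alerting; the metric name, threshold, recipients, and runbook URL below are all assumptions.

```python
# Minimal, tool-agnostic sketch of a control: compare a metric to a
# threshold and alert with a distribution list, description, and runbook.
# get_metric and notify are injected so no specific tool is assumed;
# the metric name, threshold, emails, and URL are hypothetical.

THRESHOLD = 0.02  # hypothetical: alert if hourly signup conversion < 2%

def check_signup_conversion(get_metric, notify):
    value = get_metric("signup_conversion_rate_1h")
    if value < THRESHOLD:
        notify(
            recipients=["oncall-pm@example.com", "oncall-eng@example.com"],
            description=f"Signup conversion at {value:.1%}, below {THRESHOLD:.0%}",
            runbook="https://wiki.example.com/runbooks/signup-conversion",
        )
```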

Testing

Testing is critical to ensuring that your product is doing what it is intended to do. There are different types of tests, summarized below:

  • Tests that run before changes are deployed. There are many types of tests, such as unit tests, functional tests, integration tests, smoke tests, etc. At a high level, a PM should be aware of all the functional tests running on the product, since these tests describe the intended behavior. Behavior-Driven Development (BDD) tests are common here: “given X, when Y, then Z” (an example follows this list).

  • Tests that run periodically after changes are deployed. These tests are typically more manual in nature and require sampling to ensure that everything matches expectations. As issues are caught, additional tests are added pre-deployment to catch the issue upstream.
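
For example, here is what a minimal “given X, when Y, then Z” functional test might look like with pytest. The discount logic under test is a hypothetical stand-in for real product behavior.

```python
# Minimal "given / when / then" functional test, runnable with pytest.
# apply_discount is a hypothetical stand-in for real product behavior.

def apply_discount(subtotal: float, code: str) -> float:
    """10% off with the (hypothetical) SAVE10 code, otherwise unchanged."""
    return round(subtotal * 0.9, 2) if code == "SAVE10" else subtotal

def test_save10_reduces_total_by_ten_percent():
    # Given a cart with a $100.00 subtotal
    subtotal = 100.00
    # When the customer applies the SAVE10 code
    total = apply_discount(subtotal, "SAVE10")
    # Then the total drops to $90.00
    assert total == 90.00
```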

Sessions

Nothing beats first-hand experience of how your product is behaving. This can be accomplished by:

  • Dogfooding → be a super user of your own product to build empathy and understanding

  • Shadowing → to get a feel for how others are using your product, set aside time to shadow users. Ask for their permission to sit next to them as they use the software, and ask them questions along the way.

  • Recording → failing that, make sure you have access to recordings of users using your software. Tools such as FullStory can provide this.

Feedback

Of course, hearing it “straight from the horse’s mouth” is a great way to know if your product is working as intended. To do so, you need a strong user feedback culture. There are several ways of going about this:

  • Complaints program — every company should have a formal process where customer feedback is categorized and complaints are reported out.

  • CSAT or NPS scores — these can be either periodic or in-experience (e.g. “did you enjoy the call? leave us a rating”). A simple NPS calculation is sketched after this list.

  • Surveys — as necessary, use surveys to deepen your understanding.

  • Ratings (e.g. app store, review websites, in home rating system) — make sure you have a process to review these.

  • Focus groups — don’t hesitate to pull together customers to give you feedback on your product.
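
As a worked example of one of the signals above: NPS is the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 to 6). A minimal sketch, with made-up survey responses:

```python
# Standard NPS calculation: % promoters (9-10) minus % detractors (0-6),
# yielding a score from -100 to +100. The sample responses are made up.

def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 25.0 (4 promoters, 2 detractors, 8 responses)
```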

3. Assess: How well is my product doing what it is intended to do?

Once you have monitoring in place, you need a way to assess how your product is doing. In essence, this is the gap between what you intend the product to do and what your measurements show it is actually doing.

Typically assessment falls into 3 main buckets:

  • Is my product adding value to my business? → this is tracked by business metrics, etc.

  • Is my product being used as intended? → this is tracked by business metrics, recordings / shadowing, etc.

  • Is my product working as intended? → this is tracked by testing, user feedback, controls, etc.

A best practice here is to use a scorecard: on a periodic basis, collect this assessment and report it out to the rest of the product and business teams. I wrote another post on KPIs here.
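
A minimal sketch of what such a scorecard might contain, mapping each question above to a metric, a target, and the current value (all numbers hypothetical):

```python
# Minimal product health scorecard sketch; every metric, target, and
# actual value is hypothetical.
scorecard = [
    # (question, metric, target, actual, higher_is_better)
    ("Adding value to the business?", "weekly active users",   50_000, 47_200, True),
    ("Being used as intended?",       "onboarding completion", 0.80,   0.83,   True),
    ("Working as intended?",          "checkout error rate",   0.001,  0.004,  False),
]

for question, metric, target, actual, higher_is_better in scorecard:
    on_track = actual >= target if higher_is_better else actual <= target
    status = "OK" if on_track else "MISS"
    print(f"{status:<5} {question:<31} {metric}: {actual} (target {target})")
```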

4. Act: How well do I respond when my product is not doing what it is intended to do?

Issues will happen. The difference between a good PM and a bad PM is the ability to identify and respond.

  • Awareness — am I even aware that there is an issue? Am I made aware within a reasonable timeframe, via a reasonable channel? For example, good awareness means being alerted within minutes via an automated alert; bad awareness means being made aware days later by a customer calling in.

  • Resolution — once aware, am I able to understand the issue, triage it, and resolve it within appropriate SLAs given the severity of the issue? (A sketch of severity-based SLAs follows this list.)

  • Retrospection — once the issue is behind us, how is the product or process improved so as to ensure the issue does not happen again?
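
To illustrate the Resolution point, here is a minimal sketch of severity-based resolution SLAs. The tiers and timeframes are illustrative assumptions; calibrate them to your own product and business.

```python
# Minimal sketch of severity-based resolution SLAs; the tiers and
# timeframes are illustrative, not prescriptive.
from datetime import timedelta

RESOLUTION_SLAS = {
    "sev1": timedelta(hours=4),  # product down, money or data at risk
    "sev2": timedelta(days=1),   # major feature degraded, workaround exists
    "sev3": timedelta(days=7),   # minor bug, limited customer impact
}

def within_sla(severity: str, elapsed: timedelta) -> bool:
    """True if the issue was resolved inside its severity's SLA."""
    return elapsed <= RESOLUTION_SLAS[severity]

print(within_sla("sev1", timedelta(hours=2)))  # True
print(within_sla("sev3", timedelta(days=9)))   # False
```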

Thanks for reading, feedback welcome!


