Product Analytics Project Outline

Prepared by: Robert Tseng
Date: 10/20/2025
Audience: Alicia, Phoebe, Ashley

These analytics initiatives are designed to equip the Product org with a single, trustworthy system of insight for ReadMe’s self-serve funnel. Our goal is to help the Head of Product quickly answer:

  1. What features drive activation and revenue?
  2. Which pricing or trial experiments are working?
  3. How do behaviors and cohorts evolve over time?

Across all workstreams (Funnel, Feature, Infrastructure, Retention), success means reducing time-to-insight and improving decision velocity — moving from manual data checks to reliable, experiment-ready dashboards that inform product and pricing decisions within days, not weeks.

Conversion Funnel Optimization

Context

ReadMe is rolling out a set of pricing and packaging changes over the next several weeks. These changes primarily involve feature adjustments across tiers, new feature launches, and the reintroduction of a free trial (which we previously offered but sunset in November 2024).

While the long-term strategy may involve consolidating from four paid plans to three, for now we are keeping four plans and adding a trial in the coming weeks. You can see the latest pricing and packaging updates reflected here: https://readme.com/pricing

We need directional metrics to determine whether these pricing/packaging changes positively or negatively impact conversion.

P0 - Evaluate Feature Usage Impact on PLG Conversion

By Driving Actions…

  • Pushing users towards high-conversion signal features

What do I want to know?

  • What is the current conversion at each step of the funnel?
    • QA conversions by funnel step: # signups, # paid plan purchases - TICKET
  • How does this vary by plan type (Free, Startup, Business, Enterprise)? Trial vs. non-trial?
    • Build taxonomy for trial plan paid user categories - TICKET
      • May have something like this in Mongo
    • Build user segments based on activity - TICKET
      • e.g., first project created
  • Can we break out the funnel by different product features (e.g., Git Bidi)?
    • What are the most commonly used features that result in better conversion? - TICKET
      • Can currently track button presses/form submissions, but not really full workflows around feature sets
        • e.g., for the AI feature set: a user opens the AI chat and enters some number of queries to reach the answer they want; what actions are they taking along the way?
        • What are users actually doing with these features? This is a mix of qual and quant, since we can combine session recordings with tracked event data
    • A/B testing playbook for product changes - TICKET
      • We have a playbook that works for this use case
    • Ideally we track every product field in Amplitude so we can filter accordingly
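As a rough sketch of the kind of before/after check an A/B playbook would formalize, a two-proportion z-test can flag whether a conversion shift is likely real. All numbers below are illustrative, not actual ReadMe data:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a / n_a: conversions and sample size in variant A (e.g., pre-change)
    conv_b / n_b: conversions and sample size in variant B (e.g., post-change)
    Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: 200/5000 convert pre-change, 260/5000 post-change.
z, p = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2), round(p, 4))
```

This is the directional check only; the playbook would still govern sample-size planning and how long to run before reading results.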

Why do I want to know it?

  • Current conversion at each step tells us where the biggest drop-offs are
  • Conversion segmented by plan type helps us figure out differences between paid vs. unpaid behavior
  • Conversion segmented by feature usage tells us which features lead to better conversion, and which features aren’t being tracked yet

So what?

  • Are we seeing any shifts in conversion after introducing the trial and other pricing updates?
    • Not sure when exactly the trial and pricing updates were introduced
  • Is self-serve conversion improving? Is upsell conversion improving via AI booster pack?
    • Haven’t seen the AI booster pack flow; unsure if that changes pricing flow
  • What percentage of trial users convert to paid vs. downgrade after the 14-day period?
    • Don’t have downgrade being tracked currently; does “Subscription Success” capture a downgrade?
    • Would be great to have a staging CC to test pricing events

Measured by?

  The key funnel stages we want to track are:
  • Sign Up: user enters name + email
  • Creates Project (Activation): user creates a project and enters the trial period
  • Launches Project (Paid): user becomes a paying customer and exits the trial
  • Downgrades from Trial: user downgrades after the 14-day trial period
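A minimal sketch of how stage-to-stage conversion could be computed once these events exist, assuming hypothetical event names (`Sign Up`, `Project Created`, `Project Launched`) that would map to the stages above:

```python
from collections import defaultdict

# Hypothetical event names; the actual Amplitude taxonomy is TBD.
FUNNEL = ["Sign Up", "Project Created", "Project Launched"]

def funnel_conversion(events):
    """events: iterable of (user_id, event_name) tuples.

    Returns [(stage, user_count, conversion_from_prior_stage), ...],
    where a user counts at a stage only if they hit every prior stage."""
    users_by_stage = defaultdict(set)
    for user_id, event_name in events:
        if event_name in FUNNEL:
            users_by_stage[event_name].add(user_id)

    reached = None  # users who made it through all prior stages
    report = []
    for stage in FUNNEL:
        stage_users = (users_by_stage[stage] if reached is None
                       else users_by_stage[stage] & reached)
        rate = len(stage_users) / len(reached) if reached else None
        report.append((stage, len(stage_users), rate))
        reached = stage_users
    return report

events = [
    ("u1", "Sign Up"), ("u2", "Sign Up"), ("u3", "Sign Up"),
    ("u1", "Project Created"), ("u2", "Project Created"),
    ("u1", "Project Launched"),
]
for stage, n, rate in funnel_conversion(events):
    print(stage, n, rate)
```

In practice this would run inside Amplitude's funnel charts; the sketch just makes the "count users who completed all prior stages" semantics explicit for the QA ticket.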

Feature & Pricing Experimentation

Context

ReadMe is actively iterating on its feature adoption and pricing strategy to strengthen its product-led growth (PLG) motion and improve conversion from self-serve users. The near-term goal is to better understand how feature usage - especially of AI-powered tools like Docs Agent and Style Guide AI - correlates with activation, conversion, and retention.

We are simultaneously preparing a set of pricing and packaging experiments that include updated plan features, usage limit adjustments, and the reintroduction of a free trial (previously sunset in November 2024).

This workstream will provide directional metrics to evaluate whether these pricing and feature changes are improving PLG conversion efficiency, user activation, and revenue per active account.

P1 - AI Feature Usage Exploration

By Driving Actions…

What do I want to know?

  • Are AI features driving users to higher revenue per account?
  • What’s a standardized way of evaluating the engagement from a new feature set?
  • Does early engagement with AI or multi-feature usage predict higher conversion or retention?
  • What point in the user journey should trigger an upgrade prompt or paywall?

Why do I want to know it?

Measured by?

User Behavior Data Infrastructure

Context

ReadMe is continuing to build on its product analytics foundation through Amplitude Tracking V2, a top-priority (“P1”) initiative to make Amplitude the reliable, single source of truth for user behavior, experimentation, and revenue insights.

Currently, MongoDB and AWS handle transactional and version data, but nearly all behavioral data comes from Amplitude’s client-side autotracker. This limits accuracy for key conversion and monetization events - especially those that must be logged server-side (e.g., payments, signups, subscription success). As ReadMe scales its PLG motion and experiments with new pricing, trials, and AI feature adoption, we need a more trustworthy, experiment-ready data layer that accurately reflects what users do and how those actions tie to revenue.

The goal of this project is to rebuild Amplitude tracking to capture high-fidelity, server-validated events, enabling the Head of Product and analytics stakeholders to understand how feature engagement drives conversion, retention, and plan upgrades.

P1 - Amplitude Tracking V2

By Driving Actions…

What do I want to know?

  • Select milestone events that need to move from client-side to server-side - TICKET
    • Payment events, signups, and any high-fidelity events that need to be 100% correct
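For reference, a minimal sketch of what a server-side milestone event could look like via Amplitude's HTTP V2 API. Event names and properties here are illustrative (the actual taxonomy is TBD), and in practice `send_events` would be called from the backend, e.g. after a verified Stripe webhook:

```python
import json
import time
import urllib.request

AMPLITUDE_HTTP_V2 = "https://api2.amplitude.com/2/httpapi"

def build_event(user_id, event_type, event_properties=None):
    """Build one event payload for Amplitude's HTTP V2 API."""
    return {
        "user_id": user_id,
        "event_type": event_type,
        "time": int(time.time() * 1000),  # milliseconds since epoch
        "event_properties": event_properties or {},
    }

def send_events(api_key, events):
    """POST a batch of events to Amplitude server-side, so payment and
    signup events don't depend on the client-side autotracker."""
    body = json.dumps({"api_key": api_key, "events": events}).encode()
    req = urllib.request.Request(
        AMPLITUDE_HTTP_V2,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example payload only; "Subscription Success" naming is illustrative.
evt = build_event("u123", "Subscription Success", {"plan": "Business"})
print(evt["event_type"])
```

The key property of this path is that the event fires after the backend has already confirmed the state change, which is what makes these events trustworthy for revenue reporting.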

Why do I want to know it?

  • Currently reliant on the Amplitude autotracker
  • Not sending server-side events into Amplitude
    • Opposite problem of Ellie; Ellie only sends server events from Azure and Healthfully
    • ReadMe doesn’t send anything from Mongo or any other sources; it’s all just the JS autotracker

Measured by?

  The key funnel stages we want to track are:
  • Sign Up: user enters name + email
  • Creates Project (Activation): user creates a project and enters the trial period
  • Launches Project (Paid): user becomes a paying customer and exits the trial
  • Downgrades from Trial: user downgrades after the 14-day trial period

User Activation & Retention

Context

ReadMe is investing in User Activation & Retention to better understand what drives users from initial signup to long-term engagement and conversion within its self-serve PLG motion. While much of the current focus has been on data reliability and pricing experimentation, this workstream is about diagnosing how user behaviors translate to value realization, conversion, and retention - and which actions or combinations of actions are the best predictors of success.

Today, we know users sign up, create projects, and sometimes interact with features like Guides, Reference, Recipes, and the new AI tools, but we don’t yet have a clear picture of:

  • Which actions truly indicate “activation”
  • How those early behaviors correlate with upgrade likelihood or retention
  • How long users typically take to convert
  • What differentiates power users from casual testers

By answering these questions, we can prioritize onboarding, feature prompts, and product-led upsell paths that increase trial-to-paid conversion and retention. This work will also clarify what an “activated” ReadMe user really looks like.

Product Usage to Conversion Impact Analysis

By Driving Actions…

  • Identify feature combinations that have the highest correlation with paid conversion

What do I want to know?

  • What is an “activated” user in ReadMe?
    • Do they use References and then Guides?
      • Low Volume of Reference users
      • References usage = API Definition Form submission
      • New users need to have created a project in past 30D/90D
    • What other features do they need to use to convert?
  • What are the features that users use during their first few weeks?
    • References and Guide Usage doesn’t significantly drive more paid conversions
  • Do AI features drive more usage across the product?
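One hacky way to start answering the "what do activated users combine" question: tally conversion rates per feature combination from per-user feature sets. A sketch with made-up feature names and data; the real inputs would come from Amplitude event exports:

```python
from collections import Counter
from itertools import combinations

def combo_conversion(users):
    """users: list of (feature_set, converted) pairs per account.

    Returns conversion rate for every feature combination of size 1-2,
    keyed by a sorted tuple of feature names."""
    seen = Counter()
    converted = Counter()
    for features, did_convert in users:
        combos = set()
        for k in (1, 2):
            combos.update(combinations(sorted(features), k))
        for c in combos:
            seen[c] += 1
            if did_convert:
                converted[c] += 1
    return {c: converted[c] / n for c, n in seen.items()}

# Made-up data: which features each account touched, and whether it paid.
users = [
    ({"Guides", "Reference"}, True),
    ({"Guides"}, False),
    ({"Guides", "Reference", "Recipes"}, True),
    ({"Recipes"}, False),
]
rates = combo_conversion(users)
print(rates[("Guides", "Reference")])
```

Combination counts get sparse fast, so in practice this would be capped at pairs (as here) and filtered to combos with enough users to be meaningful.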

Why do I want to know it?

  • Identify early behaviors that signal high purchase intent
  • Identify friction points in feature usage to further investigate with users

Measured by?

Feature Usage Buyer Signaling

By Driving Actions…

What do I want to know?

Why do I want to know it?

Measured by?

Retention Levers Deep Dive

By Driving Actions…

What do I want to know?

Why do I want to know it?

Measured by?

ARCHIVE

Action Items / Acceptance Criteria

•	Data Trust & Event Semantics

•	Why do Amplitude and backend (Mongo/Stripe) numbers differ? Which events are server-side vs client-side today?

•	Is “Subscription Success” truly server-originated via Stripe/webhooks, and what is the canonical event we should use?

•	For any funnel step derived from multiple raw events (e.g., plan change → upgrade/downgrade), document the exact derivation.

•	Feature → Conversion (Activation Cocktails)

•	Do users who engage Reference and/or Guides within early windows (Day 1 / 7 / 30) convert at higher rates? Are there converters who don’t use those core features—what are they using instead (e.g., Changelog, Recipes)?

•	Can we run a one-time “hacky” cohort cut of combinations (Guides, Reference, Recipes, Changelog, etc.) to identify top activation cocktails?

•	Revisit earlier OAS upload insight: does uploading an API definition (anywhere vs onboarding) increase conversion when measured over longer windows than Day 1?

•	AI Usage, Stickiness, and Pricing/Packaging

•	Who is using AI, how quickly do they first use it, and how often in first 1/3/7 days?

•	What % of users hit the current limit (5)? Provide usage distributions (p50/p90/p95) and simulate alternative limits.

•	What’s the failure/error rate for AI actions (e.g., agent/audit/linter), and do errors correlate with churn or non-conversion?

•	Are certain style-guide settings or response types associated with higher engagement or conversion?

•	Does early AI usage (first few days) predict higher conversion or retention vs non-AI users?
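For the limit-simulation question above, a small sketch of the percentile and what-if-limit math. The counts below are made up; real inputs would be per-user AI query counts pulled from Amplitude:

```python
def percentile(values, p):
    """Nearest-rank percentile over raw per-user query counts."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

def simulate_limit(counts, limit):
    """Share of users who would hit a given usage cap."""
    return sum(1 for c in counts if c >= limit) / len(counts)

# Made-up per-user AI query counts for illustration.
counts = [0, 1, 1, 2, 3, 5, 5, 6, 9, 12]
print(percentile(counts, 50), percentile(counts, 90), percentile(counts, 95))
print(simulate_limit(counts, 5))  # fraction hitting the current limit of 5
```

Running `simulate_limit` across a range of candidate limits gives the "% of users capped" curve needed to compare alternative limits against the current value of 5.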

•	Customization & “Serious Buyer” Signals

•	Do users who configure branding/theme (colors, logo) convert at higher rates vs those who don’t?

•	Funnel Views & Time Series

•	Provide week-over-week conversion (time series) and cohort views (time-to-convert from Project Created).

•	Track two funnels separately: (a) overall try→convert self-serve funnel, (b) checkout flow optimization (Attempted Launch → Manage Plan View → Plan Change → Subscription Success).

•	Deliverables

•	A concise insights memo addressing the questions above (clear “what we know / don’t know,” charts, and recommended next steps).

•	A list of instrumentation or data model changes needed to answer any still-open questions.