mdcraft.ai Phase 6 — Measurement and Experimentation System#

Objective#

Establish a practical analytics and experimentation framework that guides product and design iteration toward growth and profit.

Strategic alignment#

This phase supports:

  • quality-first differentiation
  • activation speed from homepage/workbench
  • monetization through value-based upgrades

Measurement framework#

North-star outcome#

Increase the number of users who repeatedly create professional exports and convert to paid plans.

Funnel model#

  1. Visit
  2. Quick-start initiated
  3. First preview rendered
  4. First export completed
  5. Repeat export (7-day and 30-day)
  6. Upgrade started/completed
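The funnel definition above implies a step-to-step conversion calculation that the dashboards will need. A minimal sketch, using illustrative stage counts rather than real mdcraft.ai data:

```typescript
// Step-to-step conversion for an ordered funnel.
type FunnelStep = { name: string; users: number };

function stepConversions(steps: FunnelStep[]): { step: string; rate: number }[] {
  const out: { step: string; rate: number }[] = [];
  for (let i = 1; i < steps.length; i++) {
    const prev = steps[i - 1].users;
    out.push({
      step: `${steps[i - 1].name} -> ${steps[i].name}`,
      // Guard against empty upstream stages early in rollout.
      rate: prev > 0 ? steps[i].users / prev : 0,
    });
  }
  return out;
}

// Illustrative counts for the first four stages.
const funnel: FunnelStep[] = [
  { name: "visit", users: 1000 },
  { name: "quickstart", users: 400 },
  { name: "preview", users: 300 },
  { name: "export", users: 240 },
];
```

Each transition maps directly onto one KPI in the acquisition/activation stack below.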

KPI stack#

Acquisition and activation#

  • visitor -> quick-start start rate
  • quick-start start -> preview rate
  • preview -> export completion rate
  • median time to first export

Retention#

  • 7-day repeat export rate
  • 30-day repeat export rate
  • exports per active user

Monetization#

  • free -> paid conversion rate
  • upgrade start -> upgrade completion rate
  • ARPU (average revenue per user), once revenue data matures

Trust and quality health metrics#

  • export failure rate
  • reverse-flow warning resolution rate
  • support tickets per 1000 exports

Event taxonomy (implementation-ready)#

  • page_home_viewed
  • home_quickstart_module_viewed
  • home_upload_started
  • home_paste_started
  • home_quickstart_submitted
  • workbench_viewed
  • workbench_mode_changed
  • preview_render_success
  • export_pdf_clicked
  • export_pdf_success
  • export_pdf_failure
  • reverse_pdf_upload_started
  • reverse_warning_shown
  • reverse_quick_fix_applied
  • reverse_export_success
  • upgrade_prompt_viewed
  • upgrade_prompt_clicked
  • checkout_started
  • checkout_completed
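To keep instrumentation honest to this taxonomy, the names can be encoded as a closed list so unknown events are rejected before they pollute the data. A minimal sketch; the guard function is illustrative, not an existing SDK API:

```typescript
// The taxonomy above, verbatim, as the single source of truth.
const EVENT_NAMES = [
  "page_home_viewed",
  "home_quickstart_module_viewed",
  "home_upload_started",
  "home_paste_started",
  "home_quickstart_submitted",
  "workbench_viewed",
  "workbench_mode_changed",
  "preview_render_success",
  "export_pdf_clicked",
  "export_pdf_success",
  "export_pdf_failure",
  "reverse_pdf_upload_started",
  "reverse_warning_shown",
  "reverse_quick_fix_applied",
  "reverse_export_success",
  "upgrade_prompt_viewed",
  "upgrade_prompt_clicked",
  "checkout_started",
  "checkout_completed",
] as const;

type MdcraftEvent = (typeof EVENT_NAMES)[number];

// Runtime guard for payloads arriving from untyped surfaces.
function isKnownEvent(name: string): name is MdcraftEvent {
  return (EVENT_NAMES as readonly string[]).includes(name);
}
```

The `MdcraftEvent` union also gives compile-time errors for typo'd event names at call sites.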

Data quality rules#

  1. Define event owners and naming conventions before rollout.
  2. Track every event with timestamp, user/session id, plan tier, and mode.
  3. Validate event integrity in staging before production use.
  4. Keep a single analytics glossary document and update it on every schema change.
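Rule 2 can be enforced mechanically at the tracking boundary. In this sketch the field names (`ts`, `sessionId`, `planTier`, `mode`) are assumptions for illustration, not a fixed schema:

```typescript
// Required context for every event, per data quality rule 2.
interface EventEnvelope {
  event: string;
  ts: string;        // ISO-8601 timestamp
  sessionId: string; // user or session identifier
  planTier: string;  // e.g. "free" or a paid tier (tier names assumed)
  mode: string;      // active workbench mode
}

// Returns a list of problems; an empty list means the envelope passes.
function validateEnvelope(e: Partial<EventEnvelope>): string[] {
  const errors: string[] = [];
  if (!e.event) errors.push("missing event name");
  if (!e.ts || Number.isNaN(Date.parse(e.ts))) errors.push("missing or invalid ts");
  if (!e.sessionId) errors.push("missing sessionId");
  if (!e.planTier) errors.push("missing planTier");
  if (!e.mode) errors.push("missing mode");
  return errors;
}
```

Running this check against staging traffic gives rule 3 a concrete pass/fail signal.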

Experimentation operating model#

Cadence#

  • biweekly experiment cycle
  • one primary hypothesis test at a time per funnel stage

Prioritization rubric#

Score each experiment by:

  • expected impact on KPI
  • confidence level
  • implementation effort
  • strategic alignment
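One way to make the rubric comparable across candidates is a simple numeric score. The 1-5 scale and the multiplicative form below are assumptions shown only to make the trade-off concrete, not a prescribed formula; effort divides, so cheap, aligned, high-impact tests rank first:

```typescript
// Scorecard over the four rubric criteria, each rated 1-5.
interface ExperimentScorecard {
  name: string;
  impact: number;     // expected impact on the target KPI
  confidence: number; // confidence the hypothesis is right
  effort: number;     // implementation effort (higher = costlier)
  alignment: number;  // strategic alignment
}

function priorityScore(s: ExperimentScorecard): number {
  return (s.impact * s.confidence * s.alignment) / s.effort;
}
```

Scoring the whole queue with one formula keeps prioritization arguments about inputs, not arithmetic.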

Primary experiment queue#

Activation experiments#

  1. Homepage hero CTA copy
  2. Quick-start module default mode
  3. Upload zone helper text and trust micro-copy placement

Workbench experiments#

  1. Control panel default density (simple vs expanded)
  2. Export button placement and label
  3. Reverse beta warning copy clarity

Monetization experiments#

  1. Upgrade timing after successful export
  2. Lock-card copy variants for premium controls
  3. Pricing section order and “best for” framing

Experiment template#

  • hypothesis
  • KPI target
  • variant definition
  • segment scope
  • success/fail criteria
  • decision date
  • next action

Guardrails for safe experimentation#

  • never degrade export reliability for test variants
  • avoid tests that hide critical warnings in reverse beta
  • stop any experiment that materially reduces the first-export completion rate
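The third guardrail benefits from a mechanical trigger. This sketch flags a variant whose first-export completion rate drops more than a chosen relative margin below control; the 10% default is an illustrative threshold, not a product decision:

```typescript
// Per-arm counts for the first-export step of the funnel.
interface ArmStats {
  started: number;   // users who initiated a first export
  completed: number; // users who completed it
}

function shouldStop(control: ArmStats, variant: ArmStats, maxRelativeDrop = 0.1): boolean {
  // Too little data: defer the decision rather than stop prematurely.
  if (control.started === 0 || variant.started === 0) return false;
  const cRate = control.completed / control.started;
  const vRate = variant.completed / variant.started;
  return cRate > 0 && (cRate - vRate) / cRate > maxRelativeDrop;
}
```

In practice this check would also want a minimum sample size and a significance test before halting anything.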

Dashboard blueprint#

Dashboard A: Executive#

  • weekly trend of visits, exports, paid conversions

Dashboard B: Activation#

  • funnel drop-off by step and segment

Dashboard C: Monetization#

  • prompt views, click-through, checkout completion

Dashboard D: Quality#

  • export failures and warning-heavy sessions

Phase rollout plan#

Step 1#

Instrument core activation/export events.

Step 2#

Ship first activation A/B tests.
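These first A/B tests need stable assignment so a returning session always sees the same variant. A minimal dependency-free sketch using an FNV-1a hash keyed on experiment and session ids (both ids below are hypothetical examples):

```typescript
// FNV-1a 32-bit hash: fast, dependency-free, deterministic.
function fnv1a(str: string): number {
  let h = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return h;
}

// 50/50 split keyed on both ids so experiments bucket independently.
// Split on the top of the hash range: FNV-1a's high bits mix better
// than its low bits.
function assignVariant(experimentId: string, sessionId: string): "control" | "variant" {
  return fnv1a(`${experimentId}:${sessionId}`) < 0x80000000 ? "control" : "variant";
}
```

Hash-based assignment needs no storage and replays identically in analytics queries, which keeps variant attribution consistent across events.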

Step 3#

Instrument upgrade funnel events and run monetization experiments.

Step 4#

Add cohort and retention tracking for longer-term optimization.

Acceptance criteria#

  1. KPI definitions are unambiguous and shared.
  2. Core funnel events are trackable end-to-end.
  3. Team can run at least one high-confidence experiment every two weeks.
  4. Decisions for homepage/workbench changes are based on measured outcomes.