# mdcraft.ai Phase 6 — Measurement and Experimentation System

## Objective
Establish a practical analytics and experimentation framework that guides product and design iteration toward growth and profit.
## Strategic alignment
This phase supports:
- quality-first differentiation
- activation speed from homepage/workbench
- monetization through value-based upgrades
## Measurement framework

### North-star outcome
Increase the number of users who repeatedly create professional exports and convert to paid plans.
### Funnel model
- Visit
- Quick-start initiated
- First preview rendered
- First export completed
- Repeat export (7-day and 30-day)
- Upgrade started/completed
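The step-to-step conversion implied by this funnel can be sketched in Python. The step names and counts below are illustrative placeholders, not real mdcraft.ai instrumentation:

```python
# Sketch: conversion rate from each funnel step to the next.
# Step names and counts are illustrative, not real product data.
FUNNEL_STEPS = [
    "visit",
    "quickstart_initiated",
    "first_preview_rendered",
    "first_export_completed",
]

def step_conversion(counts: dict[str, int]) -> dict[str, float]:
    """Return the conversion rate between each adjacent pair of steps."""
    rates = {}
    for prev, curr in zip(FUNNEL_STEPS, FUNNEL_STEPS[1:]):
        rates[f"{prev}->{curr}"] = counts[curr] / counts[prev] if counts[prev] else 0.0
    return rates

rates = step_conversion({
    "visit": 1000,
    "quickstart_initiated": 400,
    "first_preview_rendered": 300,
    "first_export_completed": 240,
})
# e.g. rates["visit->quickstart_initiated"] == 0.4
```

Computing rates per adjacent pair (rather than only end-to-end) is what makes drop-off by step visible on the activation dashboard.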
### KPI stack

#### Acquisition and activation
- visitor -> quick-start start rate
- quick-start start -> preview rate
- preview -> export completion rate
- median time to first export
#### Retention
- 7-day repeat export rate
- 30-day repeat export rate
- exports per active user
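The repeat-export rate can be computed directly from per-user export timestamps. A minimal sketch, assuming a simple in-memory mapping of user id to export times (the data shape is an assumption, not the product's actual store):

```python
from datetime import datetime, timedelta

def repeat_export_rate(export_times: dict[str, list[datetime]],
                       window_days: int) -> float:
    """Share of exporting users who exported again within `window_days`
    of their FIRST export. Input maps user id -> export timestamps."""
    if not export_times:
        return 0.0
    repeaters = 0
    for times in export_times.values():
        first = min(times)
        # A strictly later export inside the window counts as a repeat.
        if any(first < t <= first + timedelta(days=window_days) for t in times):
            repeaters += 1
    return repeaters / len(export_times)
```

Calling it with `window_days=7` and `window_days=30` yields the two retention KPIs above from the same data.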
#### Monetization
- free -> paid conversion rate
- upgrade start -> upgrade completion rate
- average revenue per user (ARPU), once enough paid-plan data has accrued
### Trust and quality health metrics
- export failure rate
- reverse-flow warning resolution rate
- support tickets per 1000 exports
### Event taxonomy (implementation-ready)

- `page_home_viewed`
- `home_quickstart_module_viewed`
- `home_upload_started`
- `home_paste_started`
- `home_quickstart_submitted`
- `workbench_viewed`
- `workbench_mode_changed`
- `preview_render_success`
- `export_pdf_clicked`
- `export_pdf_success`
- `export_pdf_failure`
- `reverse_pdf_upload_started`
- `reverse_warning_shown`
- `reverse_quick_fix_applied`
- `reverse_export_success`
- `upgrade_prompt_viewed`
- `upgrade_prompt_clicked`
- `checkout_started`
- `checkout_completed`
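One way to keep the taxonomy enforceable in code is to gate emission on a canonical name set and attach the common properties at a single call site. A sketch under those assumptions (`build_event` and its signature are illustrative, not an existing mdcraft.ai API):

```python
import time

# Canonical event names; a subset shown here for brevity — the full
# set is the taxonomy listed above.
ALLOWED_EVENTS = {
    "page_home_viewed", "home_quickstart_submitted", "preview_render_success",
    "export_pdf_success", "export_pdf_failure", "reverse_warning_shown",
    "upgrade_prompt_viewed", "checkout_started", "checkout_completed",
}

def build_event(name: str, user_id: str, session_id: str,
                plan_tier: str, mode: str) -> dict:
    """Reject unknown names and attach the common properties that the
    data quality rules require on every event."""
    if name not in ALLOWED_EVENTS:
        raise ValueError(f"unknown event name: {name}")
    return {
        "event": name,
        "timestamp": time.time(),
        "user_id": user_id,
        "session_id": session_id,
        "plan_tier": plan_tier,
        "mode": mode,
    }
```

Funneling all emission through one builder means a renamed or misspelled event fails loudly in staging instead of silently fragmenting the taxonomy.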
### Data quality rules
- Define event owners and naming conventions before rollout.
- Track every event with timestamp, user/session id, plan tier, and mode.
- Validate event integrity in staging before production use.
- Maintain a single analytics glossary and update it on every schema change.
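The staging validation rule can be reduced to a batch check that every event carries the required common properties. A minimal sketch (field names follow the rules above; the validator itself is illustrative):

```python
# Common properties required on every event per the data quality rules.
REQUIRED_FIELDS = {"event", "timestamp", "user_id", "session_id",
                   "plan_tier", "mode"}

def validate_events(events: list[dict]) -> list[str]:
    """Return human-readable problems found in a batch of staged events.
    An empty result means the batch passes the required-field check."""
    problems = []
    for i, ev in enumerate(events):
        missing = REQUIRED_FIELDS - ev.keys()
        if missing:
            problems.append(f"event {i}: missing fields {sorted(missing)}")
    return problems
```

Running this against staging traffic before each release is one cheap way to satisfy the "validate event integrity in staging" rule.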
## Experimentation operating model

### Cadence

- a two-week experiment cycle
- one primary hypothesis test at a time per funnel stage
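Running one test at a time per stage works best when variant assignment is deterministic, so a user sees the same variant across sessions. A common hash-based bucketing sketch (the function and its parameters are illustrative, not the product's actual assignment service):

```python
import hashlib

def assign_variant(experiment: str, user_id: str, variants: list[str]) -> str:
    """Deterministically bucket a user: the same experiment + user pair
    always maps to the same variant, independent of call order or time."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Keying the hash on the experiment name as well as the user id keeps assignments independent across experiments.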
### Prioritization rubric
Score each experiment by:
- expected impact on KPI
- confidence level
- implementation effort
- strategic alignment
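The four rubric dimensions can be combined into a single comparable score. One common shape is an ICE-style ratio, where effort divides rather than multiplies; the 1–5 scale and the exact formula below are assumptions, not a stated product decision:

```python
def priority_score(impact: int, confidence: int,
                   effort: int, alignment: int) -> float:
    """Combine the rubric's four dimensions (each scored 1-5) into one
    number; higher impact/confidence/alignment raise priority, higher
    effort lowers it."""
    return (impact * confidence * alignment) / effort

# Example: high impact, moderate confidence, low effort, strong alignment.
score = priority_score(impact=4, confidence=3, effort=2, alignment=5)
# score == 30.0
```

Any monotone combination works; the point is to score every queued experiment the same way so the queue order is defensible.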
### Primary experiment queue

#### Activation experiments
- Homepage hero CTA copy
- Quick-start module default mode
- Upload zone helper text and trust micro-copy placement
#### Workbench experiments
- Control panel default density (simple vs expanded)
- Export button placement and label
- Reverse beta warning copy clarity
#### Monetization experiments
- Upgrade timing after successful export
- Lock-card copy variants for premium controls
- Pricing section order and “best for” framing
### Experiment template
- hypothesis
- KPI target
- variant definition
- segment scope
- success/fail criteria
- decision date
- next action
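The template above maps naturally onto a small typed record, which keeps experiment write-ups uniform and machine-readable. A sketch (the class and field names mirror the template; nothing here is an existing internal tool):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    """One experiment write-up, one field per template item."""
    hypothesis: str
    kpi_target: str
    variant_definition: str
    segment_scope: str
    success_criteria: str
    decision_date: date
    next_action: str

exp = Experiment(
    hypothesis="Benefit-led hero CTA copy raises quick-start starts",
    kpi_target="visitor -> quick-start start rate",
    variant_definition="control copy vs benefit-led copy",
    segment_scope="new visitors, all plans",
    success_criteria="+10% relative lift, no drop in export completion",
    decision_date=date(2025, 7, 1),
    next_action="ship winner or archive with learnings",
)
```

Because every field is required, an experiment cannot enter the queue with a missing decision date or success criterion.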
### Guardrails for safe experimentation
- never degrade export reliability for test variants
- avoid tests that hide critical warnings in reverse beta
- stop any experiment that materially reduces the first-export completion rate
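The third guardrail can be made mechanical with a simple stop rule comparing variant and control export completion. A sketch; the 5% relative-drop threshold is an illustrative default, not a product decision:

```python
def should_stop(control_export_rate: float, variant_export_rate: float,
                max_relative_drop: float = 0.05) -> bool:
    """True if the variant's first-export completion rate has fallen more
    than `max_relative_drop` (relative) below control."""
    if control_export_rate == 0:
        return False  # no baseline to compare against
    drop = (control_export_rate - variant_export_rate) / control_export_rate
    return drop > max_relative_drop
```

Evaluating this on every dashboard refresh turns the guardrail from a policy statement into an automatic kill switch.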
## Dashboard blueprint

### Dashboard A: Executive

- weekly trend of visits, exports, and paid conversions

### Dashboard B: Activation

- funnel drop-off by step and segment

### Dashboard C: Monetization

- upgrade prompt views, click-through, and checkout completion

### Dashboard D: Quality

- export failures and warning-heavy sessions
## Phase rollout plan

### Step 1

Instrument core activation and export events.

### Step 2

Ship the first activation A/B tests.

### Step 3

Instrument upgrade funnel events and run monetization experiments.

### Step 4

Add cohort and retention tracking for longer-term optimization.
## Acceptance criteria
- KPI definitions are unambiguous and shared.
- Core funnel events are trackable end-to-end.
- Team can run at least one high-confidence experiment every two weeks.
- Decisions for homepage/workbench changes are based on measured outcomes.