Why do Experiment metric results differ from Funnel results for the same metric?
Experiment metrics and Funnel metrics can differ because they’re calculated in different ways and often over different populations. Key reasons from the docs:

1. Population / attribution differences (see the sketch below)
- Experiments only consider users after they fire the exposure event ($experiment_started) and attribute all later events to the variant they saw. [Analysis engine]
- Funnels simply look at whoever did the funnel steps in the date range, regardless of experiment exposure.
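A toy illustration of that population difference, as a minimal Python sketch. The event log and timestamps are hypothetical, not Mixpanel's actual engine:

```python
from datetime import datetime

# Hypothetical event log: (user, event, timestamp)
events = [
    ("u1", "$experiment_started", datetime(2024, 1, 10)),
    ("u1", "Purchase",            datetime(2024, 1, 12)),
    ("u2", "Purchase",            datetime(2024, 1, 11)),  # never exposed
    ("u3", "$experiment_started", datetime(2024, 1, 14)),
    ("u3", "Purchase",            datetime(2024, 1, 9)),   # purchased BEFORE exposure
]

# Funnel-style population: anyone who did the step in the date range.
funnel_users = {u for u, e, _ in events if e == "Purchase"}

# Experiment-style population: only exposed users, and only events that
# happen at or after their exposure time count toward the metric.
exposure = {u: t for u, e, t in events if e == "$experiment_started"}
experiment_users = {
    u for u, e, t in events
    if e == "Purchase" and u in exposure and t >= exposure[u]
}

print(funnel_users)      # {'u1', 'u2', 'u3'} -- three converters
print(experiment_users)  # {'u1'}             -- only post-exposure behavior counts
```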
2. Metric definition & normalization (worked numbers below)
- In Experiments, “group rate” is always normalized by exposed users, not by converters or raw event counts. [Read lift]
- Funnels can be configured as uniques, total conversions, sessions, etc., and “totals” in Funnels are total conversions, not total events. [Funnels FAQ]
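A worked example of why the denominators alone can make the two reports disagree; all numbers are hypothetical:

```python
# Hypothetical counts for one variant
exposed_users     = 1_000  # fired $experiment_started for this variant
funnel_entrants   = 400    # users who did funnel step 1 in the date range
converters        = 120    # unique users who completed the conversion event
total_conversions = 150    # some users converted more than once

# Experiment "group rate": always normalized by exposed users.
group_rate = converters / exposed_users             # 0.12

# Funnel (uniques): normalized by whoever entered the funnel.
funnel_rate_uniques = converters / funnel_entrants  # 0.30

# Funnel (totals): counts total conversions, not total events.
print(group_rate, funnel_rate_uniques, total_conversions)
```

Same converters, same events, yet 12% vs 30%, purely because of the denominator.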
3. Statistical model & variance (sketch below)
- Experiments use specific statistical models (Poisson/Bernoulli/normal) and compute p‑values, confidence intervals, and lift based on those assumptions. [Interpret results; Read lift]
- Funnels don’t apply this experiment-specific normalization and variance modeling; they just compute the funnel metric directly.
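A minimal sketch of the kind of variance modeling involved, assuming a Bernoulli metric and a normal approximation for the interval. This is illustrative only; Mixpanel's exact computation may differ:

```python
import math

def bernoulli_ci(converters: int, exposed: int, z: float = 1.96):
    """Conversion rate with a 95% normal-approximation confidence interval."""
    p = converters / exposed
    se = math.sqrt(p * (1 - p) / exposed)
    return p, (p - z * se, p + z * se)

# Hypothetical control vs variant
p_control, ci_control = bernoulli_ci(100, 1_000)  # 10.0%
p_variant, ci_variant = bernoulli_ci(120, 1_000)  # 12.0%

lift = (p_variant - p_control) / p_control        # +20% relative lift
print(f"control={p_control:.3f} CI={ci_control}")
print(f"variant={p_variant:.3f} CI={ci_variant}")
print(f"lift={lift:+.1%}")
```

A Funnels report would simply show 10% vs 12% with no interval or p-value, so the two reports are answering different questions even when the raw rate matches.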
4. Counting logic & re-entry (compared below)
- Funnels have their own rules for re-entry, conversion windows, optimized re-entry, and how multiple attempts are counted. [Optimized re-entry; Funnels FAQ]
- Experiments look at all behavior after exposure (up to 90 days) and attribute it to the variant, independent of funnel re-entry rules. [Analysis engine]
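A simplified comparison of the two counting windows. This deliberately ignores re-entry and treats exposure as the funnel entry step; the dates and the 30-day window are made up:

```python
from datetime import datetime, timedelta

exposure_time  = datetime(2024, 1, 1)
purchase_times = [datetime(2024, 1, 20), datetime(2024, 3, 15)]

# Funnel-style: only conversions inside the conversion window after the
# entry step count (window length is a report setting; 30 days here).
conversion_window = timedelta(days=30)
funnel_conversions = sum(
    exposure_time <= t <= exposure_time + conversion_window
    for t in purchase_times
)  # 1 -- the March purchase falls outside the window

# Experiment-style: all behavior up to 90 days after exposure is attributed.
experiment_window = timedelta(days=90)
experiment_conversions = sum(
    exposure_time <= t <= exposure_time + experiment_window
    for t in purchase_times
)  # 2

print(funnel_conversions, experiment_conversions)
```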
5. Query / filter differences
- As with other report discrepancies, even small differences in filters, properties (event vs user), time ranges, or aggregations (uniques vs totals) will cause mismatches. [Data discrepancies]

If you want them closer, align the following (a sketch of the user-set alignment follows this list):
- Same user set (e.g., filter Funnels to users who fired $experiment_started for that experiment/variant).
- Same metric definition (conversion vs total events, uniques vs totals).
- Same time range and filters.
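In the Mixpanel UI this alignment is just a report filter or cohort, but the idea can be sketched on exported event data. The DataFrame and column names below are hypothetical:

```python
import pandas as pd

# Hypothetical export of raw events; column names are made up.
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3", "u3", "u4"],
    "event":   ["$experiment_started", "Purchase", "Purchase",
                "$experiment_started", "Purchase", "$experiment_started"],
})

# Restrict the funnel population to users exposed to the experiment,
# mirroring a Funnels filter on $experiment_started.
exposed = set(events.loc[events["event"] == "$experiment_started", "user_id"])
aligned = events[events["user_id"].isin(exposed)]

converters = aligned.loc[aligned["event"] == "Purchase", "user_id"].nunique()
print(converters / len(exposed))  # uniques conversion over the exposed set
```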
