Can the Experiments UI models work for this two-stage test, where we want to measure the lift in revenue per user exposed to the A/B test? The first test is whether a user converts (binary test). Then, among those who convert, we want to test whether one group spends more (Gaussian test).
Mixpanel's Experiments report can handle what you're trying to measure, but not as a literal "two-stage" model in the UI; instead, you express it as metrics that already incorporate both stages.

From the docs:
- Metrics in Experiments are classified as count, rate, or value and are analyzed with the appropriate distribution (Poisson, Bernoulli, normal). [Interpret results]
- Value metrics (like averages and sums) are treated with a normal approximation, and custom formula metrics such as Revenue per User = Total Revenue ÷ Unique Users are explicitly supported; Mixpanel uses propagation of uncertainty to estimate their variance and significance. [How to read lift]
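If you want to sanity-check what those distribution choices imply outside of Mixpanel, here's a minimal sketch of the two underlying tests: a two-proportion z-test for the Bernoulli/rate metric and Welch's t-test for the normal/value metric. The per-user arrays are made up for illustration, and this is a generic approximation, not Mixpanel's exact implementation:

```python
# Generic sketch of the two metric classes; not Mixpanel's internals.
import numpy as np
from scipy import stats

# Rate metric (Bernoulli): 1 = user converted, 0 = did not (hypothetical exports).
conv_a = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # control
conv_b = np.array([1, 1, 1, 0, 1, 0, 1, 1])   # treatment

p_a, p_b = conv_a.mean(), conv_b.mean()
p_pool = np.concatenate([conv_a, conv_b]).mean()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / len(conv_a) + 1 / len(conv_b)))
z = (p_b - p_a) / se
print("conversion lift p-value:", 2 * stats.norm.sf(abs(z)))  # two-proportion z-test

# Value metric (normal approximation): spend among converters only.
spend_a = np.array([12.0, 30.0, 18.5, 22.0])
spend_b = np.array([25.0, 40.0, 31.0, 19.5, 28.0])
t, p = stats.ttest_ind(spend_b, spend_a, equal_var=False)     # Welch's t-test
print("spend-per-converter lift p-value:", p)
```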
For your case:
1. Stage 1 (conversion): define a rate metric for conversion (e.g., "Checkout completion rate"). This is a Bernoulli / rate metric in Experiments. [Interpret results]
2. Stage 2 (spend among converters): define a value metric scoped to converters (e.g., average order value, or total revenue per converter). This is treated as a normal / value metric. [Interpret results]
3. Overall "Revenue per user exposed": define a custom formula metric like Revenue per User Exposed = Total Revenue ÷ Users Exposed. Mixpanel explicitly supports this pattern and will compute lift and significance using propagation of uncertainty (see the sketch after this list). [How to read lift]
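To see how the two stages roll up into the metric you actually care about, here's a back-of-the-envelope sketch of revenue per exposed user as conversion rate × mean spend among converters, with a delta-method variance (a standard propagation-of-uncertainty approximation). The data and the exact formula are illustrative, not Mixpanel's documented internals:

```python
# Hypothetical two-part decomposition: E[revenue per exposed user]
#   = P(convert) * E[spend | convert]
import numpy as np
from scipy import stats

def revenue_per_exposed(converted, spend_of_converters):
    """Mean revenue per exposed user and an approximate variance."""
    n = len(converted)
    p = converted.mean()                          # stage 1: conversion rate
    m = spend_of_converters.mean()                # stage 2: mean spend among converters
    var_p = p * (1 - p) / n
    var_m = spend_of_converters.var(ddof=1) / len(spend_of_converters)
    # Delta method: Var(p * m) ≈ m^2 * Var(p) + p^2 * Var(m)
    return p * m, m**2 * var_p + p**2 * var_m

# Hypothetical per-variant data: exposure-level conversion flags,
# plus spend values for the users who converted.
val_a, var_a = revenue_per_exposed(np.array([1, 0, 1, 0, 0, 1]),
                                   np.array([20.0, 35.0, 15.0]))
val_b, var_b = revenue_per_exposed(np.array([1, 1, 0, 1, 0, 1]),
                                   np.array([30.0, 45.0, 25.0, 38.0]))

lift = val_b - val_a
z = lift / np.sqrt(var_a + var_b)
print(f"lift: {lift:.2f} per exposed user, p-value: {2 * stats.norm.sf(abs(z)):.3f}")
```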
What the UI does not expose (per the docs) is a special "two-stage" test type where you first test conversion and then conditionally test spend as a separate statistical model. Instead, you:
- Track both metrics (conversion rate and revenue metrics) in the same experiment as primary / secondary / guardrail metrics. [References]
- Let Experiments compute lift, p-values, and confidence intervals for each metric independently. [Interpret results]

So: you can measure lift in "Revenue per user exposed to the A/B test" and also separately see conversion and post-conversion spend, but you'll be doing this via multiple metrics (including a formula metric), not via a dedicated two-stage modeling option in the Experiments UI.
