Yes, this is possible in Mixpanel Insights, particularly when using a table chart or line chart with a breakdown applied.
Here's how it works:
Line Chart
When you have a line chart with a breakdown and a year-on-year comparison applied, the accompanying data table can be sorted by clicking a column header (such as the comparison/lift column); clicking again reverses the order, so you can surface the segments with the biggest decline at the top. [Sorting Line Chart]
Table Chart
The table chart offers more flexible sorting options. You can:
Sort by value ascending or descending across any metric column
Use the "Compare to Past" feature to add a YoY comparison, then sort by that comparison column to find the biggest declines
If you enable "Percent Change over Baseline" (lift mode) in your time comparison, you'll see the percentage change for each segment, and you can then sort ascending by that lift value to show the largest YoY declines at the top. [Sorting; Comparisons]
Steps to achieve this:
Set up your Insights report with a breakdown
Click "Compare to Past" → select Previous Year
Optionally enable "Percent Change over Baseline" to see lift values
In the table, click the comparison/lift column header and sort Value Ascending to show the biggest declines first
ℹ️ Tip: Mention @kapa.ai in the thread for follow-ups or DM the app for 1-to-1 help.
There are several ways to check the properties of your custom event in Mixpanel:
1. Use the Events View (Live View)
The Events report lets you see events and all their properties in real time as they flow into your project. [Debugging guide]
Navigate to Data > Events in the left-hand navigation.
Search or filter by distinct_id, $user_id, $device_id, or a user property value to find your own events.
Click the > arrow to expand an event and inspect all its properties.
Use the Your Properties tab for custom properties and Mixpanel Properties for default ones.
Toggle JSON mode to view the complete raw JSON payload, which is especially useful for spotting data type inconsistencies.
2. Use Lexicon
Lexicon (Data Management > Lexicon) is Mixpanel's data dictionary and shows metadata for all your events and their properties, including display names, descriptions, example values, and status. [Lexicon overview]
Go to Event Properties in Lexicon to see which properties are associated with your events, their data types, and how frequently they appear.
3. Use the User Profile Activity Feed
The User Profile Activity Feed lets you inspect all events for a specific user, including their properties, in chronological order. [QA & data audit]
4. Use the Mixpanel Data Inspector (Browser Extension)
If you're testing a web implementation, the Mixpanel Data Inspector browser extension shows events and their full JSON payloads in real time as they're sent from your site. [Data Inspector]
Key things to verify when checking properties:
All expected properties are present.
Property values match what you intended.
Data types are correct (e.g., numeric values are not sent as strings).
Property names use the correct casing — Mixpanel is case-sensitive. [Debugging guide]
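As a quick sanity check against that list, here is a hypothetical event payload; the event and property names are invented for illustration, not taken from the docs:

```javascript
// Hypothetical payload; event and property names are examples only.
const properties = {
  "Plan Type": "pro",   // string
  "Seat Count": 3,      // number, not "3" (that would be a string)
  "Is Trial": false,    // boolean
};
// With the JS SDK loaded, this would be sent as:
// mixpanel.track("Subscription Started", properties);
console.log(typeof properties["Seat Count"]); // "number"
```

Because Mixpanel is case-sensitive, "Plan Type" and "plan type" would show up as two distinct properties, which is exactly the kind of inconsistency the JSON view helps you spot.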
Great question! The tip about consolidating queries refers to using the Segmentation API with a where filter expression, so you can query multiple events or filter conditions in a single API call instead of making separate requests.
Here's a practical example using the Query API. Suppose you want to count events for both "Purchase" and "Add to Cart" in one query, filtered by a property (e.g., country == "US"):
Instead of making two separate API calls (one per event), you can use JQL to combine them into a single query:
function main() {
  return Events({
    from_date: params.start_date,
    to_date: params.end_date,
    event_selectors: [
      { event: "Purchase" },
      { event: "Add to Cart" }
    ]
  })
  .filter(function(e) {
    return e.properties.country == "US";
  })
  .groupBy(["name"], mixpanel.reducer.count());
}
This single query:
Fetches both events at once via event_selectors
Filters them by the same property (country == "US")
Groups results by event name so you get a count per event
[JQL params example; groupBy docs]
Alternatively, if you're using the Segmentation REST API (not JQL), you can use the where parameter with an expression like:
where=properties["country"] == "US"
But note the Segmentation API only accepts a single event per call (event parameter), so for multiple events, JQL is the better approach to truly consolidate into one query. [Query API; rate limits]
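As a rough sketch, a Segmentation request with that where filter could be assembled like this. The endpoint path, dates, and event name are placeholders, and authentication is omitted entirely; verify everything against the Query API reference:

```javascript
// Illustrative only: builds a Segmentation query string. Endpoint path,
// parameter names, and auth (omitted here) should be checked against the
// Query API reference for your region and API version.
const params = new URLSearchParams({
  event: "Purchase",                       // Segmentation accepts ONE event per call
  from_date: "2024-01-01",                 // placeholder date range
  to_date: "2024-01-31",
  where: 'properties["country"] == "US"',  // the filter expression from above
});
const url = "https://mixpanel.com/api/query/segmentation?" + params.toString();
console.log(url);
```

The one-event-per-call limit is visible here: to cover "Add to Cart" as well, you would need a second request, which is why JQL is the better consolidation route.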
The knowledge sources I have access to don't contain specific information about why you might see two mp_ cookies from the same domain in Mixpanel.
However, based on what the docs do mention, here are a couple of relevant configuration points that could be worth investigating:
Subdomain cookie behavior: By default, the Mixpanel JavaScript SDK sets its cookie so that it is shared across subdomains. If that default is overridden, or if pages on different subdomains initialize the SDK with different configurations, separate cookies can end up being created. [Mixpanel Cookie]
Multiple Mixpanel instances: If your page initializes Mixpanel more than once (e.g., with different project tokens or instance names), each instance would create its own mp_ cookie.
localStorage vs. cookie persistence: The SDK supports different persistence modes (cookie vs. localStorage), and mismatched configurations could potentially lead to unexpected behavior. [Mixpanel Cookie]
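If you want to rule out configuration drift, the relevant SDK options can be pinned explicitly. A minimal sketch, assuming the mixpanel-browser SDK; the token and instance name are placeholders, while cross_subdomain_cookie and persistence are documented init options:

```javascript
// Placeholder token and instance name; the two option names below are
// documented mixpanel-browser init options.
const initOptions = {
  cross_subdomain_cookie: false,  // scope the mp_ cookie to the exact current domain
  persistence: "localStorage",    // or "cookie"; mixing modes across pages invites surprises
};
// mixpanel.init("YOUR_PROJECT_TOKEN", initOptions, "my_instance");
console.log(initOptions.persistence);
```

Pinning the same options on every page that loads the SDK makes it much easier to tell whether a second mp_ cookie comes from configuration or from a second init call.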
For a definitive answer on duplicate mp_ cookies, I'd recommend:
Checking if mixpanel.init() is being called more than once on your page with different configurations or instance names.
Reaching out to Mixpanel Support directly, as this specific scenario isn't covered in the knowledge sources I have available.
Sorry Mohamad Q., I don't have permission to access files in this workspace. Please ask a workspace admin to reinstall the kapa.ai bot to grant the required permission.
Funnels and Flows are complementary but distinct reports in Mixpanel:
Funnels
Funnels measure conversion through a predefined sequence of events within a set time window. You define the specific steps in order, and Mixpanel calculates how many users completed each step, where they dropped off, and how long conversion took. [Funnels Overview]
Key characteristics:
You specify the exact steps and their order upfront.
The focus is on conversion rates between steps.
Useful for answering questions like "What % of users completed my signup flow within 7 days?" or "At which step did most users drop off?"
Flows
Flows identifies the most frequent paths users actually take to or from any event — without requiring you to predefine every step. It's an exploratory tool that reveals organic user behavior. [Flows Overview]
Key characteristics:
You anchor on one or two events and let Mixpanel surface what happened before or after.
The focus is on path discovery, not a fixed conversion sequence.
Useful for answering questions like "What did users do immediately after signing up?" or "What actions lead up to a key event?"
How They Work Together
The two reports are designed to complement each other. From within a Funnels report, you can click any step and select "View as Flow" to explore what dropped-off users did instead, or what paths converted users took — bridging structured conversion analysis with open-ended path exploration. [View as Flow]
In short: Funnels answer "how well are users converting through a known sequence?" while Flows answer "what paths are users actually taking?"
Even if your observed ratio looks close to 50/50, an SRM alert can still fire when the deviation from the configured split is statistically significant: on a large sample, even a small but consistent skew can trigger it.
The knowledge sources don't go into further detail about why an SRM alert might appear despite an apparently balanced split, but based on what's documented, here are the most likely explanations to investigate:
Exposure events not firing consistently across variants — if one variant's exposure event fires more reliably than the other, the recorded split may look balanced while the actual assignment is not. [Health Checks]
Users being reassigned mid-experiment — if some users switch variants, this can distort the analysis even if the overall count looks even. [Health Checks]
Bugs in bucketing logic — the assignment may appear balanced on the surface but have subtle issues affecting specific user segments. [SRM Detection]
The recommended course of action remains the same regardless: pause the experiment, investigate the root cause, and restart after fixing it. [Advanced Statistical Methods]
For a deeper diagnosis, I'd recommend checking whether your exposure event ($experiment_started) is firing at the right moment and consistently for both variants — not at assignment time, but at actual exposure. [Experiments]
An "imbalance alert" in the context of Mixpanel Experiments refers to a Sample Ratio Mismatch (SRM) — a health check that Mixpanel runs automatically on all experiments.
What triggers an SRM (Imbalance) alert?
You get this alert when the actual distribution of users across your variants doesn't match your configured split in a statistically significant way. For example, if you set up a 50/50 split but end up with 60% in control and 40% in treatment, Mixpanel flags this as a potential problem. [Health Checks]
Mixpanel uses a Chi-squared test to determine whether the observed imbalance is likely due to random chance or signals a real issue. [Health Checks]
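For intuition, that check can be sketched in a few lines. This is an illustrative goodness-of-fit calculation under a 50/50 expected split, not Mixpanel's actual implementation:

```javascript
// Illustrative SRM check via a Chi-squared goodness-of-fit statistic
// (df = 1 for two variants). Not Mixpanel's exact implementation.
function chiSquare(observed, expectedRatios) {
  const total = observed.reduce((a, b) => a + b, 0);
  return observed.reduce((sum, obs, i) => {
    const exp = total * expectedRatios[i];
    return sum + (obs - exp) ** 2 / exp;
  }, 0);
}

// A 52/48 observed split on 20,000 users: small skew, large sample.
const stat = chiSquare([10400, 9600], [0.5, 0.5]);
const CRITICAL_1DF_P05 = 3.841; // Chi-squared critical value, df = 1, p = 0.05
console.log(stat, stat > CRITICAL_1DF_P05 ? "flag SRM" : "looks like chance");
```

Here the statistic is 32, far above the 3.841 threshold, so even a 52/48 split gets flagged at this sample size — which is why an alert can fire on a split that looks roughly even.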
Common causes
Bugs in your randomization or bucketing logic
Exposure events not firing consistently across variants
Users being reassigned to different variants mid-experiment
Differences in the start times of each variant
Technical issues like broken redirects [Health Checks; SRM Detection]
What should you do?
The recommended action is to:
Pause the experiment
Identify and fix the root cause of the mismatch
Restart the experiment after fixing it [Advanced Statistical Methods]
💡 As noted in the previous conversation, enabling CUPED can help mitigate the impact of pre-existing group bias, but it doesn't fix a broken randomization setup — that needs to be resolved at the source.
Great question. The knowledge sources don't directly address whether segmenting new vs. existing visitors replaces the need for CUPED, but here's what the documentation does tell us that's relevant:
What CUPED does:
CUPED reduces variance by accounting for how users were already behaving before the experiment. It helps isolate the true treatment effect by controlling for pre-existing behavioral differences between groups — not by segmenting users, but by adjusting the outcome measurement. [CUPED]
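For intuition, a CUPED-style adjustment can be sketched as a simple regression adjustment. This is illustrative only; Mixpanel's internal implementation isn't described in the docs:

```javascript
// Illustrative CUPED-style variance reduction, not Mixpanel's internals.
// y: in-experiment metric per user; x: the same metric measured pre-experiment.
function cupedAdjust(y, x) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(x);
  const my = mean(y);
  const cov = x.reduce((s, xi, i) => s + (xi - mx) * (y[i] - my), 0);
  const varX = x.reduce((s, xi) => s + (xi - mx) ** 2, 0);
  const theta = cov / varX; // slope of y regressed on x
  // Subtract the part of y explained by pre-experiment behavior.
  return y.map((yi, i) => yi - theta * (x[i] - mx));
}

// Perfectly correlated pre/post data: all pre-existing variation is removed,
// and every adjusted value collapses to the mean.
const adjusted = cupedAdjust([1, 2, 3, 4], [1, 2, 3, 4]);
console.log(adjusted); // [ 2.5, 2.5, 2.5, 2.5 ]
```

Note the adjustment changes per-user values but not the group mean, which is how it reduces noise without biasing the measured effect; for new users with no pre-experiment data (x = 0 for everyone), varX is 0 and no adjustment is possible, matching the point below.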
Where your segmentation approach helps:
Monitoring new and existing visitors separately is a sound practice — it avoids survivorship bias and surfaces different behavioral patterns. The documentation explicitly recommends using segmentation (e.g., new vs. returning users) to understand nuances in experiment results. [Reading Results]
Why CUPED may still be useful:
For your existing users segment specifically, CUPED would be most applicable — these users have pre-experiment history, which is exactly what CUPED leverages. [CUPED]
For new users, CUPED offers little benefit since they have no prior behavioral data (they'd be assigned a value of zero for the pre-exposure metric). [CUPED]
Bottom line: Segmenting new vs. existing visitors and using CUPED serve different purposes — they're complementary, not interchangeable. Segmentation helps you analyze subgroups correctly; CUPED helps you detect real effects faster within a group by reducing noise. The knowledge sources don't state that one eliminates the need for the other.
