How to send raw data from Mixpanel to a data warehouse on the Growth plan?
On a Growth plan, there are two main ways to send “raw” Mixpanel data to your own environment:
1. Use Data Pipelines (paid add‑on)
With the Data Pipelines add‑on, Mixpanel continuously exports data from your project to either:
- Cloud storage (e.g., S3, GCS, Azure Blob), or
- Data warehouses (BigQuery, Redshift Spectrum, Snowflake).

This is available as an add‑on for Growth and Enterprise plans. [Data Pipelines overview; FAQ exporting]

a. Choose destination type
- Cloud storage (exports JSON files): S3, GCS, Azure Blob. [Data Pipelines overview]
- Data warehouse: BigQuery, Snowflake, Redshift Spectrum. [Integrations list]

b. Configure destination permissions
Examples (sketches for BigQuery and Snowflake follow after this list):
- BigQuery:
  - Create a dataset.
  - Grant export-upload@mixpanel-prod-1.iam.gserviceaccount.com the BigQuery Job User and BigQuery Data Owner roles on that dataset. [BigQuery]
- Snowflake (JSON pipelines):
  - Create a role (e.g., MIXPANEL_EXPORT_ROLE) and grant DB/schema/warehouse permissions.
  - Create a GCS storage integration pointing to gcs://mixpanel-export-pipelines-<project-id>.
  - Create a user (password or key‑pair) and grant the role to that user. [Snowflake]
- S3 / GCS / Azure Blob:
  - Give Mixpanel’s service account write access to your bucket/container (see the respective integration guides). [Data Pipelines overview]
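To make the BigQuery step more concrete, here's a minimal Python sketch using the google-cloud-bigquery client. The project and dataset names are placeholders, and the BigQuery Job User role is normally granted at the project level via IAM (console or gcloud) rather than on the dataset, so only the Data Owner part is shown here:

```python
from google.cloud import bigquery

# Placeholder project and dataset names; use your own.
client = bigquery.Client(project="my-gcp-project")
dataset = client.create_dataset("mixpanel_export", exists_ok=True)

# Grant Mixpanel's export service account OWNER access on the dataset
# (the dataset-level equivalent of the BigQuery Data Owner role).
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="OWNER",
        entity_type="userByEmail",
        entity_id="export-upload@mixpanel-prod-1.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```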
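For Snowflake, the setup is a handful of SQL statements; here's a hedged sketch run through snowflake-connector-python. The warehouse, database, schema, user, and integration names are placeholders, and the exact grants should come from the Snowflake integration guide:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder Snowflake account identifier
    user="ADMIN_USER",      # a user with rights to create roles/integrations
    password="***",
)
cur = conn.cursor()

# Role Mixpanel will use, plus the grants it needs (adjust to your objects).
cur.execute("CREATE ROLE IF NOT EXISTS MIXPANEL_EXPORT_ROLE")
cur.execute("GRANT USAGE ON WAREHOUSE MY_WH TO ROLE MIXPANEL_EXPORT_ROLE")
cur.execute("GRANT USAGE ON DATABASE MY_DB TO ROLE MIXPANEL_EXPORT_ROLE")
cur.execute("GRANT ALL ON SCHEMA MY_DB.MIXPANEL TO ROLE MIXPANEL_EXPORT_ROLE")

# Storage integration pointing at the Mixpanel export bucket for your project.
cur.execute("""
CREATE STORAGE INTEGRATION IF NOT EXISTS MIXPANEL_EXPORT_INTEGRATION
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'GCS'
  ENABLED = TRUE
  STORAGE_ALLOWED_LOCATIONS = ('gcs://mixpanel-export-pipelines-<project-id>/')
""")

# User Mixpanel connects as (key-pair auth is also an option).
cur.execute(
    "CREATE USER IF NOT EXISTS MIXPANEL_EXPORT_USER "
    "PASSWORD = '***' DEFAULT_ROLE = MIXPANEL_EXPORT_ROLE"
)
cur.execute("GRANT ROLE MIXPANEL_EXPORT_ROLE TO USER MIXPANEL_EXPORT_USER")
```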
c. Create the pipeline in the Mixpanel UI
1. In your Mixpanel project, go to Integrations.
2. Click Create Pipeline.
3. Select the destination (e.g., BigQuery or Snowflake) and data source (events / people / identity).
4. Fill in required config (project / dataset / region / credentials, etc.). [Data Pipelines overview; BigQuery]

JSON pipelines keep raw‑like data under a properties JSON column, plus standard columns such as distinct_id, event_name, time, etc. [Json Pipelines]

> Note: You cannot create multiple event pipelines that write to the same destination with overlapping date ranges. [Json Pipelines]
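To make that schema concrete, here's an illustrative query against an exported table in BigQuery via the Python client. The dataset, table, and property key below are made up; the actual table layout depends on your pipeline configuration:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical dataset/table names; adjust to what your pipeline created.
sql = """
SELECT
  distinct_id,
  time,
  JSON_EXTRACT_SCALAR(properties, '$.plan') AS plan  -- pull one key from the JSON blob
FROM `my-gcp-project.mixpanel_export.signed_up`
LIMIT 10
"""
for row in client.query(sql).result():
    print(row.distinct_id, row.time, row.plan)
```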
2. Use Export APIs (no add‑on cost, but not continuous sync)
If you don’t (or can’t) purchase the Data Pipelines add‑on, you can export raw data yourself and load it into your warehouse:
- Events: [Raw Data Export API]
- User profiles: [Engage API]

Both are free to use on all plans. [FAQ exporting]

You would then:
1. Script regular API pulls (e.g., daily) from Mixpanel (see the sketch below).
2. Land the JSON into your own storage.
3. Load it into your warehouse using your ETL tooling.

---

If you clarify your target destination (S3, BigQuery, Snowflake, etc.), I can point to the exact steps and permission snippets for that one.
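As a minimal sketch of step 1, a daily pull from the Raw Data Export API could look like this. The API secret and output path are placeholders, and projects with EU or India data residency use a different export hostname, so check the Raw Data Export API docs for your region:

```python
import datetime
import json

import requests

API_SECRET = "YOUR_PROJECT_API_SECRET"                 # placeholder
EXPORT_URL = "https://data.mixpanel.com/api/2.0/export"

# Pull yesterday's events; the endpoint takes YYYY-MM-DD dates.
day = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()

resp = requests.get(
    EXPORT_URL,
    params={"from_date": day, "to_date": day},
    auth=(API_SECRET, ""),  # project API secret as the basic-auth username
    stream=True,
    timeout=300,
)
resp.raise_for_status()

# The response is newline-delimited JSON, one event per line; land it as a
# .jsonl file that your ETL tooling can then load into the warehouse.
with open(f"mixpanel_events_{day}.jsonl", "w") as out:
    for line in resp.iter_lines():
        if line:
            out.write(json.dumps(json.loads(line)) + "\n")
```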
