Is there any issue with Mixpanel MCP and Claude right now?
The knowledge sources don't mention any current or time-specific outages between Mixpanel MCP and Claude, so I can't say whether there is an active problem "right now."
What they do cover are common setup and usage issues you can check:
- Make sure MCP is enabled by an org admin in Settings → Org → Overview in Mixpanel.[Getting started]
- Confirm you've configured Claude with the correct MCP server URL (US/EU/IN) and completed the Mixpanel OAuth authorization (see the config sketch after this list).[Implementation guide]
- If you're on Claude Free (desktop), ensure Node.js is installed and the mcp-remote command was run successfully with the proper scopes.[Implementation guide]
- For generic MCP issues, the docs suggest: install Node if missing, verify Mixpanel permissions if authorization fails, and restart the desktop app after config changes.[MCP troubleshooting]
- If you're still seeing problems after these checks, the docs recommend contacting Mixpanel support via the help center.[Install troubleshooting]
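For orientation, a Claude Desktop entry for a remote MCP server via `mcp-remote` usually follows the shape below. This is a generic sketch, not Mixpanel's documented config: the `"mixpanel"` label is arbitrary, and the URL placeholder must be replaced with the region-appropriate server URL from the Implementation guide.

```json
{
  "mcpServers": {
    "mixpanel": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "<YOUR_REGION_MIXPANEL_MCP_SERVER_URL>"]
    }
  }
}
```

(Restart Claude Desktop after editing the config, per the troubleshooting note above.)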
ℹ️ Tip: Mention @kapa.ai in the thread for follow-ups or DM the app for 1-to-1 help.
@kapa.ai Here's our current retry implementation:

Proxy Retry Logic (MixpanelProxyClient)

```kotlin
// Constants
private val RETRYABLE_STATUS_CODES = setOf(429, 502, 503, 504)
private const val RATE_LIMIT_BODY_MARKER = "Too many requests"
private const val MAX_RETRIES = 6
private const val INITIAL_DELAY_MS = 2_000L // Start at 2s
private const val MAX_DELAY_MS = 60_000L    // Cap at 60s
private const val JITTER_RANGE_MS = 5_000L  // 1-5s random jitter

// Retry loop inside forward()
var lastResponse: MixpanelProxyResponse? = null
for (attempt in 0..MAX_RETRIES) {
    if (attempt > 0) {
        // Exponential backoff: 2s → 4s → 8s → 16s → 32s → 60s (capped) + 1-5s jitter
        val exponentialDelay = (INITIAL_DELAY_MS * (1 shl (attempt - 1))).coerceAtMost(MAX_DELAY_MS)
        val jitter = Random.nextLong(1_000L, JITTER_RANGE_MS + 1)
        val delayMs = exponentialDelay + jitter
        delay(delayMs)
    }

    val response = try {
        // Forward to api.mixpanel.com
        httpClient.post(targetUrl) { ... }
    } catch (e: Exception) {
        // Network failure → synthetic 502, will be retried
        MixpanelProxyResponse(statusCode = HttpStatusCode.BadGateway, body = """{"error": "proxy_error"}""")
    }

    // Normalize: 504 with "Too many requests" body → 429
    // (Mixpanel returns 429, but an intermediary LB/proxy surfaces it as 504)
    val normalizedStatus =
        if (response.statusCodeValue != 429 && response.body.contains(RATE_LIMIT_BODY_MARKER)) 429
        else response.statusCodeValue

    // Never retry 400: validation errors always fail and waste rate-limit quota
    if (normalizedStatus == 400) return response

    // Success or non-retryable error → return immediately
    if (normalizedStatus !in RETRYABLE_STATUS_CODES) return response

    // Otherwise: retryable (429/502/503/504) → loop continues with backoff
    lastResponse = response
}

// All 6 retries exhausted → return last response (biz layer logs + Slack alerts)
return lastResponse!! // safe: the loop body always runs at least once
```

Backoff Sequence (per attempt)

| Attempt | Base Delay   | + Jitter (1-5s) | Total Range |
|---------|--------------|-----------------|-------------|
| 1       | 2s           | 1-5s            | 3-7s        |
| 2       | 4s           | 1-5s            | 5-9s        |
| 3       | 8s           | 1-5s            | 9-13s       |
| 4       | 16s          | 1-5s            | 17-21s      |
| 5       | 32s          | 1-5s            | 33-37s      |
| 6       | 60s (capped) | 1-5s            | 61-65s      |

Total retry window: ~2.5 minutes worst case before giving up.

What We Handle

- 429 / 502 / 503 / 504 → retry with exponential backoff
- 504 with "Too many requests" body → normalize to 429, then retry (catches LB-masked rate limits)
- 400 (validation error) → never retry, return immediately (per Mixpanel docs)
- Network exceptions → treated as 502, retried with same backoff
- Jitter → random 1-5s per attempt to desynchronize parallel requests
Your implementation is very close to Mixpanel's recommended pattern and aligns well with the docs.
From the docs for /import and /track:
- Mixpanel rate limit: 2GB of uncompressed JSON/minute or ~30k events/sec, on a rolling 1-minute basis.[Import rate limits; Track limits]
- Recommended strategy:
  - Send with concurrency until you see 429.
  - On 429, 502, 503: use exponential backoff with jitter, starting at 2s, doubling up to 60s, with 1–5s jitter (distilled in the sketch after this list).[Import rate limits; Track limits]
- Do not retry 400 validation errors.[Import rate limits; GCS integration]
- Retry 429 and 5xx with exponential backoff + jitter (the same guidance appears in the GCS and Pub/Sub examples).[GCS integration; PubSub error handling]
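To make the comparison concrete, here's that documented rule distilled into a minimal Kotlin sketch (the function name is illustrative, not a Mixpanel API; assumes attempt >= 1):

```kotlin
import kotlin.math.min
import kotlin.random.Random

// Documented backoff rule: 2s initial delay, doubled each attempt,
// capped at 60s, plus 1-5s of uniform random jitter.
fun backoffDelayMs(attempt: Int): Long {
    val base = min(2_000L shl (attempt - 1), 60_000L) // 2s, 4s, 8s, ..., 60s
    val jitter = Random.nextLong(1_000L, 5_001L)      // uniform 1s..5s
    return base + jitter
}
```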
How your logic compares:
- Retryable codes: 429, 502, 503, 504 → matches the docs (they explicitly call out 429, 502, 503; 504 is a reasonable extension and consistent with their guidance to treat 5xx as retryable).[Import rate limits]
- Backoff: starts at 2s, doubles, capped at 60s, with 1–5s jitter → exactly what Mixpanel recommends.[Import rate limits]
- 400 handling: you immediately return and don't retry → explicitly recommended.[Import rate limits; PubSub error handling]
- Network exceptions → synthetic 502: consistent with "retry 5xx with the same backoff."[Import rate limits]
- Normalizing 504 with a "Too many requests" body to 429: not mentioned in the docs, but it's compatible with their guidance (treating rate-limit responses as retryable with backoff). There's no conflict with any documented behavior.

The docs don't specify a maximum number of retries or a total retry window; they just say to use exponential backoff up to 60s and keep retrying on 429/5xx.[Import rate limits] Your choice of 6 retries (~2.5 minutes worst case) is a product decision and is not contradicted by the documentation.
So, based on the available docs, your retry strategy is fully aligned with Mixpanel's recommendations, and there's nothing in the knowledge sources that suggests a problem with what you're doing.
ℹ️ Tip: Mention @kapa.ai in the thread for follow-ups or DM the app for 1-to-1 help.
The knowledge sources don't specify any maximum total retry window (in minutes); they only prescribe how to retry, not how long:
- On 429, 502, 503 (and by extension 5xx), use exponential backoff with jitter, starting at 2s, doubling up to 60s, with 1–5s jitter.[Import events; Track events]
- Do not retry 400 validation errors.[Import events; PubSub errors]
They don't say whether you should stop after 2 minutes, 10 minutes, or longer. That cutoff is left to your application's requirements (latency SLOs, queue depth, user experience, etc.), and there's no guidance in the docs that would make a 10-minute retry window "unsafe" from Mixpanel's perspective as long as:

- You respect the rate limit of ~30k events/sec or 2GB of uncompressed JSON/minute.[Rate limits; Import events]
- You only retry 429/5xx and never retry 400.[Import events]
So increasing your max retry window to ~10 minutes is a product/UX decision; the Mixpanel docs neither require nor forbid it.
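If it helps to size that choice, here's a rough calculator using the same parameters as your proxy (worstCaseWindowMs is a hypothetical helper, and the attempt counts are plain arithmetic, not a documented recommendation):

```kotlin
import kotlin.math.min

// Worst-case cumulative wait for a given retry budget, assuming maximum
// jitter (5s) on every attempt; parameters mirror the proxy constants.
fun worstCaseWindowMs(maxRetries: Int): Long {
    var total = 0L
    for (attempt in 1..maxRetries) {
        val base = min(2_000L shl (attempt - 1), 60_000L) // 2s doubling, capped at 60s
        total += base + 5_000L                            // worst-case jitter
    }
    return total
}

fun main() {
    // MAX_RETRIES = 6  → 152s (~2.5 min), matching the table above
    // MAX_RETRIES = 13 → 607s (~10 min)
    for (retries in listOf(6, 13)) {
        println("MAX_RETRIES=$retries → ${worstCaseWindowMs(retries) / 1000}s worst case")
    }
}
```

By this arithmetic, raising MAX_RETRIES to around 13 (every attempt past the sixth waits the capped 60s plus jitter) puts the worst-case window near 10 minutes.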
ℹ️ Tip: Mention @kapa.ai in the thread for follow-ups or DM the app for 1-to-1 help.
