What can you do with Whatagraph connected to your AI assistant?
Once you've connected Whatagraph to Claude or ChatGPT via MCP, your AI assistant can read your account data in real time: sources, metrics, report structure, account health, and more. Below are the most useful things you can do with it, along with the exact prompts to get you started.
Note: The MCP connection gives your AI assistant read access to your Whatagraph data. It does not create or modify reports, sources, or settings. Think of it as your assistant being able to look at your account alongside you.
Go further with ready-made skills
We provide a set of ready-made skills you can download from GitHub and upload to your AI platform under Customize → Skills. Once loaded, activate any skill by typing / and referencing it by name.
Skills don't unlock data that MCP can't already access. They change how the assistant works with that data. Each one either sequences multiple tool calls in a logical order, or brings structured interpretive knowledge so you don't have to piece the picture together yourself.
There are seven skills available, each covering a different workflow. Click the down arrow to see the full list and explanations.
fetching-marketing-metrics - guides the assistant through pulling data correctly: identifying source IDs, selecting the right report type per channel, and using the right metric names. The assistant can figure this out through a discovery step on its own, but the skill skips that back-and-forth.

generating-marketing-insights - turns raw numbers into a structured narrative. It checks your Whatagraph goals and overviews first, then produces an executive summary with wins, concerns, and specific next steps rather than a plain table.

cross-channel-analytics - handles the complexity of comparing channels: metric naming differences across platforms, currency variations, and attribution overlap warnings. Structures output as a channel comparison table.

analyzing-reports - adds a structured audit framework when reviewing reports: knows what structural issues to look for (missing overview tab, widget density, date context) rather than just returning raw structure data.

auditing-account-health - sequences a full account audit across subscription limits, orphan sources, spaces without reports, reports without automations, and goals coverage, all in one pass rather than separate questions.

exploring-account-data - navigates account structure in a logical order, combining integrations, sources, spaces, reports, and custom fields into a coherent picture.

troubleshooting-data-issues - brings knowledge of the most common causes of data discrepancies and works through them in the right sequence: source errors first, then metric naming, then filters, then known platform-specific issues like timezone differences and attribution windows.
Run a full account health audit
Ask your assistant to check individual health signals: broken sources, sources that are not assigned to any spaces, spaces without reports, reports without automations.
Try these prompts:
"Are there any broken or disconnected sources right now?"
"Show me all sources that are failing and which space they belong to."
"Which of my sources aren't assigned to any space?"
"Do I have any reports that aren't set up with automated delivery?"
"Are there spaces with no reports in them?"
"How close am I to my source credit limit?"
Analyze cross-channel performance
Ask for a unified view of performance across all connected ad platforms (spend, conversions, ROAS) without building a report or opening each platform individually.
Try these prompts:
"How much have I spent across all channels in the last 30 days?"
"Give me a channel comparison: impressions, clicks, spend, and ROAS across Google Ads, Meta, and LinkedIn."
"What percentage of total spend went to each channel this month?"
"Which channel has the best ROAS right now?"
"Fetch data from my [Blend Name] blend and show me the combined performance."
"Which channels are improving and which are declining compared to last month?"
Load the cross-channel-analytics skill when you want the assistant to flag the gotchas automatically: metric names differ between platforms, currencies may vary per source, and summing conversions across channels risks attribution overlap. The skill warns about these rather than silently producing inflated totals, and structures the output as a proper channel comparison table.
Note: If you haven't set up blends yet, the skill will query each source individually and aggregate the results. Blends give cleaner, faster output for recurring cross-channel queries.
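To see why the skill warns before summing, here is a minimal sketch of the two gotchas it checks for. The row data and field names are hypothetical, not the Whatagraph API:

```python
# Hypothetical per-channel rows, as an assistant might receive them.
rows = [
    {"channel": "Google Ads", "spend": 1200.0, "currency": "EUR", "conversions": 80},
    {"channel": "Meta",       "spend": 950.0,  "currency": "USD", "conversions": 64},
]

# Gotcha 1: currencies can vary per source, so summing spend directly
# would silently mix units.
currencies = {r["currency"] for r in rows}
if len(currencies) > 1:
    print(f"Warning: mixed currencies {sorted(currencies)}; convert before summing spend.")

# Gotcha 2: the same user may convert after touching several channels
# (attribution overlap), so a cross-channel total is an upper bound.
total_conversions = sum(r["conversions"] for r in rows)
print(f"Total conversions (upper bound, overlap not deduplicated): {total_conversions}")
```

This is the behavior the skill adds: flagging the caveats up front instead of returning an inflated, unqualified total.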
Get an executive summary before a client meeting
Instead of opening multiple tabs or scrolling through a report, ask for a plain-language summary of how a specific client's accounts are performing.
Try these prompts:
"Give me an executive summary of [Client Name]'s performance for the last 14 days."
"What were the top 5 campaigns by conversions for [Client Name] this month?"
"How did [Client Name]'s Meta campaigns perform compared to last month?"
"Summarize the key wins and issues across all sources in the [Space Name] space."
Use the generating-marketing-insights skill when you want the AI assistant to structure the response as an executive summary: headline metric first, then wins, areas of concern, and 2–3 specific recommended actions. It also checks whether you've set goals in Whatagraph and frames performance against those targets rather than in isolation.
Without the skill, you'll get a summary, but it might be less structured and won't reference your existing goals or overviews automatically.
Tip: If you name your Spaces after clients, your assistant can scope requests to that space directly — no need to list individual source names.
Identify your best and worst-performing campaigns
Use your assistant to surface what's working and what isn't across all connected sources, without manually filtering or sorting in each platform.
Try these prompts:
"Which campaigns have the lowest ROAS across all my sources this month?"
"Show me the top 10 campaigns by impressions across Google Ads and Meta for the last 7 days."
"Which ad sets have a CTR below 1% right now?"
"Find campaigns where spend has increased but conversions have dropped compared to last month."
Run a period-over-period performance analysis
Ask your assistant to compare this period against the last: useful for weekly check-ins, end-of-month reviews, or any time you need to explain a trend quickly.
Try these prompts:
"How did overall performance this month compare to last month across all paid channels?"
"Is my CTR improving or declining over the last 8 weeks?"
"Which channels improved most between Q1 and Q4?"
"Show me a week-over-week comparison of sessions and conversions from GA4."
Tip: Load the generating-marketing-insights skill when you want those numbers framed as a narrative that flags what moved, by how much, and what it implies, rather than as a plain table to interpret yourself.
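The arithmetic behind any of these comparisons is the same percentage-change calculation. A minimal sketch, with made-up sample numbers:

```python
def pct_change(current: float, previous: float) -> float:
    """Period-over-period change, expressed as a percentage."""
    return (current - previous) / previous * 100

# Hypothetical example: 4,830 sessions this week vs 4,200 last week.
print(f"Sessions: {pct_change(4830, 4200):+.1f}% week-over-week")  # +15.0%
```

The assistant does this math for you; the formula is shown here only so you can sanity-check the deltas it reports.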
Check goal progress mid-period
If you've set Goals in Whatagraph, your assistant can pull them and tell you where you stand, without opening the platform.
Try these prompts:
"How are we tracking against our goals this month?"
"Are we on pace to hit our conversion target for this quarter?"
"Which goals are at risk right now based on current performance?"
Track budget pacing
Ask your assistant to calculate where you are mid-month and whether you're on track, without logging into each ad platform.
Try these prompts:
"How much have I spent on Google Ads so far this month, and what's the daily average?"
"If my monthly budget for Meta is €5,000, how much should I have spent by today?"
"Which sources are pacing ahead of budget this month based on current spend?"
Note: The assistant calculates pacing from live spend data. For goal-line tracking against a specific budget target, set up Goals directly in Whatagraph, and the connector can then read those targets alongside current spend.
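The pacing check reduces to straight-line arithmetic: spread the monthly budget evenly across the days of the month and compare against actual spend. A minimal sketch of that calculation (illustrative only; the assistant may apply different weighting):

```python
import calendar
from datetime import date

def expected_spend(monthly_budget: float, today: date) -> float:
    """Straight-line pacing: budget spread evenly over the month's days."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return monthly_budget * today.day / days_in_month

# Hypothetical example: a €5,000 monthly budget, checked on 15 June (30 days).
print(expected_spend(5000, date(2025, 6, 15)))  # 2500.0
```

If actual spend is well above this number, the source is pacing ahead of budget; well below, it is pacing behind.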
Audit a report before it goes to a client
You can check report structure, automations, sharing settings, and export widget data directly.
Try these prompts:
"Is [Report Name] set up correctly?"
"What tabs does [Report Name] have, and what's on each one?"
"Are there any sources in [Report Name] with missing or zero data?"
"Does [Report Name] have automated delivery set up?"
"Who is [Report Name] currently shared with?"
"Show me the widget configuration on the overview tab of [Report Name]."
"Export the data from the summary widget in [Report Name] so I can check the numbers."
"Which reports don't have any automation scheduled?"
Load the analyzing-reports skill when you want the assistant to go beyond returning raw structure and flag what's actually wrong: reports missing a KPI overview tab, tabs with too many widgets, widgets without date context. These are the kinds of issues you'd only spot with experience.
Troubleshoot a data discrepancy
When numbers in Whatagraph don't match what a client sees in their ad platform, you can check source errors, widget filter configurations, and source group sync problems directly.
Try these prompts:
"My Google Ads spend in Whatagraph is lower than in Google Ads Manager, why?"
"The conversions on [Report Name] look wrong. Can you check the widget configuration?"
"Are there any filters applied to [Report Name] that might be excluding data?"
"One of my blends is showing inflated numbers. Can you check the source mapping?"
"Which sources in [Source Group Name] have sync issues?"
"Show me a day-by-day breakdown of [metric] for the last two weeks so I can spot where it diverges."
Load the troubleshooting-data-issues skill when you want the assistant to work through these in a structured sequence rather than respond to each question individually. It brings knowledge of the most common causes (timezone differences, attribution window mismatches, sync delays in source groups, metric naming differences across platforms) and checks for them in the right order.
Note: The assistant can identify the cause and explain it, but it can't make changes. Fixes such as reconnecting a source, adjusting a filter, or updating an attribution window need to be made in Whatagraph directly.
Explore an unfamiliar account
If you're taking over an account from a colleague or auditing a client setup for the first time, ask your assistant to map out what's there.
Try these prompts:
"What channels are connected in this account?"
"Give me an overview of how this account is structured — spaces, reports, sources."
"What custom metrics and dimensions have been set up?"
"Which sources are being used in reports and which aren't?"
"Search for anything related to [client name] across reports, spaces, and overviews."
Set up a new client workspace
When onboarding a new client, confirm what's connected, what's missing, and whether the structure is consistent with your other clients.
Try these prompts:
"What sources are currently connected in the [Space Name] space?"
"Does [Client Name]'s space have a Google Ads and GA4 source connected?"
"Compare the source setup in [Space A] and [Space B] — are they consistent?"
"Which of my spaces don't have any reports yet?"
"Which spaces don't have any goals set?"
What the MCP connector can and can't do
| Can do | Can't do |
| --- | --- |
| Read source names, statuses, and connected accounts | Create, edit, or delete reports |
| Pull live metric data from connected sources | Modify source connections or settings |
| Check report structure, tabs, widgets, sharing, and automations | Access data from sources that aren't connected in Whatagraph |
| Export widget data as CSV for verification | Push data back into Whatagraph |
| Read goals, overviews, blends, and source groups | Make changes when a discrepancy is found — fixes must be done in Whatagraph |
| Diagnose source errors, filter issues, and sync problems | |
| Search across reports, spaces, and overviews by name | |
If you haven't connected Whatagraph to your AI assistant yet, see Connect Whatagraph to your AI assistant with MCP.
