id | category | subcategory | difficulty | type | question | option_a | option_b | option_c | option_d | answer | answer_criteria | explanation | tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
meta_001 | meta_ads | campaign_structure | easy | mcq | What is the correct hierarchy of a Meta Ads account structure? | Campaign → Ad Set → Ad | Ad Set → Campaign → Ad | Campaign → Ad → Ad Set | Ad → Ad Set → Campaign | A | [] | Meta Ads follows a three-level hierarchy: Campaign (objective) → Ad Set (targeting, budget, schedule) → Ad (creative). | ["campaign_structure", "fundamentals"] |
meta_002 | meta_ads | campaign_structure | medium | mcq | Advantage Campaign Budget (formerly CBO) distributes budget at which level? | Ad level | Ad Set level | Campaign level | Account level | C | [] | Advantage Campaign Budget (CBO) manages budget at the campaign level, dynamically allocating spend across ad sets based on performance opportunities. | ["cbo", "budget", "campaign_structure"] |
meta_003 | meta_ads | campaign_structure | medium | mcq | Which Meta campaign objective should you choose to drive purchases on your e-commerce website? | Awareness | Traffic | Engagement | Sales | D | [] | The Sales objective optimizes for conversion events like purchases. Traffic drives clicks but does not optimize for downstream conversion actions. | ["objectives", "ecommerce", "conversions"] |
meta_004 | meta_ads | campaign_structure | hard | mcq | An e-commerce brand has 5 product categories. What is the recommended campaign structure to maximize Meta's learning phase efficiency? | 1 campaign per product category, 1 ad set each, multiple ads | 1 consolidated campaign with CBO, 1 ad set per category, 3–5 ads each | 1 campaign per product, 10 ad sets with different audiences, 1 ad each | 5 campaigns per category with ABO, audience duplicated across all ad sets | B | [] | Consolidation with CBO allows Meta's algorithm to allocate budget efficiently. Fewer ad sets with more ads per set helps each ad set exit the learning phase faster by concentrating conversion data. | ["learning_phase", "cbo", "consolidation", "structure"] |
meta_005 | meta_ads | audience_targeting | easy | mcq | What is a Lookalike Audience on Meta? | An audience retargeted from your website visitors | A new audience that shares similar characteristics with your existing customers | An audience based on interest and demographic targeting | A saved audience used across multiple campaigns | B | [] | A Lookalike Audience uses a source audience (e.g., your customers) and finds new users on Meta who share similar behaviors and characteristics. | ["lookalike", "audience", "prospecting"] |
meta_006 | meta_ads | audience_targeting | medium | mcq | What minimum source audience size does Meta recommend for creating a high-quality Lookalike Audience? | 100 people | 500 people | 1,000 people | 10,000 people | C | [] | Meta recommends a source audience of at least 1,000 people for creating Lookalike Audiences, though larger, higher-quality sources (1,000–50,000) yield better results. | ["lookalike", "audience_size", "best_practices"] |
meta_007 | meta_ads | audience_targeting | medium | mcq | Advantage+ Audience (formerly broad targeting) is best used when: | You have no pixel data and are launching for the first time | You have strong conversion signals and want Meta to find the best audiences algorithmically | Your audience is very niche and requires strict demographic constraints | You are running awareness campaigns with no conversion tracking | B | [] | Advantage+ Audience works best when you have sufficient conversion data (pixel signals) allowing Meta's algorithm to identify high-value users without manual audience restrictions. | ["advantage_plus", "broad_targeting", "algorithm"] |
meta_008 | meta_ads | audience_targeting | hard | action_based | A DTC brand is seeing audience fatigue in its top-performing retargeting ad set (7-day website visitors). Frequency has reached 8.5 over 30 days, CTR dropped 40%, and CPA increased 65%. What actions should the performance marketer take? | | | | | | ["Expand retargeting window to 14 or 30-day visitors to refresh the audience pool", "Introduce new creative variations immediately to combat creative fatigue", "Create exclusions for recent purchasers to reduce wasted spend", "Test broader retargeting audiences (e.g., video viewers, Instagram engagers)", "Consider reducing budget on this ad set and shifting to prospecting or LAL audiences", "Monitor frequency cap settings or set manual frequency caps"] | High frequency, declining CTR, and rising CPA are classic signs of audience fatigue. The core fixes are creative refresh, audience expansion, and exclusions. | ["audience_fatigue", "frequency", "retargeting", "optimization"] |
meta_009 | meta_ads | bidding_strategy | easy | mcq | Which Meta bid strategy gives the algorithm the most freedom to find conversions at the lowest possible cost? | Cost Cap | Bid Cap | Highest Volume (Lowest Cost) | ROAS Goal | C | [] | Highest Volume (Lowest Cost) lets Meta's algorithm spend the full budget on the maximum number of conversions without any cost constraint, giving it maximum flexibility. | ["bid_strategy", "lowest_cost", "algorithm"] |
meta_010 | meta_ads | bidding_strategy | medium | mcq | A brand wants to maintain an average CPA of $25 while scaling spend. Which bid strategy is most appropriate? | Bid Cap at $25 | Cost Cap at $25 | Highest Volume | ROAS Goal | B | [] | Cost Cap tells Meta to aim for an average CPA around your set amount, allowing some bids above and below to optimize delivery. Bid Cap hard-limits individual bids, which can restrict delivery too aggressively. | ["cost_cap", "cpa", "bid_strategy", "scaling"] |
meta_011 | meta_ads | bidding_strategy | hard | mcq | A campaign using Cost Cap at $30 CPA is under-delivering and spending only 40% of its daily budget. What is the most likely root cause? | The audience is too broad | The Cost Cap is set too low relative to the market, limiting auction eligibility | The campaign is in the learning phase | The ad creative has low relevance score | B | [] | When Cost Cap is set too aggressively low, Meta cannot find enough auction opportunities within that cost target, causing under-delivery. The fix is to gradually raise the Cost Cap or temporarily use Lowest Cost to understand true market CPA. | ["cost_cap", "under_delivery", "auction", "troubleshooting"] |
meta_012 | meta_ads | bidding_strategy | hard | action_based | You manage a Meta campaign for a subscription product with a $45 target CPA. Your current Lowest Cost campaign is delivering at $38 CPA with good ROAS. Leadership wants to scale budget 3x this week. What is your recommended approach? | | | | | | ["Do not 3x budget overnight \u2014 increase by 20\u201330% increments every 2\u20133 days to avoid resetting learning phase", "Switch to or test Cost Cap at $45 to maintain CPA discipline at scale", "Duplicate top-performing ad sets to scale spend without disrupting existing ad set learning", "Monitor for CPA increase as scale increases \u2014 expect some efficiency loss", "Expand audiences (LAL, broader interests) to support higher spend without audience saturation", "Ensure pixel is firing correctly and conversion window is appropriate"] | Scaling too fast disrupts the learning phase. Gradual budget increases and audience expansion are critical to maintaining CPA efficiency while scaling. | ["scaling", "budget", "learning_phase", "cost_cap"] |
meta_013 | meta_ads | creative_performance | medium | mcq | According to Meta best practices, what percentage of ad spend should be allocated to creative testing? | 5% | 10–20% | 50% | Creative testing should have its own separate budget outside the main account | B | [] | Meta recommends allocating 10–20% of your total budget to creative testing to continuously find new winning creatives and prevent fatigue in your main campaigns. | ["creative_testing", "budget_allocation", "best_practices"] |
meta_014 | meta_ads | creative_performance | medium | mcq | What does a high 'Hook Rate' (3-second video views / impressions) indicate about a video ad? | The video has strong overall watch time | The opening 3 seconds are compelling enough to stop the scroll | The video is driving high click-through rates | The video has low production cost | B | [] | Hook Rate measures how effectively the first 3 seconds capture attention (stop the scroll). A high hook rate means the opening is compelling, though it doesn't guarantee full video completion or clicks. | ["video_ads", "hook_rate", "creative_metrics", "attention"] |
meta_015 | meta_ads | creative_performance | hard | action_based | Your Meta video ad has a Hook Rate of 45% (excellent) but a Hold Rate (video watches at 25% / 3-second views) of only 18% (poor). CPC is high and CTR is low. What does this indicate and what should you do? | | | | | | ["The opening 3 seconds are strong (high hook rate) but the middle content loses viewers quickly (low hold rate)", "The problem is the body/middle of the video, not the hook", "Action: Rework the content between 3\u201315 seconds to maintain interest and build on the hook", "Test a shorter video version that cuts to the value proposition faster", "Ensure the CTA and key message appear earlier in the video", "Consider if there is a mismatch between what the hook promises and what the video delivers"] | High hook rate + low hold rate = strong open, weak middle. The creative strategy should fix the narrative arc after the hook, not the opening. | ["video_ads", "hook_rate", "hold_rate", "creative_diagnosis"] |
meta_016 | meta_ads | measurement | medium | mcq | What is the primary purpose of Meta's Conversions API (CAPI)? | To replace the Meta Pixel entirely | To send server-side conversion events to Meta to improve signal quality in a privacy-safe way | To track offline store visits | To automate ad creative generation | B | [] | CAPI sends conversion data directly from your server to Meta, bypassing browser-based limitations (ad blockers, iOS restrictions) and improving the quality and completeness of conversion signals. | ["capi", "server_side", "measurement", "ios14", "privacy"] |
meta_017 | meta_ads | measurement | hard | mcq | After iOS 14.5, Meta recommends which measurement approach as the primary attribution method? | 7-day click, 1-day view attribution | 28-day click attribution | Click-through attribution only | Last-touch attribution via Google Analytics | A | [] | Post-iOS 14.5, Meta defaulted to 7-day click + 1-day view as its primary attribution window. The 28-day window was deprecated. This window balances signal availability with reporting accuracy. | ["ios14", "attribution", "measurement", "privacy"] |
meta_018 | meta_ads | measurement | hard | action_based | A client reports that Meta Ads Manager shows 150 purchases but Shopify shows only 80 purchases attributed to Meta. The client wants to reduce Meta budget because 'the numbers don't add up.' How do you explain this and what is your recommendation? | | | | | | ["Explain that Meta uses view-through and click attribution (7-day click + 1-day view by default), while Shopify uses last-click attribution", "View-through conversions count users who saw an ad but converted via another channel \u2014 Meta claims credit, Shopify attributes to the last click", "The true number likely lies between the two \u2014 neither is 'wrong', they measure different things", "Recommend running a Meta Conversion Lift Study to measure incrementality", "Consider using a third-party MMP or northstar metric approach to deconflict attribution", "Do not reduce budget based solely on this discrepancy \u2014 understand incrementality first"] | Attribution model differences between Meta and Shopify are the most common source of this discrepancy. Educating the client on this is critical before making budget decisions. | ["attribution", "shopify", "measurement_discrepancy", "incrementality"] |
meta_019 | meta_ads | advantage_plus | medium | mcq | What is Meta Advantage+ Shopping Campaigns (ASC) designed to do? | Automate catalog-based dynamic ads only | Use AI to automatically manage targeting, placements, and creative combinations for shopping advertisers | Replace lead generation campaigns | Run ads exclusively on Instagram Shopping | B | [] | ASC is a fully automated campaign type where Meta's AI handles targeting (prospecting + retargeting together), placements, and creative optimization — purpose-built for e-commerce/shopping advertisers. | ["advantage_plus_shopping", "asc", "automation", "ecommerce"] |
meta_020 | meta_ads | advantage_plus | hard | action_based | A fashion e-commerce brand is currently running manual prospecting and retargeting campaigns separately on Meta, achieving ROAS of 3.2. They want to test Advantage+ Shopping Campaigns. How would you structure the test? | | | | | | ["Run ASC as an A/B test against the existing manual setup \u2014 do not shut down existing campaigns immediately", "Give ASC sufficient budget (at least 20% of total Meta budget) and at least 2\u20134 weeks to exit learning phase", "Set existing customer budget cap in ASC (e.g., 20\u201330%) to control prospecting vs retargeting balance", "Upload all existing creatives (static, video, catalog) to ASC and let it optimize", "Compare on ROAS, CPA, and incremental revenue \u2014 not just in-platform metrics", "If ASC wins, gradually shift more budget to it while monitoring performance"] | Testing ASC requires proper A/B methodology — sufficient budget, time, and a clear success metric. Rushing the switch risks losing performance from the existing proven structure. | ["asc", "advantage_plus_shopping", "testing", "ecommerce"] |
meta_021 | meta_ads | campaign_structure | medium | mcq | What is the Meta learning phase and when is an ad set considered to have 'exited' it? | A 7-day period where Meta tests different audiences; exits after 7 days | The period where Meta's delivery system optimizes; exits after ~50 optimization events in a week | A 30-day warm-up for new accounts; exits after 30 days | A testing period that only affects new ad accounts | B | [] | The learning phase is when Meta's delivery algorithm is gathering data to optimize delivery. An ad set exits learning once it has recorded approximately 50 optimization events (e.g., purchases) within a 7-day period. | ["learning_phase", "optimization", "fundamentals"] |
meta_022 | meta_ads | campaign_structure | hard | mcq | Which of the following actions will reset the learning phase of an ad set? | Changing the ad copy within an existing ad | Increasing the budget by 15% | Changing the bid strategy from Lowest Cost to Cost Cap | Adding a new placement | C | [] | Significant changes that reset the learning phase include changing bid strategy, changing the optimization event, significantly changing the budget (>20–30%), changing audience, or changing creative. Changing bid strategy is a fundamental reset. | ["learning_phase", "reset", "bid_strategy"] |
meta_023 | meta_ads | measurement | medium | mcq | What does Event Match Quality (EMQ) score measure in Meta Events Manager? | How relevant your ad creative is to your target audience | How effectively your customer data matches Meta user profiles, improving attribution accuracy | The quality score of your conversion events vs. competitors | How many events you have fired in the last 30 days | B | [] | EMQ measures how well the customer information (email, phone, etc.) sent with your conversion events matches Meta user profiles. Higher EMQ = better attribution and optimization signal. | ["emq", "capi", "measurement", "signal_quality"] |
meta_024 | meta_ads | audience_targeting | medium | mcq | What is the recommended approach for audience exclusions in a Meta prospecting campaign? | No exclusions needed — Meta's algorithm handles audience overlap automatically | Exclude existing customers and recent website visitors (e.g., 30–180 day) to ensure true prospecting | Exclude all lookalike audiences to prevent overlap | Exclude all mobile users to reduce CPC | B | [] | Excluding existing customers and recent site visitors from prospecting campaigns ensures budget is spent acquiring truly new users, not re-engaging warm audiences that should be in a dedicated retargeting campaign. | ["exclusions", "prospecting", "audience", "best_practices"] |
meta_025 | meta_ads | creative_performance | easy | mcq | What Meta ad format is typically recommended for showcasing multiple products from a catalog? | Single image ad | Story ad | Dynamic Product Ads (DPA) with Catalog | Collection ad only | C | [] | Dynamic Product Ads (DPA) automatically pull products from your catalog and show personalized product recommendations to users based on their browsing and purchase behavior. | ["dpa", "catalog", "dynamic_ads", "ecommerce"] |
meta_026 | meta_ads | bidding_strategy | medium | mcq | You are running a Meta campaign optimized for purchases. Your daily budget is $500 but you are only spending $150/day. Which of the following is NOT a likely cause? | Cost Cap set too low | Audience too small | Ad set still in learning phase | Too many ads in the ad set (ad fatigue) | D | [] | Under-delivery is typically caused by overly restrictive targeting, a cost cap that limits auction participation, being in the learning phase, or narrow audiences. Having many ads (ad fatigue) affects CPC/CTR over time but is not a primary cause of acute under-delivery. | ["under_delivery", "budget", "troubleshooting"] |
meta_027 | meta_ads | measurement | hard | action_based | A performance marketer notices that after implementing Conversions API (CAPI) alongside the Meta Pixel, reported purchase events doubled overnight. There are no new campaigns. What is happening and what should be done? | | | | | | ["This is an event deduplication issue \u2014 both Pixel and CAPI are firing the same purchase event, causing duplicate reporting", "Meta requires deduplication parameters: eventID must be identical between Pixel and CAPI events for the same purchase", "Check that eventID (or event_id) is being passed correctly and consistently in both Pixel and CAPI payloads", "Review Events Manager for duplication warnings", "Fix deduplication logic and verify in Events Manager that deduplication is working", "Do not interpret the doubled events as real performance improvement"] | CAPI + Pixel redundancy requires proper deduplication. Without matching event IDs, Meta counts both the browser event and the server event as separate conversions. | ["capi", "deduplication", "pixel", "measurement", "troubleshooting"] |
meta_028 | meta_ads | audience_targeting | hard | mcq | A brand targeting 'health-conscious adults 25–45 in the US' finds that Advantage+ Audience significantly outperforms a manually defined interest-based audience at the same budget. What does this most likely indicate? | The interest-based targeting was wrong about who the customers are | Meta's algorithm has identified high-converting users outside the assumed demographic/interest constraints | The Advantage+ audience spent more budget, inflating results | Interest targeting no longer works on Meta post-iOS 14 | B | [] | Advantage+ Audience often outperforms manual targeting because Meta's algorithm finds converting users that don't match the marketer's assumed profile. This is a common and valuable insight — the algorithm's view of your customer may be broader or different than manual assumptions. | ["advantage_plus", "broad_targeting", "algorithm", "audience_insights"] |
meta_029 | meta_ads | creative_performance | medium | mcq | What is the recommended aspect ratio for Meta Reels ads to maximize screen real estate? | 1:1 (Square) | 16:9 (Landscape) | 9:16 (Vertical/Portrait) | 4:5 (Portrait) | C | [] | 9:16 vertical format fills the entire phone screen in Reels and Stories, maximizing immersion and ad visibility. It is the native format for these placements. | ["creative", "reels", "aspect_ratio", "placements"] |
meta_030 | meta_ads | measurement | hard | mcq | What is a Meta Conversion Lift Study and when should you use it? | A tool to measure creative quality; use it when CTR is low | A randomized controlled experiment that measures the incremental impact of your ads vs. a holdout group; use it to validate true ad-driven revenue | A report in Ads Manager showing conversion trends over time | A feature to track offline-to-online attribution | B | [] | Conversion Lift Studies use a randomized holdout (people who don't see your ads) to measure true incrementality — how many conversions happened because of your ads, not organic or other channels. | ["lift_study", "incrementality", "measurement", "holdout"] |
google_001 | google_ads | campaign_structure | easy | mcq | What is the correct hierarchy in a Google Ads account? | Campaign → Ad Group → Ad + Keywords | Ad Group → Campaign → Ad | Campaign → Ad → Ad Group | Account → Ad → Campaign → Ad Group | A | [] | Google Ads: Campaign (budget, bidding, network) → Ad Group (keywords, targeting) → Ads (creative) + Keywords (triggers). | ["campaign_structure", "fundamentals"] |
google_002 | google_ads | keyword_strategy | easy | mcq | Which Google Ads keyword match type gives the most control over which searches trigger your ad? | Broad Match | Phrase Match | Exact Match | Broad Match Modifier (deprecated) | C | [] | Exact Match [keyword] only triggers ads for searches that mean the same as your keyword (with close variants). It offers the tightest control over traffic quality. | ["keyword_match_types", "exact_match", "fundamentals"] |
google_003 | google_ads | keyword_strategy | medium | mcq | You are running a brand campaign with exact match keywords. Which of the following should you add to prevent wasted spend? | More broad match keywords | Negative keywords for competitor brand names and irrelevant queries | Dynamic Search Ads | Increase bids on all keywords | B | [] | Negative keywords prevent your ads from showing on irrelevant searches, protecting budget and improving click quality. Brand campaigns especially benefit from excluding competitor names and generic terms. | ["negative_keywords", "brand_campaign", "keyword_strategy"] |
google_004 | google_ads | quality_score | medium | mcq | Quality Score in Google Ads is determined by which three components? | Bid amount, ad copy, landing page load speed | Expected CTR, Ad Relevance, Landing Page Experience | Impression share, keyword bids, account history | Search volume, competition, ad rank | B | [] | Quality Score (1–10) is composed of: Expected CTR (likelihood your ad gets clicked), Ad Relevance (how closely your ad matches search intent), and Landing Page Experience (relevance and usability of the destination). | ["quality_score", "ad_rank", "fundamentals"] |
google_005 | google_ads | quality_score | hard | action_based | A Google Search campaign has average Quality Scores of 4/10 across core keywords. CPC is 40% higher than industry benchmarks. What diagnostic steps and actions would you take? | | | | | | ["Audit Expected CTR: compare actual CTR to expected for each keyword; improve ad copy headlines to be more relevant to the keyword", "Audit Ad Relevance: ensure keyword appears in headline 1 and description; tighten ad groups by creating Single Keyword Ad Groups (SKAGs) or tightly themed groups", "Audit Landing Page Experience: ensure landing page content matches keyword intent, loads fast (<3s), is mobile-friendly, and has clear relevance to the ad", "Use Responsive Search Ads (RSAs) with keyword insertion to improve relevance signals", "Review Search Terms Report and add negative keywords to improve traffic quality and CTR", "Run Ad Strength improvements for RSAs"] | Low Quality Score = high CPCs. The three levers are CTR (creative), relevance (structure + copy), and landing page (UX/content). Address all three systematically. | ["quality_score", "cpc", "optimization", "ad_relevance"] |
google_006 | google_ads | smart_bidding | easy | mcq | Which Google Smart Bidding strategy should you use when you want to maximize conversions within a fixed daily budget? | Target ROAS | Target CPA | Maximize Conversions | Enhanced CPC (ECPC) | C | [] | Maximize Conversions tells Google to get as many conversions as possible within your set budget, without a CPA target constraint. Ideal when you want volume and trust Google's algorithm. | ["smart_bidding", "maximize_conversions", "budget"] |
google_007 | google_ads | smart_bidding | medium | mcq | How many conversions per month does Google recommend having before switching to Target CPA bidding? | 5 | 15 | 30–50 | 100 | C | [] | Google recommends at least 30–50 conversions per month (at the campaign level) before using Target CPA, so the algorithm has sufficient data to optimize effectively. | ["target_cpa", "smart_bidding", "data_requirements", "conversion_volume"] |
google_008 | google_ads | smart_bidding | hard | mcq | A Target ROAS campaign is set to 400% but is spending only 30% of its daily budget. The actual ROAS over the last 30 days is 280%. What is the best course of action? | Increase the daily budget to force more spend | Lower the Target ROAS to be closer to historical performance (e.g., 300%) to unlock more traffic | Switch to manual CPC immediately | Pause the campaign for 2 weeks and restart | B | [] | A Target ROAS set too high vs. actual historical ROAS restricts auction participation, causing under-delivery. Lowering tROAS closer to achievable levels (gradual steps) allows more spend and traffic while maintaining efficiency goals. | ["target_roas", "under_delivery", "smart_bidding", "troubleshooting"] |
google_009 | google_ads | smart_bidding | hard | action_based | You are migrating a manual CPC campaign (running for 2 years, strong performance) to Target CPA smart bidding. Describe your migration approach to minimize performance risk. | | | | | | ["Use 'Maximize Conversions' first for 2\u20134 weeks to allow the algorithm to gather signal before applying a CPA constraint", "Set Target CPA at 20\u201330% above current actual CPA to give the algorithm room to operate", "Do not make other major changes during the transition (budget, audiences, keywords)", "Use the 'Bid Strategy Report' to monitor performance during transition", "Evaluate performance over a 4\u20136 week period, not just the first few days", "Gradually tighten Target CPA toward your goal over time once stable"] | Abrupt switches to Smart Bidding risk performance drops. A phased approach (Maximize Conversions → Target CPA with a relaxed target) reduces risk and gives the algorithm time to learn. | ["smart_bidding_migration", "target_cpa", "transition", "risk_management"] |
google_010 | google_ads | performance_max | medium | mcq | What does Performance Max (PMax) campaign type do? | Runs ads only on Google Search network | Uses AI to serve ads across all Google channels (Search, Display, YouTube, Shopping, Gmail, Maps) from a single campaign | Replaces Smart Shopping campaigns for physical store visits only | Optimizes exclusively for ROAS on Google Shopping | B | [] | Performance Max is Google's fully automated, cross-channel campaign type that serves across all Google inventory from one campaign, using AI to optimize toward your conversion goals. | ["performance_max", "pmax", "automation", "cross_channel"] |
google_011 | google_ads | performance_max | hard | action_based | A retail brand running Performance Max campaigns wants to understand where their budget is actually being spent across channels. PMax is a black box. What strategies can they use to gain more visibility and control? | | | | | | ["Use the 'Asset Group Report' and 'Insights' tab in PMax to see audience segments and search themes", "Use Search Terms Report (now available for PMax) to see what searches are triggering ads", "Set up 'Brand Safety' exclusions and negative keyword lists at account level (applied to PMax)", "Use campaign-level audience signals to guide (not restrict) where the algorithm focuses", "Separate brand vs. non-brand by excluding brand keywords from PMax and running a separate brand search campaign", "Review 'Explanations' in the interface when performance fluctuates", "Use asset performance labels (Best, Good, Low) to iterate on creative"] | PMax provides limited native visibility but there are several tactics to improve transparency and control without breaking automation. | ["performance_max", "transparency", "control", "brand_search", "reporting"] |
google_012 | google_ads | attribution | medium | mcq | What is the default attribution model in Google Ads (as of 2023)? | Last Click | First Click | Linear | Data-Driven Attribution (DDA) | D | [] | Google switched the default attribution model to Data-Driven Attribution (DDA), which uses machine learning to distribute conversion credit based on actual contribution of each touchpoint in the customer journey. | ["attribution", "data_driven", "dda", "measurement"] |
google_013 | google_ads | attribution | hard | mcq | A brand bidding on competitor keywords is getting high impressions and clicks but 0 conversions over 30 days. What should they do? | Increase bids on competitor keywords to improve position | Pause all competitor keywords and reallocate budget to branded and non-branded intent keywords | Analyze search terms, review landing page relevance, and consider whether competitor traffic has meaningful conversion intent before deciding to pause | Switch to broad match on competitor keywords | C | [] | Competitor keywords often have lower conversion intent. Before pausing, analyze: Are you landing them on a relevant comparison page? Is your offer competitive? Is the landing page optimized for this traffic? Only pause after a thorough review. | ["competitor_keywords", "conversion_rate", "diagnosis"] |
google_014 | google_ads | search_campaigns | medium | mcq | What is 'Impression Share Lost to Budget' in Google Ads? | The percentage of impressions lost because your ad quality is too low | The percentage of impressions your ads missed because your daily budget was exhausted | The share of impressions won by competitors | The number of impressions lost due to ad disapprovals | B | [] | Search Impression Share Lost to Budget tells you what % of eligible impressions you missed due to budget running out. If this is high, increasing budget should drive more traffic (assuming the existing CPA is acceptable). | ["impression_share", "budget", "search", "scaling"] |
google_015 | google_ads | search_campaigns | hard | action_based | A Google Search campaign has 45% Search Impression Share Lost to Rank and 5% Lost to Budget. CPA is within target. What does this tell you and what actions do you take? | | | | | | ["45% lost to rank means many eligible searches are not showing your ad because Ad Rank is too low", "Ad Rank is determined by: bid \u00d7 Quality Score \u00d7 expected impact of extensions", "Action 1: Improve Quality Score \u2014 audit and improve Expected CTR, Ad Relevance, Landing Page Experience", "Action 2: Review bids \u2014 if Quality Score is already high, bid increases can improve rank", "Action 3: Audit ad extensions (sitelinks, callouts, structured snippets) \u2014 they directly impact Ad Rank", "Budget is not the constraint here (only 5% lost to budget), so increasing budget would not help", "Focus optimization efforts on rank factors, not budget"] | IS Lost to Rank means your ads are losing auctions on quality/bid factors, not budget. Improving Quality Score and extensions is more efficient than increasing bids. | ["impression_share", "ad_rank", "quality_score", "optimization"] |
google_016 | google_ads | search_campaigns | medium | mcq | What is the purpose of Responsive Search Ads (RSAs) in Google Ads? | To show ads that automatically resize for mobile screens | To allow Google to test combinations of up to 15 headlines and 4 descriptions to find the best performing combinations | To serve dynamic ads based on website content | To replace all manual ads with AI-generated copy | B | [] | RSAs allow you to provide up to 15 headlines and 4 descriptions. Google's algorithm tests different combinations to find the best-performing assembly for each query and user context. | ["rsa", "responsive_search_ads", "creative", "testing"] |
google_017 | google_ads | keyword_strategy | hard | action_based | A software company's Google Search campaign is getting high spend on broad match keywords with many irrelevant search queries showing in the Search Terms Report. ROAS is below target. What is your systematic approach to fix this? | | | | | | ["Download Search Terms Report and analyze irrelevant queries \u2014 add them as negative keywords immediately", "Shift budget/bids toward exact and phrase match variants of high-converting terms", "If using broad match, ensure Smart Bidding is active (broad match works best with Smart Bidding signals)", "Create a negative keyword list and apply it at campaign/account level", "Group tightly themed keywords into separate ad groups for better relevance", "Review and tighten audience signals if using Performance Max or broad match with audiences", "Set up ongoing Search Terms monitoring (weekly review)"] | Broad match without Smart Bidding and negative keywords is a budget drain. The fix is systematic negative keyword management + match type tightening. | ["broad_match", "negative_keywords", "search_terms", "roas"] |
google_018 | google_ads | smart_bidding | medium | mcq | What is a 'Seasonality Adjustment' in Google Smart Bidding and when should you use it? | A setting to pause campaigns during holidays | A tool to tell Google's algorithm to expect a temporary change in conversion rate during a known short event (1–7 days), helping it adjust bids proactively | An automated budget increase during Q4 | A bid modifier applied to specific days of the week | B | [] | Seasonality Adjustments signal to Smart Bidding that you expect a short-term conversion rate change (e.g., a sale or flash event). Google uses this to adjust bids before and during the event, reducing the algorithm's lag time. | ["seasonality_adjustment", "smart_bidding", "promotions", "events"] |
google_019 | google_ads | performance_max | medium | mcq | What are 'Asset Groups' in Performance Max campaigns? | Sets of keywords grouped by theme | Collections of creative assets (images, videos, headlines, descriptions) that Google combines to serve ads across channels | Budget groups that allocate spend by channel | Audience segments targeted within a PMax campaign | B | [] | Asset Groups are the core unit of PMax campaigns — collections of text, image, and video assets that Google's AI assembles into ads across Search, Display, YouTube, Gmail, and Maps. | ["performance_max", "asset_groups", "creative"] |
google_020 | google_ads | attribution | hard | action_based | A Google Ads account is using Last Click attribution. The brand runs Search, Display, and YouTube campaigns. The CFO wants to cut Display and YouTube because they show poor ROAS on last-click. What is your response and recommendation? | | | | | | ["Last-click attribution systematically under-credits upper-funnel channels (Display, YouTube) that assist conversions", "Display and YouTube likely contribute significantly to awareness and consideration \u2014 converting users may have had a display/YouTube touchpoint before the converting search click", "Recommend switching to Data-Driven Attribution (DDA) to get a more accurate view of each channel's contribution", "Run a Google Ads Attribution report to see 'Assisted Conversions' for Display and YouTube", "Run YouTube/Display Lift Studies to measure incrementality", "Present the full customer journey data before making budget cuts", "Cutting upper-funnel may reduce future search demand over time"] | Last-click attribution systematically biases budget decisions against upper-funnel channels. DDA and assisted conversion analysis are the right tools to defend upper-funnel spend. | ["attribution", "last_click", "dda", "upper_funnel", "youtube", "display"] |
google_021 | google_ads | search_campaigns | medium | mcq | What is Ad Rank and how is it calculated? | Ad Rank = Bid × CTR | Ad Rank = Max CPC Bid × Quality Score × Expected impact of ad extensions and formats | Ad Rank = Quality Score ÷ Competitor bids | Ad Rank = Daily budget × conversion rate | B | [] | Ad Rank determines ad position and CPC. It factors in your bid, Quality Score (CTR, relevance, landing page), and the expected impact of ad extensions — meaning higher quality ads can win better positions at lower CPCs. | ["ad_rank", "quality_score", "auction", "fundamentals"] |
google_022 | google_ads | campaign_structure | medium | mcq | When should you use a separate brand campaign in Google Ads vs. including brand keywords in a non-brand campaign? | Always combine brand and non-brand keywords in one campaign for efficiency | Separate brand into its own campaign to control budget, bidding, and messaging independently — brand terms often have high QS and low CPC but very different ROAS than non-brand | Only separate brand campaigns for companies with over $1M in ad spend | Brand campaigns are only needed when competitors bid on your name | B | [] | Brand campaigns have fundamentally different economics (low CPC, high CTR, high ROAS) and strategic purpose. Separating them gives precise budget control and prevents high-volume brand traffic from skewing non-brand performance data. | ["brand_campaign", "campaign_structure", "brand_vs_nonbrand"] |
google_023 | google_ads | smart_bidding | hard | mcq | A retailer's Target ROAS campaign achieved 420% ROAS (above the 400% target) but spent only 60% of the monthly budget. What is the trade-off and how do you optimize? | The campaign is performing perfectly — no action needed | High ROAS but under-delivery indicates the tROAS target is too restrictive. Lower tROAS target slightly to unlock more auction participation and volume | Increase daily budget to force more spend | Switch to manual CPC to control delivery | B | [] | Over-achieving tROAS while under-spending signals the algorithm is being too selective. There may be incremental revenue available at slightly lower (but still profitable) ROAS. Lower the tROAS target incrementally to scale volume while maintaining profitability. | ["target_roas", "efficiency_vs_volume", "scaling", "smart_bidding"] |
google_024 | google_ads | keyword_strategy | medium | mcq | What is the recommended approach for structuring keywords when using Smart Bidding in Google Ads? | Use only exact match keywords with manual CPC for maximum control | Use a mix of broad match and exact/phrase match with Smart Bidding — broad match provides signal breadth, Smart Bidding handles bid optimization | Use only phrase match keywords to balance control and reach | Avoid keywords entirely and use Dynamic Search Ads | B | [] | Google's own guidance is that broad match + Smart Bidding is a powerful combination — broad match finds new relevant queries, while Smart Bidding optimizes bids based on conversion signals. This requires strong negative keyword management. | ["broad_match", "smart_bidding", "keyword_strategy", "best_practices"] |
google_025 | google_ads | search_campaigns | hard | action_based | A Google Ads account you've inherited has 500+ keywords spread across 50 ad groups in one campaign, all using manual CPC, no extensions, and a Quality Score averaging 4/10. Monthly spend is $50K with CPA 80% above target. Create a prioritized 30-day action plan. | | | | | | ["Week 1: Audit \u2014 Download Search Terms Report, keyword QS breakdown, and identify top 20% of keywords driving 80% of conversions (Pareto principle)", "Week 1: Immediate wins \u2014 Add negative keywords from irrelevant search terms, pause zero-conversion keywords with significant spend", "Week 2: Restructure \u2014 Consolidate to tightly themed ad groups (5\u201320 keywords each); create new RSA ads with keyword in headline 1", "Week 2: Add all ad extensions (sitelinks, callouts, structured snippets, call extensions)", "Week 3: Switch to Smart Bidding \u2014 Start with 'Maximize Conversions' for 2 weeks to gather signal", "Week 3\u20134: Landing page audit \u2014 ensure pages match keyword intent, improve page speed", "Week 4: Transition to Target CPA (at 20% above current CPA) once Smart Bidding data is collected", "Ongoing: Weekly Search Terms review and negative keyword additions"] | Account audits require a systematic, prioritized approach. Quick wins (negatives, pausing waste) fund improvements. Structure fixes improve QS and lower CPCs. | ["account_audit", "action_plan", "quality_score", "restructure", "smart_bidding"] |
google_026 | google_ads | performance_max | hard | mcq | How should you prevent Performance Max from cannibalizing traffic from your existing Search campaigns? | It is impossible — PMax always takes priority over all campaigns | Run PMax and Search in separate accounts | Exclude brand keywords from PMax using account-level negative keywords, and ensure Search campaigns use exact/phrase match on key terms — Search campaigns take priority over PMax for matching queries | Set PMax budget to 0% of total and only use it for Display | C | [] | Google's policy: if a Search campaign has an exact-match keyword that matches a query, it takes priority over PMax. Brand exclusions via negative keyword lists and strong match type discipline on Search protect against PMax cannibalization. | ["performance_max", "cannibalization", "search_priority", "brand"] |
google_027 | google_ads | attribution | medium | mcq | What is the Google Ads conversion window and why does it matter? | The time window during which your ads are eligible to show | The maximum number of conversions counted per click | The period after an ad click or view during which a conversion is counted and attributed to that ad interaction | The reporting delay time for conversion data | C | [] | The conversion window (default 30 days for clicks) determines how long after an ad interaction a conversion gets credited to that campaign. Longer sales cycles (e.g., B2B SaaS) benefit from longer windows (up to 90 days). | ["conversion_window", "attribution", "measurement"] |
google_028 | google_ads | search_campaigns | medium | mcq | A Google Ads search campaign for a B2B SaaS company shows high conversion volume but the sales team reports low lead quality. What is the most likely fix? | Increase budget to get more leads | Switch to a pure ROAS goal | Implement lead scoring and pass qualified lead value back to Google Ads as conversion events, then optimize for high-value conversions instead of all leads | Reduce keyword bids to get cheaper leads | C | [] | Optimizing for all form submissions equally trains Smart Bidding to maximize low-quality leads. Passing qualified lead or MQL/SQL values back into Google Ads (via offline conversion imports or Enhanced Conversions for Leads) lets the algorithm optimize for valuable leads. | ["lead_quality", "conversion_value", "saas", "smart_bidding", "lead_scoring"] |
google_029 | google_ads | smart_bidding | easy | mcq | What is Enhanced CPC (ECPC) in Google Ads? | A fully automated bidding strategy | A manual CPC modifier that automatically adjusts your manual bids up or down based on conversion likelihood | A budget optimization tool | A bid strategy only for Display campaigns | B | [] | ECPC is a semi-automated strategy: you set manual bids, and Google adjusts them up or down based on the likelihood of conversion for each auction. It's a stepping stone between manual and fully automated bidding. | ["ecpc", "enhanced_cpc", "smart_bidding", "manual_cpc"] |
google_030 | google_ads | performance_max | medium | action_based | An e-commerce brand wants to launch Performance Max for their Google Ads for the first time. They have a catalog of 500 products. What key setup steps do you recommend to maximize PMax performance from launch? | | | | | | ["Feed quality: Ensure Google Merchant Center product feed is complete, accurate, and has high-quality images and titles", "Asset Groups: Create separate asset groups by product category with relevant headlines, descriptions, images, and videos", "Audience Signals: Upload customer match lists, website visitor lists, and high-value customer segments as audience signals", "Conversion Tracking: Ensure purchase conversion tracking is working correctly with accurate revenue values", "Budget: Start with enough budget to exit the learning phase quickly (recommended: enough for 50+ conversions in the first month)", "Brand exclusions: Exclude brand search terms from PMax and run a separate brand Search campaign", "URL expansion: Review URL expansion settings and exclude URLs that should not receive traffic (e.g., blog, careers)"] | PMax performance is heavily dependent on feed quality, conversion signal quality, and strong audience signals at launch. | ["performance_max", "setup", "merchant_center", "launch", "audience_signals"] |
ct_001 | critical_thinking | data_interpretation | medium | mcq | A campaign's CPC increased 35% MoM while CTR decreased 20% and impressions stayed flat. Conversion rate and CPA remained the same. What is the most accurate interpretation? | The campaign is becoming less efficient and should be paused | Increased CPC and lower CTR suggest higher auction competition; stable conversion rate means qualified visitors still convert well — monitor but no immediate alarm | Creative fatigue is causing the CPC increase | The budget is too low | B | [] | A CPC increase with flat impressions and a lower CTR typically indicates increased competition (others bidding more for the same auctions). The fact that CPA is stable means the people who do click are still converting — the core funnel is healthy. This warrants monitoring and possibly a competitive audit, not panic. | ["cpc", "ctr", "competition", "data_interpretation"] |
ct_002 | critical_thinking | data_interpretation | hard | action_based | You are given the following 30-day performance data for a DTC brand: Meta: Spend $50K, Revenue $175K, ROAS 3.5x, New Customer %: 45%; Google Brand: Spend $5K, Revenue $80K, ROAS 16x, New Customer %: 10%; Google Non-Brand: Spend $15K, Revenue $45K, ROAS 3.0x, New Customer %: 70%; Total: Spend $70K, Revenue $300K, Blended ROAS 4.3x. The CFO wants to cut Meta and double Google Brand budget to improve blended ROAS. Analyze and respond. | | | | | | ["Google Brand's 16x ROAS is misleading \u2014 these users already know the brand and would likely convert organically; it captures demand, not creates it", "Meta is the primary demand-creation channel (45% new customers at $50K spend) \u2014 cutting it would reduce top-of-funnel and eventually hurt Google Brand search volume", "Google Non-Brand is genuinely strong (70% new customers, 3.0x ROAS) \u2014 this is the channel to invest in alongside Meta", "Doubling Google Brand budget would likely have diminishing returns quickly (limited branded search volume)", "Recommend: maintain or grow Meta (demand creation), grow Google Non-Brand (high-intent acquisition), and keep Google Brand at a controlled floor", "Recommend running incrementality tests (Meta Lift Study) before cutting Meta to quantify true contribution", "Present the CFO with a full-funnel view, not just ROAS by channel"] | Brand search ROAS is a vanity metric. Cutting demand-creation channels to improve blended ROAS is a classic mistake that erodes future growth. | ["roas", "attribution", "full_funnel", "budget_strategy", "brand_vs_nonbrand", "incrementality"] |
ct_003 | critical_thinking | budget_allocation | hard | action_based | A DTC brand has a $100K/month media budget. They currently split it: 70% Meta, 30% Google. Both channels are hitting CPA targets. They want to scale revenue by 30% in Q3. How would you approach the budget allocation decision? | | | | | | ["Analyze marginal efficiency \u2014 which channel can absorb more spend before CPA degrades? Look at Impression Share Lost to Budget (Google) and frequency/reach saturation (Meta)", "Google: if IS Lost to Budget is high and CPA is on-target, there is clearly more budget-efficient inventory available \u2014 increase Google first", "Meta: assess frequency and audience size. If frequency is low (<3) and reach is not saturated, Meta can scale", "Consider channel synergy: Google often captures demand created by Meta \u2014 reducing Meta to fund more Google may backfire", "Run budget simulator tools (Google's bid simulator, Meta's Reach & Frequency planner) to model expected volume at higher budgets", "Consider new channels (TikTok, YouTube) for incremental reach if primary channels are saturating", "Scale in increments (15\u201320% at a time) and measure CPA impact at each step"] | Scaling requires understanding marginal efficiency, saturation signals, and channel interplay — not just doubling what works. | ["budget_allocation", "scaling", "channel_mix", "marginal_efficiency"] |
ct_004 | critical_thinking | data_interpretation | medium | mcq | A campaign shows a 50% drop in conversions week-over-week with no changes to ads, budget, or targeting. What should you check FIRST? | Refresh all ad creatives immediately | Check conversion tracking — verify the pixel/tag is firing correctly and no site changes broke tracking | Increase budget by 50% to recover conversion volume | Switch bid strategy to Maximize Conversions | B | [] | A sudden drop with no campaign changes almost always points to a tracking issue first. A site update, checkout flow change, or pixel breakage would cause exactly this pattern. Verify tracking before diagnosing campaign issues. | ["tracking", "conversion_drop", "troubleshooting", "diagnosis"] |
ct_005 | critical_thinking | data_interpretation | hard | mcq | You run a Meta campaign with 3 ad sets targeting different audiences. Ad Set A shows ROAS 6.0x, Ad Set B shows ROAS 2.5x, Ad Set C shows ROAS 1.8x over 30 days. What is the risk of simply pausing B and C and putting all budget into A? | No risk — you should always put budget into the best performer | Ad Set A may be retargeting a warm audience (inflating ROAS), while B and C may be driving new customer acquisition. Cutting B and C may starve A of future customers. | Ad Set A will immediately start underperforming once budget is increased | You will get flagged by Meta for account consolidation violations | B | [] | High ROAS ad sets are often retargeting warm audiences (existing site visitors, engagers) who would have converted anyway. Mid-ROAS ad sets often drive new customer acquisition. A full-funnel view with new customer % is needed before making this decision. | ["roas", "new_customers", "full_funnel", "retargeting", "prospecting", "budget_trap"] |
ct_006 | critical_thinking | competitive_analysis | medium | action_based | A competitor has dramatically increased their Google Ads spend this month (visible in Auction Insights report — their impression share jumped from 20% to 55%). Your CPCs have risen 25% and ROAS has declined. What is your response strategy? | | | | | | ["Do not panic-bid into a CPC war \u2014 this rarely wins and drains budget", "Analyze: Which keywords are most affected? Use Auction Insights by keyword/ad group to identify where the competitive pressure is highest", "Protect high-value terms: ensure you maintain strong impression share on your most converting, highest-revenue keywords", "Consider non-keyword differentiation: improve Quality Score (reduces CPC without higher bids), add extensions, improve landing pages", "Identify keywords where you can cede ground without major revenue impact and reallocate budget to defended terms", "Run a competitor ad analysis (Meta Ad Library, Google Ads Transparency) to understand their messaging", "Consider whether the CPA increase is still within target \u2014 if so, hold position; if CPA is unsustainable, reassess budget allocation"] | Competitive responses require surgical precision, not emotional bidding wars. Quality Score improvement is the most cost-effective defense. | ["competitive_analysis", "auction_insights", "cpc", "defense_strategy"] |
ct_007 | critical_thinking | budget_allocation | hard | action_based | It's October 15th. A DTC brand has $200K remaining in their annual marketing budget and Q4 (Black Friday, Cyber Monday, Christmas) ahead. How do you structure the budget allocation? | | | | | | ["Allocate the majority (60\u201370%) to the Nov 1 \u2013 Dec 15 peak window (BFCM + pre-Christmas)", "Reserve 10\u201315% for early October warm-up (build retargeting pools, test creative) before CPMs spike", "Reserve 15\u201320% for post-BFCM (Dec 1\u201315) when CPMs drop and intent remains high", "Identify which channel (Meta vs Google) has historically performed better in Q4 for this brand", "Pre-build audiences: start collecting retargeting pools (website visitors, video viewers) early so BFCM campaigns have warm audiences", "BFCM CPMs are highest industry-wide \u2014 focus on efficiency: use existing customer data, LAL, and high-intent search terms", "Set up Seasonality Adjustments in Google Smart Bidding for BFCM dates", "Have creative ready 2 weeks before launch \u2014 do not leave ad creation until BFCM itself"] | Q4 budget allocation requires advance planning: building audiences early, reserving budget for peak windows, and being campaign-ready before CPMs spike. | ["q4", "bfcm", "budget_planning", "seasonality", "holiday"] |
ct_008 | critical_thinking | data_interpretation | medium | mcq | A Meta campaign's CPM increased 60% YoY during Q4, but ROAS held steady. What does this indicate? | The campaign targeting is broken and targeting wrong audiences | Q4 auction competition drove up CPMs industry-wide, but the campaign's creative quality and audience relevance maintained conversion efficiency | The campaign is wasting budget on irrelevant impressions | Meta's algorithm changed and is no longer working correctly | B | [] | Q4 CPM inflation is normal across all advertisers due to high demand. Maintaining ROAS despite higher CPMs indicates strong creative, audience quality, and conversion rate — a healthy outcome. | ["cpm", "q4", "roas", "seasonality", "data_interpretation"] |
ct_009 | critical_thinking | data_interpretation | hard | action_based | You have two Meta ad variants running in the same ad set for 14 days: Ad A: Impressions 50K, CTR 2.1%, CPC $1.20, Conversions 45, CPA $33, ROAS 4.2x; Ad B: Impressions 18K, CTR 3.8%, CPC $0.95, Conversions 12, CPA $38, ROAS 3.6x. Meta is allocating 73% of budget to Ad A. Should you override Meta and pause Ad A to give Ad B more budget? Analyze. | | | | | | ["Do not prematurely conclude Ad B is better based on CTR alone \u2014 CPA and ROAS are what matter for performance marketing", "Ad A has a lower CPA ($33 vs $38) and higher ROAS (4.2x vs 3.6x) \u2014 it is the better performing ad on the key metrics", "Ad B has better CTR and CPC but converts less efficiently \u2014 it may attract clicks but not buyers", "Meta's allocation to Ad A is algorithmically justified based on conversion data", "Ad B may need more time/impressions (at least 50 conversions) before a statistically valid conclusion can be drawn", "Recommendation: Let Meta continue optimizing. Give Ad B more time before pausing. Do not override the algorithm based on CTR/CPC alone.", "If Ad B does not improve CPA after 30+ conversions, consider pausing it"] | CTR and CPC are vanity metrics in conversion campaigns. CPA and ROAS are what matter. Meta's algorithm is often right when allocating based on conversion data. | ["creative_testing", "statistical_significance", "ctr_vs_cpa", "algorithm_trust"] |
ct_010 | critical_thinking | budget_allocation | medium | mcq | A performance marketer notices their Google non-brand CPA is $45 (within target) but Meta CPA is $80 (above the $60 target). They propose cutting Meta entirely. What is wrong with this reasoning? | Nothing — you should always cut channels that miss CPA targets | Meta's CPA on last-click may be inflated due to view-through and cross-channel journeys; more importantly, cutting Meta may reduce Google search demand over time as brand awareness drops | Meta CPA should never be compared to Google CPA | The target CPA of $60 on Meta is unrealistic and should be raised to $100 | B | [] | Meta's attributed CPA is often higher than Google's because Meta creates demand (upper funnel) while Google captures it. The two channels are interdependent. Cutting Meta often leads to declining Google non-brand search volume within 60–90 days. | ["channel_interdependency", "upper_funnel", "meta_vs_google", "cpa"] |
ct_011 | critical_thinking | data_interpretation | medium | mcq | What does a low Conversion Rate (CVR) combined with a low CPC and high CTR most likely indicate? | The ad creative is the main problem | The ad is attracting clicks but the landing page is failing to convert — CVR is a post-click metric | The targeting is too broad, attracting irrelevant traffic | The campaign budget is too low | B | [] | High CTR + Low CVR means people are interested enough to click (ad is working) but something is breaking post-click. This points to landing page issues: relevance mismatch, poor UX, slow load time, weak offer, or wrong CTA. | ["conversion_rate", "landing_page", "ctr", "funnel_analysis"] |
ct_012 | critical_thinking | competitive_analysis | medium | mcq | A brand's Facebook Ad Library shows a competitor running the same ad creative for 6+ months continuously. What does this typically indicate? | The competitor forgot to update their ads | The creative is likely a proven 'evergreen' performer — long-running ads on Meta are almost always profitable | The competitor has a small creative team and can't make new ads | Meta is not delivering those ads anymore | B | [] | On Meta, advertisers pay to run ads. Running the same creative for 6+ months almost always means it is profitable and outperforming newer variations. This is a valuable competitive signal — study the messaging, offer, and format. | ["competitive_analysis", "facebook_ad_library", "creative_intelligence", "evergreen"] |
ct_013 | critical_thinking | data_interpretation | hard | action_based | A subscription box company runs Meta and Google campaigns. Monthly reports show:
- Total ad spend: $80K
- Total revenue attributed: $400K (5x blended ROAS)
- Actual subscription revenue growth: flat
- New subscriber count: declining 5% MoM
What is the diagnosis and what actions do you recommend? | ["The high blended ROAS (5x) with flat/declining new subscribers suggests most revenue is being credited to existing customer repurchases, not new acquisition", "Channels are likely over-attributing existing customer revenue (subscription renewals) that would have happened without ads", "Diagnosis: spending heavily to 're-acquire' or re-engage existing subscribers rather than finding new ones", "Action: segment reporting to separate new subscriber CAC from existing customer ROAS", "Action: ensure retargeting/existing customer exclusions are in place on prospecting campaigns", "Action: optimize toward new subscriber as the conversion event, not total revenue", "Action: run incrementality tests to understand true ad-driven new subscriber volume", "Action: review LTV-to-CAC ratio, not just ROAS"] | High platform ROAS + flat business growth = attribution problem. The metrics are lying. New customer acquisition is the real KPI for subscription businesses. | ["subscriptions", "new_customers", "roas_vs_growth", "attribution", "cac", "ltv"] | |||||
ct_014 | critical_thinking | budget_allocation | medium | mcq | What is the key difference between ROAS (Return on Ad Spend) and MER (Marketing Efficiency Ratio)? | They are the same metric with different names | ROAS measures channel-level attributed revenue per ad dollar; MER measures total business revenue divided by total marketing spend — a blended, attribution-model-independent view | MER is only used for offline marketing; ROAS is for digital | ROAS includes all business costs; MER only includes ad spend | B | [] | MER (or nMER for new customers) = Total Revenue ÷ Total Ad Spend. It sidesteps attribution model disputes and gives a business-level view of marketing efficiency. It is increasingly used alongside ROAS as a northstar metric. | ["mer", "roas", "northstar_metrics", "attribution"] |
ct_015 | critical_thinking | data_interpretation | hard | mcq | Which statistical concept is most important to understand before declaring a winner in an A/B creative test? | Correlation coefficient | Statistical significance and sufficient sample size — results must reach p<0.05 with enough conversions to be reliable | Standard deviation of CPM | Moving average of ROAS | B | [] | Statistical significance (typically p<0.05, meaning less than 5% probability the result is random) and adequate sample size are critical. Calling a winner after 5 conversions is meaningless. Most marketers end tests too early, leading to false conclusions. | ["ab_testing", "statistical_significance", "sample_size", "experimentation"] |
ct_016 | critical_thinking | data_interpretation | medium | mcq | A Google Ads campaign shows a sudden spike in spend and conversions on a Wednesday. The day-of-week analysis shows Wednesdays historically perform poorly. What should you check first? | Immediately pause the campaign and investigate | Check for accidental bid or budget changes, a competitor pausing, or a promotional email sent by the brand that drove website traffic | Assume it's a Google algorithm update and do nothing | Double the budget on Wednesdays going forward | B | [] | Unexplained spikes should be diagnosed before acted upon. Common causes: bid/budget change, a promotional email/push driving site traffic that converts on paid, a competitor pausing (your IS increases), or a tracking change. Check change history first. | ["anomaly_detection", "data_spikes", "change_history", "diagnosis"] |
ct_017 | critical_thinking | budget_allocation | hard | action_based | Explain the concept of 'payback period' in performance marketing and how it should influence media budget decisions for a subscription business with a 12-month LTV. | ["Payback period = time to recover CAC from a customer's revenue. If CAC is $90 and monthly revenue is $30, payback period is 3 months", "For subscription businesses, CPA targets should be based on LTV, not first-purchase revenue", "A brand optimizing for first-purchase ROAS on a subscription product will massively underspend and underacquire", "Recommended approach: set CPA target based on 12-month LTV with acceptable payback period (e.g., 6-month payback on 12-month LTV)", "Example: if 12M LTV = $360, acceptable CAC at 6M payback = $180 (vs much lower if optimizing for first purchase only)", "This unlocks more spend, more growth, and potentially profitable long-term outcomes", "Risk: cash flow \u2014 higher upfront CAC requires longer cash runways; balance growth with cash position"] | LTV-based CPA targeting is fundamental for subscription businesses. First-purchase ROAS optimization leads to systematic underspending and growth limitations. | ["ltv", "cac", "payback_period", "subscriptions", "budget_strategy"] | |||||
ct_018 | critical_thinking | competitive_analysis | medium | mcq | What tool can you use to see what Google Search ads your competitors are currently running? | Google Analytics Competitor Analysis | Google Ads Transparency Center (ads.google.com/transparency) | Google Search Console | Google Keyword Planner | B | [] | Google Ads Transparency Center shows active text ads from verified advertisers. Combined with tools like Semrush or SpyFu, marketers can understand competitor ad copy, keywords, and strategies. | ["competitive_analysis", "transparency_center", "competitor_ads"] |
ct_019 | critical_thinking | data_interpretation | easy | mcq | What does ROAS stand for and how is it calculated? | Return on Ad Spend = Revenue ÷ Ad Spend | Rate of Ad Success = Conversions ÷ Clicks | Revenue Over Ad Strategy = Profit ÷ Total Budget | Return on Asset Sales = Total Sales ÷ Ad Impressions | A | [] | ROAS = Revenue ÷ Ad Spend. A ROAS of 4x means you generate $4 in revenue for every $1 spent on ads. It is the primary efficiency metric in e-commerce paid media. | ["roas", "fundamentals", "kpis"] |
ct_020 | critical_thinking | budget_allocation | hard | mcq | A brand is profitable at a 3.5x ROAS and wants to know the break-even ROAS. Their product sells for $100, COGS is $30, fulfillment/shipping is $10, and other variable costs are $5. What is the break-even ROAS? | 1.0x | 1.8x | 2.22x | 3.0x | B | [] | Break-even ROAS = Revenue ÷ Contribution Margin per order. Contribution margin = $100 - $30 (COGS) - $10 (fulfillment) - $5 (other variable) = $55, i.e. a 55% margin. Break-even ROAS = $100 ÷ $55 = 1 ÷ 0.55 ≈ 1.82x, so 1.8x (option B) is correct. Any ROAS above ~1.82x is contribution-profitable, which is why the brand's current 3.5x leaves healthy headroom. | ["break_even_roas", "unit_economics", "profitability", "cogs"] |
ab_001 | action_based | campaign_optimization | medium | action_based | You manage a $30K/month Meta budget for a fashion brand. ROAS has dropped from 4.5x to 2.8x over the last 3 weeks with no campaign changes. Outline your diagnostic and optimization process. | ["Step 1 \u2014 Rule out tracking issues: verify pixel is firing, check Events Manager for data quality issues", "Step 2 \u2014 Check external factors: did the website change? Is checkout working? Check Shopify/GA4 for overall CVR changes", "Step 3 \u2014 Creative fatigue: check frequency (>3\u20134 is a red flag), CPM trends, CTR decline. If creative fatigue, launch new creative immediately", "Step 4 \u2014 Audience saturation: are audiences too small? Is the retargeting pool shrinking?", "Step 5 \u2014 Competition/CPM: check CPM trends \u2014 if CPMs rose significantly, the same ad spend buys fewer impressions", "Step 6 \u2014 Seasonal factors: is there a Q4/seasonality/competitor promotional event happening?", "Step 7 \u2014 Based on diagnosis, take targeted action: new creative, audience refresh, budget reallocation, or bid strategy adjustment"] | Systematic diagnosis before action prevents wasted effort. The diagnostic ladder moves from external/tracking → creative → audience → market. | ["roas_decline", "diagnosis", "troubleshooting", "optimization"] | |||||
ab_002 | action_based | troubleshooting | hard | action_based | A client's Google Ads account shows 0 conversions in the last 7 days despite normal click volume and spend. Everything was working fine last week. What is your step-by-step investigation? | ["Step 1: Check Google Ads Change History \u2014 any recent campaign, bid, or setting changes?", "Step 2: Check Google Tag Manager / conversion tag \u2014 did a website deployment break the conversion tag?", "Step 3: Use Google Tag Assistant or Test Conversions in Google Ads to verify the tag fires on the thank-you/confirmation page", "Step 4: Check if the conversion action is still active and correctly configured in Google Ads (not paused, correct URL, correct event)", "Step 5: Review Google Analytics \u2014 are there sessions and goal completions recorded? If GA shows conversions but Google Ads shows 0, it's a tagging issue", "Step 6: Check if a website checkout change removed or changed the confirmation page URL", "Step 7: Check for ad disapprovals that might have reduced actual ad serving", "Step 8: If tracking confirmed broken, immediately restore previous tag/code and alert the client"] | Zero conversions with normal clicks = almost always a tracking/tag issue. Systematic tag verification is step one. | ["zero_conversions", "conversion_tracking", "troubleshooting", "tags"] | |||||
ab_003 | action_based | scaling_decisions | hard | action_based | A Meta campaign for a B2C app has been running for 60 days, spending $500/day, CPA of $12 (target: $15), and 40 installs/day. The client wants to scale to $5,000/day. What is your scaling plan? | ["Do not jump from $500 to $5,000 in one step \u2014 this will reset learning and likely spike CPA", "Scale in 20\u201330% increments every 3\u20135 days: $500 \u2192 $650 \u2192 $850 \u2192 $1,100 \u2192 $1,450 \u2192 $1,850 \u2192 $2,400 \u2192 $3,100 \u2192 $4,000 \u2192 $5,000", "Monitor CPA at each step \u2014 if CPA exceeds target ($15), pause increases and stabilize", "Audience expansion: as spend increases, current audiences will saturate. Expand to new LAL sizes, broader interests, Advantage+ Audience", "Creative pipeline: prepare 3\u20135 new creative variations to replace fatiguing ads at scale", "Consider horizontal scaling: duplicate ad sets with new audiences rather than just increasing budget on one", "At $5K/day, audience reach requirements are much higher \u2014 ensure total addressable audience is large enough", "Use Meta's scaling tools: CBO with multiple ad sets, Advantage+ Audience at scale"] | Scaling requires gradual budget increases, audience expansion, and creative pipeline management. Speed kills performance. | ["scaling", "budget_increase", "audience_expansion", "creative_pipeline"] | |||||
ab_004 | action_based | campaign_optimization | medium | action_based | A Google Shopping campaign has strong ROAS (5x) on top 20 products but weak performance on the remaining 200 products (0.8x ROAS). How do you restructure this? | ["Segment campaigns: create a separate 'Hero Products' campaign for the top 20 products with higher bids/ROAS target", "Create a 'Long Tail' campaign for the remaining 200 products with lower ROAS target or Maximize Clicks", "Use Priority settings (High/Medium/Low) in Shopping to control which campaign serves first", "Allocate majority of budget to the Hero Products campaign where ROAS is proven", "For low-performing products: analyze if they have any conversions at all. If zero, consider excluding or using very low bids", "Improve product feed quality for underperforming products (titles, descriptions, images)", "Consider Performance Max with asset groups segmented by product category"] | Product segmentation is the most impactful Shopping optimization. High and low performers need different bid strategies and budgets. | ["google_shopping", "product_segmentation", "campaign_structure", "roas"] | |||||
ab_005 | action_based | troubleshooting | medium | action_based | A Meta campaign targeting cold audiences (LAL 1%) has CPM of $45 and CTR of 0.4%, resulting in CPC of $11.25. Industry benchmark is CPM $18, CTR 1.2%. What are the likely issues and fixes? | ["High CPM ($45 vs $18 benchmark): Could indicate Q4 seasonality, highly competitive audience, or small audience size forcing high CPMs \u2014 check audience size (should be 1M+) and timing", "Low CTR (0.4% vs 1.2%): Creative is not resonating with cold audiences \u2014 the hook, message, or offer is not compelling for first-time viewers", "Fix CPM: broaden audience (LAL 2\u20133%, or Advantage+ Audience), check if running during high-competition period", "Fix CTR: test new creative formats (video vs static), stronger hook in first 3 seconds, better value proposition in headline", "Review ad placement mix: if running in high-CPM placements (Facebook Feed), test Stories/Reels which often have lower CPMs for cold audiences", "Test different creative angles: problem-solution, social proof, UGC-style vs polished production", "Cold audiences need different creative than warm audiences \u2014 ensure creative is built for brand introduction, not retargeting"] | High CPM + Low CTR = both supply (audience/placement) and demand (creative) issues. Fix both simultaneously. | ["cpm", "ctr", "creative_testing", "cold_audiences", "lal"] | |||||
ab_006 | action_based | campaign_optimization | hard | action_based | You inherit a Google Ads account for a SaaS company spending $80K/month. Initial audit reveals: no conversion tracking, only broad match keywords, no negative keywords, no ad extensions, manual CPC bidding, and single ad per ad group. Prioritize a 6-week fix plan. | ["Week 1 (Critical): Set up conversion tracking \u2014 without this, nothing else matters. Implement Google Ads tag for demo/trial sign-up + import GA4 goals", "Week 1: Add account-level negative keywords (jobs, free, DIY, etc.) immediately to stop wasted spend", "Week 2: Add all ad extensions (sitelinks to product pages, callouts, structured snippets, call extension if applicable)", "Week 2\u20133: Keyword audit \u2014 analyze Search Terms Report, identify converting queries, add as exact/phrase match. Pause or lower bids on wasted broad terms", "Week 3: Add RSAs to each ad group (2\u20133 RSAs per ad group with keywords in headlines)", "Week 4: Once 30\u201350 conversions recorded, switch from Manual CPC to Maximize Conversions", "Week 5\u20136: Transition to Target CPA (20% above actual CPA) and begin audience layering (remarketing, customer match)", "Ongoing: Weekly Search Terms review, negative keyword additions"] | Conversion tracking is the absolute foundation — everything else requires it. Follow with quick wins (negatives, extensions) then structural improvements. | ["account_audit", "action_plan", "priority_order", "saas", "conversion_tracking"] | |||||
ab_007 | action_based | reporting_insights | medium | action_based | A CMO asks you to build a weekly performance marketing dashboard. What metrics would you include and how would you structure it? | ["Business metrics (top): Revenue, New Customers, CAC, MER (total revenue / total spend)", "Channel efficiency: ROAS by channel, CPA by channel, CPC, CTR", "Spend and pacing: actual spend vs plan, % budget utilized, projection for month-end", "Trend view: WoW changes with % delta and color coding (green/red) for key metrics", "Quality metrics: Quality Score (Google), Relevance Score (Meta), conversion rate, landing page CVR", "Creative performance: top 3 ads by ROAS/CPA, creative fatigue indicators (frequency, CTR trend)", "Funnel: Impressions \u2192 Clicks \u2192 Conversions with each stage's efficiency", "Anomaly alerts: any metric >20% deviation from prior week or target", "Keep it 1 page / executive-friendly \u2014 link to deeper drill-down reports"] | Good dashboards lead with business outcomes, support with channel metrics, and surface anomalies. CMO-level: business first, tactics second. | ["reporting", "dashboard", "kpis", "cmo", "metrics"] | |||||
ab_008 | action_based | scaling_decisions | hard | action_based | A DTC brand currently spends $20K/month on Meta and $5K/month on Google (brand only). They have a 4.2x blended ROAS (target: 3.5x) and want to grow revenue 50% in 6 months. What is your channel expansion and budget scaling strategy? | ["Current spend is underinvested \u2014 4.2x ROAS above target suggests room to scale before hitting break-even", "Month 1\u20132: Increase Meta budget 25\u201330% (to ~$25\u201326K) and launch Google Non-Brand Search (keywords relevant to product category, $5\u20138K to start)", "Month 2\u20133: Launch Google Performance Max or Shopping if e-commerce ($5\u201310K), monitor CPA vs target", "Month 3\u20134: Based on Google Non-Brand performance, scale winning campaigns 20\u201330% per month", "Month 4\u20135: If Meta and Google are both efficient, consider adding YouTube for upper funnel or TikTok if target demographic is there", "Month 5\u20136: Consolidate learnings, allocate majority of incremental budget to the 2 highest-performing channels", "Target total spend by month 6: ~$45\u201350K/month (roughly 1.8\u20132x current) to achieve 50% revenue growth", "Monitor blended ROAS monthly \u2014 as long as it stays above 3.5x, continue scaling"] | Channel expansion + budget scaling should be methodical: prove new channels at small budgets, then scale winners. Don't expand all channels simultaneously. | ["channel_expansion", "scaling", "budget_strategy", "growth"] | |||||
ab_009 | action_based | campaign_optimization | medium | action_based | Your Meta retargeting campaign (30-day site visitors) shows diminishing returns — CPA doubled over 45 days, frequency is 9.2. What is a full refresh strategy? | ["Immediate: Pause or heavily reduce budget on the fatigued ad set", "Creative refresh: Launch 3\u20135 entirely new creative concepts \u2014 different format (video vs static), different angle (urgency, social proof, different benefit)", "Audience refresh: Expand retargeting windows (60-day, 90-day) to bring in new people to the retargeting pool", "Segmentation: Break the 30-day audience into more specific segments (product page viewers, add-to-cart, initiate checkout) with different messaging", "Exclusions: Exclude purchasers and people who converted to stop wasting budget", "New audience types: Add video viewers (50%, 75%), Instagram/Facebook page engagers as additional retargeting pools", "Consider a 'break' from retargeting this audience for 2\u20133 weeks to reset fatigue", "Test Dynamic Creative Optimization (DCO) with catalog to automatically vary product shown"] | Frequency 9+ = severe fatigue. Full creative refresh + audience expansion + segmentation is required, not just swapping one ad. | ["retargeting", "frequency", "audience_fatigue", "creative_refresh", "segmentation"] | |||||
ab_010 | action_based | reporting_insights | hard | action_based | A performance marketing agency presents a monthly report to a client showing Meta ROAS of 7.2x and Google ROAS of 12.5x. The client is thrilled. As a senior PM, what questions should you ask to validate these numbers? | ["What attribution window is being used? 7-day click + 1-day view inflates ROAS vs 7-day click only", "Does this include view-through conversions? If yes, what % of conversions are view-through?", "Are existing customers included in conversion counting? If so, these are not incremental sales", "What is the new customer %? ROAS on returning customers is misleading for growth measurement", "How does this compare to MER (total revenue / total spend)? If MER is 3x but channel ROAS is 7x+, something doesn't add up", "Has a Lift/Incrementality test been run? Without it, we don't know how much revenue would have occurred organically", "What is the conversion window vs. actual purchase decision time for this product?", "Is there overlap/double-counting between Meta and Google attributed conversions (cross-channel attribution)?", "Request raw data breakdown: attributed revenue by conversion type, attribution model, and new vs returning customers"] | High ROAS numbers often mask attribution gaming. A rigorous PM asks for incrementality proof, attribution model disclosure, and new customer breakdown before celebrating. | ["reporting", "attribution", "roas_validation", "incrementality", "agency_management"] | |||||
ab_011 | action_based | troubleshooting | medium | mcq | A Google Smart Shopping / Performance Max campaign has been running for 2 weeks and has 0 conversions despite $3,000 in spend. What is the most likely issue? | PMax/Smart Shopping never works for small budgets | Conversion tracking is broken, OR the ROAS/CPA target is too aggressive for the learning phase, OR the product feed has critical errors | The campaign needs 6 weeks before generating any conversions | Smart bidding does not work for new accounts | B | [] | Zero conversions with $3K spend in PMax almost always points to: broken conversion tracking (most common), an overly aggressive ROAS target preventing auction participation, or a product feed issue causing no products to serve. Verify each in order. | ["performance_max", "zero_conversions", "troubleshooting", "smart_shopping"] |
ab_012 | action_based | campaign_optimization | hard | action_based | You manage paid search for a B2B SaaS (annual contract value $20K). Current setup: 50 keywords, manual CPC, tracking only 'Contact Form Submissions' as conversions. Lead-to-close rate is 5%. Monthly spend: $15K, leads: 30/month, closed deals: 1–2/month. How do you fundamentally improve this account's ROI? | ["The core problem: optimizing for form fills, not revenue. 30 leads at 5% close rate = 1.5 deals/month. $15K spend for ~$30K ARR is marginally positive but could be much better", "Implement full-funnel conversion tracking: MQL, SQL, Demo Booked, Closed Won \u2014 import CRM data into Google Ads", "Assign conversion values: if ACV = $20K and close rate = 5%, a lead is worth $1,000. Use this to set Target ROAS or conversion value", "Switch to value-based bidding: once revenue data flows back, use Target ROAS (or maximize conversion value) to optimize for deal value, not lead volume", "Keyword audit: focus on high-intent, bottom-funnel keywords (comparison, pricing, alternative, best [category] software) vs. informational queries", "Add negative keywords aggressively to cut non-converting traffic", "Build retargeting lists for website visitors (30, 60, 90 days) and nurture with different messaging", "A/B test landing pages focused on demo booking vs. free trial vs. contact form to improve lead quality"] | B2B paid search ROI requires closing the loop between ad spend and revenue using CRM integration and value-based bidding. | ["b2b_saas", "value_based_bidding", "crm_integration", "lead_quality", "ltv"] | |||||
ab_013 | action_based | scaling_decisions | medium | mcq | When is it appropriate to use Dayparting (ad scheduling) in Google Ads? | Always — you should always restrict ads to business hours | When data shows significantly better CPA or conversion rates during specific hours/days, and you are on manual or ECPC bidding | Only for call-focused campaigns | Dayparting should never be used with Smart Bidding | B | [] | Dayparting makes sense when data-driven performance differences exist by hour/day. With Smart Bidding, Google already adjusts bids by time of day automatically — manual dayparting is less needed and can interfere. It's most useful with manual/ECPC bidding or when you have business constraints (e.g., a call center that's only open 9–5). | ["dayparting", "ad_scheduling", "smart_bidding", "bid_adjustments"] |
ab_014 | action_based | campaign_optimization | medium | action_based | A Meta Lead Generation campaign for a real estate company is generating 200 leads/month at $8 CPA, but the sales team says lead quality is terrible and few convert to viewings. What performance marketing interventions can improve lead quality? | ["Add qualifying questions to the lead form: budget range, timeline, location preference, whether they are pre-approved for mortgage \u2014 friction reduces volume but improves quality", "Switch from 'More Volume' to 'Higher Intent' form type in Meta Lead Ads (requires more friction, improves quality)", "Tighten targeting: narrow audience to high-intent indicators (recently searched for homes, life events like marriage/new job)", "Test landing page lead gen vs. native Meta lead form \u2014 landing page forms with more copy/pre-qualification tend to attract higher-intent leads", "Implement lead scoring: pass sales feedback (qualified/not) back to Meta via CAPI as conversion events, so Meta optimizes for quality leads", "Create a follow-up sequence to qualify leads immediately (auto-SMS/email within 5 minutes of submission)", "A/B test ad copy focusing on serious buyers (e.g., 'For buyers ready to move in 3 months') to self-select intent"] | Lead quality is a systemic issue involving form design, targeting, ad copy framing, and feedback loops to the ad platform. | ["lead_gen", "lead_quality", "meta_lead_ads", "real_estate", "qualification"] | |||||
ab_015 | action_based | reporting_insights | medium | mcq | A brand's Google Analytics shows 10,000 sessions from Meta Ads in a month, but Meta Ads Manager shows 25,000 clicks. What is the most common explanation for this discrepancy? | Meta is lying about clicks to inflate billing | Some Meta clicks land on pages that are slow to load and the user bounces before GA fires; also Meta counts clicks on reactions, comments, profile clicks — not all are link clicks | Google Analytics is broken | UTM parameters are not set on the Meta ads | D | [] | The most common cause of Meta clicks ≠ GA sessions is missing or broken UTM parameters on the Meta ad URLs. Without UTM tags, GA cannot attribute sessions to Meta and assigns them to direct or organic. Answer D is the primary fix: always add UTM parameters to Meta ads. | ["utm", "tracking", "meta_vs_ga", "attribution_discrepancy"] |
ab_016 | action_based | campaign_optimization | hard | action_based | You are launching a new product (no historical data) on both Meta and Google Ads with a $30K launch budget for the first month. How do you structure the launch? | ["Budget split suggestion: Meta 60% ($18K), Google Brand 10% ($3K), Google Non-Brand Search 20% ($6K), reserved for optimization 10% ($3K)", "Meta: Start with Advantage+ Shopping Campaign or broad targeting + Lowest Cost bid to gather data. Test 3\u20135 different creative angles (product benefit, problem-solution, social proof)", "Google Non-Brand: Start with exact/phrase match on high-intent category keywords. Use Maximize Conversions until data accumulates", "Conversion tracking: set up before spend day 1 \u2014 pixel, Google tag, and conversion events (Add to Cart, Initiate Checkout, Purchase)", "Week 1\u20132: Focus on data collection. Do not optimize prematurely \u2014 let campaigns run to gather initial signals", "Week 3: First optimization pass \u2014 pause non-performing creatives, add negative keywords, identify best-performing audiences", "Week 4: Scale winning ad sets/campaigns with 20\u201330% budget increase. Test Lookalike Audiences based on early purchaser data", "Set up a retargeting campaign from day 1 (even small audience) \u2014 capture early site visitors", "Document learnings: which creative, audience, and message resonated \u2014 foundation for month 2"] | New product launches require a data-gathering phase before optimization. Structure matters, but patience is equally important. | ["product_launch", "new_campaign", "budget_allocation", "data_collection", "creative_testing"] | |||||
ab_017 | action_based | troubleshooting | medium | action_based | A Meta campaign was approved and running, then suddenly all ads were disapproved overnight. The client is panicking. What is your step-by-step response? | ["Step 1: Log into Meta Ads Manager and check the specific disapproval reason in Account Quality or the ads themselves", "Step 2: Review the specific Meta advertising policy that was violated (common: health claims, financial services, political content, personal attributes, misleading claims)", "Step 3: If the disapproval seems incorrect (false positive), request a review through Meta's appeal process", "Step 4: If the ad legitimately violates policy, edit the ad copy/creative to comply with the policy and resubmit", "Step 5: Check if the account itself has received a restriction or warning \u2014 escalate to Meta support if needed", "Step 6: While resolving, pause spend on disapproved ads to avoid further policy strikes", "Step 7: Brief the client honestly: explain the cause, the fix, and the expected timeline", "Step 8: Proactively create policy-compliant backup creatives for future contingencies"] | Ad disapprovals require systematic policy review, appeal where appropriate, and creative fixes. Never try to circumvent policies. | ["ad_disapprovals", "meta_policy", "account_quality", "compliance"] | |||||
ab_018 | action_based | campaign_optimization | medium | mcq | A Google Display campaign is generating cheap CPCs ($0.30) but very low conversion rate (0.05%). The Search campaign for the same brand converts at 3.5%. What explains this difference? | Display ads are broken and should always be paused | Search captures high-intent users actively looking for the product; Display interrupts users who are not in a buying mindset — fundamentally different intent levels | Display CPCs are so cheap they attract bot traffic | The landing pages are different for Display vs Search | B | [] | Search vs. Display conversion rate difference is entirely expected and driven by user intent. Display is an awareness/consideration channel — it should be measured on view-through attribution, brand lift, or assisted conversions, not direct CVR comparison to Search. | ["display_vs_search", "user_intent", "conversion_rate", "channel_comparison"] |
ab_019 | action_based | scaling_decisions | hard | action_based | A performance marketing team is managing 15 different clients' Google and Meta accounts. Describe a systematic weekly workflow to manage all accounts efficiently without missing critical issues. | ["Automated alerts: Set up custom rules in Meta and Google to send email/Slack alerts for: spend >20% over/under pace, CPA >30% above target, zero conversions for 48hrs, ad disapprovals", "Daily (5 min/account): Scan automated alerts. Check pacing (are we on track to hit monthly spend?). Flag any anomalies.", "Weekly full review (per account, 20\u201330 min): Review WoW performance vs. KPIs (ROAS, CPA, spend). Check Search Terms report (Google). Review frequency and fatigue signals (Meta). Identify top and bottom performers.", "Weekly optimizations: Negative keyword additions, bid adjustments, pausing underperformers, launching new creatives from approved list", "Monthly deep dive: Audit campaign structure, audience refresh, creative strategy review, budget reallocation across channels", "Standardized reporting template: Use Looker Studio / Google Sheets dashboard with auto-updates per client", "SOP documentation: Each account should have a playbook with targets, approved actions, and escalation thresholds", "Client communication: Weekly 1-page report with key metrics, top insights, and recommended actions"] | Scale and efficiency require systems: automated monitoring, standardized workflows, and clear prioritization — not just working harder. | ["agency_workflow", "account_management", "efficiency", "automation", "reporting"] | |||||
ab_020 | action_based | reporting_insights | hard | action_based | A CMO questions why the company is spending $200K/month on Meta when they can't directly see proof it drives incremental revenue. How do you build the business case for Meta spend? | ["Run a Meta Conversion Lift Study: show a statistically valid holdout group, measure revenue difference between exposed and unexposed groups", "Run a geo-based incrementality test: turn off Meta in one region for 4 weeks, compare revenue growth vs. regions where Meta is running", "Show MER (Marketing Efficiency Ratio) trend: when Meta spend was reduced in the past, did MER decline? Show historical correlation", "Present new customer acquisition data: Meta's contribution to new customer count (not just attributed revenue) \u2014 show Meta new customer CPA vs. other channels", "Show assisted conversion data from Google Analytics \u2014 how many converting Google/direct users had a prior Meta touchpoint?", "Build a brand search volume correlation: show that when Meta spend increases, branded Google search volume rises \u2014 demonstrating demand creation", "Frame it in business language: 'At $200K spend, we acquire X new customers at $Y CAC with a $Z LTV. NPV of those customers is...'", "Acknowledge limitations honestly and propose a rigorous test to prove/disprove incrementality"] | Business cases for upper-funnel spend require incrementality proof, not attribution data. Lift studies and MER trends are the gold standard. | ["incrementality", "business_case", "cmo_presentation", "meta_value", "lift_study"] |
# PM-AGI Benchmark 🎯

The first open-source LLM benchmark for Performance Marketing.

Developed by hawky.ai — evaluating how well LLMs reason, plan, and act in real-world Meta Ads and Google Ads scenarios.
## Dataset Summary
PM-AGI contains 100 expert-crafted questions across 4 categories of performance marketing knowledge:
| Category | Questions | Focus |
|---|---|---|
| Meta Ads | 30 | Campaign structure, targeting, bidding, creative, CAPI, measurement |
| Google Ads | 30 | Search, Smart Bidding, PMax, Quality Score, attribution |
| Critical Thinking | 20 | Data interpretation, budget decisions, competitive analysis |
| Action-Based | 20 | Scenario troubleshooting, optimization, scaling |
## Question Types
- MCQ (63 questions) — Single correct answer, scored 1.0 or 0.0
- Action-Based (37 questions) — Open scenario evaluated by LLM judge (0.0–1.0)
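
For illustration, a minimal per-item scoring sketch consistent with the scheme above. Field names such as `type` and `answer` are assumptions about the dataset schema, and the repository's `evaluate.py` remains the authoritative implementation:

```python
def score_item(item: dict, model_answer: str, judge_score: float | None = None) -> float:
    """Score a single benchmark item.

    - MCQ: exact match on the answer letter, scored 1.0 or 0.0.
    - Action-based: an LLM judge supplies a rating in [0.0, 1.0].
    Field names ("type", "answer") are assumed; confirm with ds["test"].column_names.
    """
    if item["type"] == "mcq":
        return 1.0 if model_answer.strip().upper() == item["answer"].strip().upper() else 0.0
    # action_based: use the judge's rating, clamped to [0, 1]
    return max(0.0, min(1.0, judge_score if judge_score is not None else 0.0))
```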
## Difficulty Distribution
- Easy: 9 questions
- Medium: 50 questions
- Hard: 41 questions
## Usage

```python
from datasets import load_dataset

ds = load_dataset("Hawky-ai/pm-agi-benchmark")
print(ds["test"][0])
```
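
To evaluate on a slice of the benchmark (for example, only the hard action-based scenarios), the standard `datasets` filtering API can be used. The column names below (`type`, `difficulty`) are assumptions about the schema; confirm them first via `column_names`:

```python
from datasets import load_dataset

ds = load_dataset("Hawky-ai/pm-agi-benchmark")
print(ds["test"].column_names)  # confirm the actual field names

# Keep only the hard, action-based scenarios for a focused evaluation run
hard_actions = ds["test"].filter(
    lambda row: row["type"] == "action_based" and row["difficulty"] == "hard"
)
print(len(hard_actions))
```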
## Evaluate a Model

```bash
git clone https://github.com/Hawky-ai/pm-AGI
cd pm-AGI
pip install -r requirements.txt
python evaluate.py --model gpt-4o --provider openai --api-key YOUR_KEY
```
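
Because both MCQ and judge scores fall in [0, 1], per-category and overall results can be summarized with a simple mean. A minimal aggregation sketch, assuming each scored item keeps its `category` field (the official script may aggregate or weight differently):

```python
from collections import defaultdict

def aggregate(results: list[tuple[dict, float]]) -> dict[str, float]:
    """results: (item, score) pairs with scores in [0, 1].
    Returns the mean score per category plus an overall mean."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for item, score in results:
        buckets[item["category"]].append(score)
        buckets["overall"].append(score)
    return {name: sum(scores) / len(scores) for name, scores in buckets.items()}
```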
## Leaderboard
## Citation

```bibtex
@misc{pmagi2025,
  title={PM-AGI: A Performance Marketing Benchmark for Large Language Models},
  author={hawky.ai},
  year={2025},
  url={https://huggingface.co/datasets/Hawky-ai/pm-agi-benchmark}
}
```
## License

MIT — see LICENSE