Why ‘Poor’ Ad Scores Don’t Mean Poor Performance (And Why You Shouldn’t Panic)

If you’ve ever logged into Google or Meta Ads and seen a bright red warning that your campaign performance is Poor or Below Average, you might have felt a twinge of panic. But take a deep breath: these ratings are not a reflection of your actual results or our agency’s performance. They’re automated, algorithm-driven suggestions that often benefit the platform more than the advertiser.

What These Scores Actually Measure

Both Google and Meta use internal scoring systems to rate how “optimized” your campaigns are. You might see these labeled as:

Quality Score (Google)

Optimization Score (Google)

Account Quality / Opportunity Score (Meta)

These scores take into account factors like:

Ad relevance

Expected click-through rate (CTR)

Landing page experience

Account history

How many of the platform’s “recommended” features you’re using

But here’s the catch: the formulas behind these scores are proprietary and opaque. Even experienced media buyers don’t know exactly how they’re calculated or weighted. What we do know is that the scores are designed to nudge advertisers toward using more of the platform’s automated features – and often, toward spending more money.


Why You Shouldn’t Panic When You See “Poor”

A low score doesn’t mean your ads aren’t working. It means the platform believes it could make your ads “better” – usually by adding more automation or expanding your targeting. In many cases, these recommendations would actually undermine your campaign strategy or waste budget.

“Google will often say things like ‘poor performance’ to try to get you to opt into features that give it more control over the ads (and therefore, you and us less control). For example, it wants us to add a bunch of keywords that we know are irrelevant to our current goals. So it is trying to get us to spend more money to opt into those features. Please don’t put a lot of stock in those ratings – we are actively adjusting the ads to make sure they’re working as intended.”

Kalina Perkins, Media Buyer at Best Practice Media

In other words: a low score doesn’t mean failure. It means the platform disagrees with your strategy – often because it can’t monetize your efficiency.


Why Platforms Use These Scores

Let’s be blunt: these scores are part performance metric, part sales pitch. They serve to:

Encourage adoption of automated bidding or targeting features

Increase advertiser budgets under the guise of “improving” campaigns

Create a sense of urgency and dependency on the platform’s tools

Independent advertisers and PPC experts have noted that these “optimization” nudges often benefit the platform far more than the advertiser.

“They put up a ‘New recommendation’ that we PAY THEM MORE to raise our score back up… like a scare tactic.”

Advertiser on Reddit


What Actually Matters

At Best Practice Media, we focus on business performance, not arbitrary platform metrics. Here are the metrics that matter:

Conversions / Leads — Real-world actions and sales. We optimize to increase conversion volume and quality.

Cost Per Acquisition (CPA) — Measures efficiency. We work to lower CPA while maintaining lead quality.

Return on Ad Spend (ROAS) — Ties spend to revenue. We track every dollar spent vs. earned.

Conversion Rate (Post-Click) — Reflects funnel and landing page performance. We test creative, copy, and UX to improve this.

Trend Data — Reveals long-term performance. We monitor weekly and monthly improvements, not one-day fluctuations.

These metrics are tied directly to your growth – not to Google’s or Meta’s.
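For readers who like to see the arithmetic, here is a minimal sketch of how those core metrics are computed from campaign totals. The spend, revenue, click, and conversion figures are invented purely for illustration.

```python
# Hypothetical campaign totals (illustrative numbers only)
spend = 2_500.00        # total ad spend ($)
revenue = 9_750.00      # revenue attributed to the ads ($)
clicks = 1_840          # post-click visits to the landing page
conversions = 92        # leads or sales

cpa = spend / conversions                # Cost Per Acquisition
roas = revenue / spend                   # Return on Ad Spend
conversion_rate = conversions / clicks   # post-click conversion rate

print(f"CPA: ${cpa:.2f}")                                    # ~$27.17 per lead
print(f"ROAS: {roas:.2f}x")                                   # ~3.90x return
print(f"Post-click conversion rate: {conversion_rate:.1%}")   # ~5.0%
```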


How We Handle ‘Poor’ Scores Behind the Scenes

When a campaign shows a low score, here’s what we actually do:

Investigate, not react. We treat scores as signals to review – not commands.

Analyze underlying factors. We look at CTR, relevance, and landing page experience to identify true areas of improvement.

Test intentionally. We may test platform suggestions – but only if they align with your goals and data.

Communicate transparently. If you see a low score, we’ll explain what it means and what we’re doing about it.

Stay focused on results. We prioritize performance metrics that drive your business – not vanity grades from an algorithm.


The Bottom Line

When you see a “Poor” score, it’s not a crisis – it’s just the platform asking for more control. Our job is to make sure you stay in control of your ad dollars and that every click drives real results.

A low platform score doesn’t mean poor performance. It means you’re optimizing for your business, not their bottom line.

Next Steps for Clients:
If you see a low rating and are concerned, reach out to your BPM media buyer. We’ll walk you through what it means in context and show you the metrics that actually matter.

Stop Optimizing Ads Too Soon: Why You Need to Let Meta Breathe

One of the most common ways advertisers waste money is not by overspending but by managing campaigns too aggressively. Too often, ads are scaled or cut based on sample sizes so small they have no real meaning. One click on five impressions may technically equal a 20% CTR, but that is not a signal worth acting on. In 2025, the most effective media buyers are the ones who know when to take action and when to step back.

1. What’s the Minimum Sample Size?

Industry best practices and Meta’s own guidance set clear thresholds that data should reach before performance comparisons become reliable:

Impressions: At least 500–1,000 impressions per ad before CTR or CPC comparisons are useful.

Clicks: Around 50–100 clicks are needed before drawing directional insights about engagement or traffic quality.

Conversions: Meta recommends roughly 50 conversions per week per ad set for the algorithm to optimize effectively.

Anything smaller is unstable. A single click can double or halve your CTR. That is not actionable insight; it is variance.
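To make that concrete, here is a quick sketch of how far one extra click moves CTR at different impression counts; the counts themselves are hypothetical.

```python
# How far does one extra click move CTR at different sample sizes?
for impressions, clicks in [(5, 1), (500, 10), (5_000, 100)]:
    ctr_before = clicks / impressions
    ctr_after = (clicks + 1) / impressions
    print(f"{impressions:>5} impressions: CTR {ctr_before:.1%} -> {ctr_after:.1%}")

# Approximate output:
#     5 impressions: CTR 20.0% -> 40.0%   (one click doubles it)
#   500 impressions: CTR 2.0% -> 2.2%
#  5000 impressions: CTR 2.0% -> 2.0%
```

At small volumes a single event swings the metric wildly; at useful volumes it barely registers.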

2. Then vs. Now: The Evolution of Optimization

In the early years of Facebook advertising, the standard approach was to cut underperformers quickly and scale “winners” the moment they looked promising. Manual intervention was required to see results. Today, the environment has changed:

Meta’s algorithm identifies performance patterns more quickly and more accurately than a human media buyer.

Optimization happens at the campaign and ad set level, which means turning ads off too early can work against the system.

Frequent changes, such as pausing or editing, restart the learning phase and slow down progress.

The role of a modern media buyer is not to second-guess the algorithm, but to provide it with quality data and enough room to optimize effectively.

3. Applying Statistical Significance

When comparing ads, the real question is whether differences in performance are meaningful or simply the result of chance.

Confidence Level: In most digital marketing tests, the goal is 95% confidence, meaning you accept no more than a 5% chance that the observed difference is the result of random variation alone.

Sample Size: To reach that confidence level with CTR or CVR, dozens or even hundreds of clicks or conversions per variant are often required.

Margin of Error: With too few samples, the margin of error can be so wide that the numbers lose all practical value. A 20% CTR measured on five impressions carries a margin of error in the neighborhood of plus or minus 35 percentage points or more.

If you do not have enough data to calculate statistical significance, you do not have enough data to optimize.
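As a rough illustration of the margin-of-error point above, here is the standard normal-approximation formula for a proportion at 95% confidence. It is a simplification (very small samples really call for exact or Wilson intervals), but it shows the scale of the problem.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# 1 click on 5 impressions -> 20% CTR
print(f"{margin_of_error(0.20, 5):.1%}")      # ~35%, i.e. plus or minus ~35 percentage points
# 100 clicks on 5,000 impressions -> 2% CTR
print(f"{margin_of_error(0.02, 5_000):.1%}")  # ~0.4 percentage points
```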

4. Best Practices for Smarter Decision-Making

Budget for meaningful results. Campaigns aimed at conversions should be funded to achieve about 50 conversions per ad set per week. Minimal daily spend will not provide enough data.

Wait before making changes. Allow campaigns to run for at least 5–7 days to let volatility smooth out and to give the algorithm a chance to stabilize.

Use calculators when testing. Free tools, such as Evan Miller’s A/B Test Calculator, can quickly determine whether performance differences are statistically significant (a minimal sketch of the kind of test these tools run follows this list).

Prioritize big levers. Offer, audience, and creative strategy have a greater impact than making constant micro-adjustments to individual ads.
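As promised above, here is a minimal sketch of the kind of two-proportion test those calculators perform. It is not Evan Miller’s implementation, and the conversion counts are made up for illustration.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates.
    |z| >= 1.96 corresponds to roughly 95% confidence (two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical test: Ad A converts 48 of 1,000 clicks, Ad B converts 62 of 1,000
z = two_proportion_z(48, 1_000, 62, 1_000)
print(f"z = {z:.2f}")   # ~-1.37; |z| < 1.96, so the gap is not yet significant
```

In that example the “winner” looks about 29% better, yet the data cannot rule out plain luck; the right move is to keep both ads running until the numbers can actually settle the question.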

The Bottom Line

Many advertisers shut down ads after just a handful of impressions or rush to scale after one lucky click. These decisions reflect impatience rather than strategy. The true advantage now comes from knowing when to let campaigns run, when to trust Meta’s learning system, and when to act only once results are statistically reliable. Patience consistently outperforms panic, and ads that are given room to breathe almost always deliver stronger outcomes than those cut off too early.