
When You Have A Single Reputation Score, It’s Hiding A Lot

Corporate Reputation | 01 Apr, 2026

When an Australian bank ran RepTrak's "AI as a Stakeholder" audit, the result was hard to ignore. Its RepTrak score with the Informed General Public sat at 66.3. Its AI reputation score was 43.6, a gap of more than 22 points, enough to move from "average" to "weak" on the RepTrak scale. And it wasn't an isolated finding: across the four major banks in the study, every institution scored meaningfully lower with AI than with human audiences.

The instinct for most communications teams is to question the AI. But that instinct leads to the wrong place. The more productive question is: what is AI actually reflecting, and why?

Reputation Has Never Been One Number

The gap between an AI score and an IGP score is surprising partly because we've grown accustomed to treating reputation as a single, unified metric. In practice, it never has been. Reputation is the aggregate of how different stakeholder groups perceive a company, and those groups don't see the same things, weight the same drivers, or access the same information.

Employees prioritize Workplace and Conduct. Customers weight Products and Services more heavily. Investors focus on Performance and Governance. The Informed General Public sits somewhere across all of it. A single score averages those views into something useful for tracking, but it can also obscure the divergences underneath.

AI doesn't average in the same way. It draws from whatever is most present, most reinforced, and most structured in the available content ecosystem. That means it will systematically overweight what has been written about a company, particularly in earned media, regulatory filings, and high-authority sources, and underweight what hasn't been articulated in those places. When an AI score diverges from an IGP score, it's often because the content ecosystem is telling a different story than the general population is holding.

That's a stakeholder problem, not an AI problem.

The Hidden Cost of Managing to the Average

Many organizations believe they are managing stakeholders. What they are actually managing, in many cases, is averages.

A primary reputation score gets established. Stakeholder cuts get layered on top. But those cuts are often proxies: employees blended into broader sentiment, investors inferred from general data, small sample sizes for specialized audiences. It produces a picture that feels complete but leaves real tensions invisible. When everyone is folded into the same measurement, the divergences between groups get smoothed away before anyone has a chance to act on them.

AI can't be managed that way. It doesn't sit inside your normal data collection. It has its own sources, its own weighting, its own logic, and it produces its own score. You can't blend it into an average and call it done. That separateness is exactly what makes it useful: it forces the question that averaged data rarely does, which is what any single audience actually thinks, on its own terms.

What Closing the Gap Actually Looks Like

Measuring each stakeholder group directly isn’t just a good approach to managing AI reputation. It’s a good approach to managing reputation overall.

The companies that do this, rather than inferring stakeholder views from aggregated data, tend to find two things: a clearer picture of what's actually driving reputation, and more precise options for doing something about it.

One global insurer identified a meaningful disconnect between how employees and the general public perceived the company. Rather than treating this as a communications problem, they addressed it as a culture and alignment problem, building internal messaging that matched external positioning and activating employees as ambassadors for the brand. The result was more consistent perception across stakeholder groups and fewer conflicting signals for any audience to encounter.

A similar pattern played out for one company that treated employee experience as a primary investment. Improvements in how employees felt about the company cascaded outward into customer perception. The underlying logic is straightforward: employees are the most direct bridge between internal culture and external reputation, and strong internal signals tend to produce stronger external ones.

The third pattern is perhaps the most instructive for AI specifically. One company struggling with investor perception had been measuring that audience as part of a broader stakeholder average rather than directly. When they measured investors as a distinct group and tailored messaging accordingly, they found a clearer picture of what was actually driving skepticism and were able to address it with more targeted content.

The Right Question to Ask

The practical shift here isn't complicated, but it does require discipline. Before asking what a reputation score means, it's worth asking which stakeholders are driving it and how consistently their views are being shaped. Before concluding that an AI output is inaccurate, it's worth asking what content that AI was trained on, and whether it represents the full story.

AI isn't distorting your reputation. It's showing you where it's already inconsistent, just through a lens that most companies haven't been designed to look through yet.
