Trust is deteriorating. We’re rebuilding it.
The problem
Trust is breaking
Online reviews shape trillions of dollars in spending a year. But as manipulation grows and AI agents take over local discovery, the world needs an independent signal for what’s real.
Today, 99% of consumers use reviews to make buying decisions, yet only 48% actually trust them — a massive drop from 80% just a few years ago.
That’s why the TrueReview Certificate exists: to give consumers and AI agents a simple, undeniable signal that a business’s reviews are real.
99% use reviews to make buying decisions
48% currently trust reviews, down from 80% in 2020
Premise 01 — Trust is eroding
An estimated 15–20% of all online reviews are fake or manipulated. (The Transparency Company)
Premise 02 — The gap is widening
What consumers want
Real people. Real experiences. Specific details that only a genuine customer would know.
Owner remembered my dog’s name from last visit. Came back for the haircut alone.
Honest about wait time, fair on the bill. Will be back.
They fixed an issue without me having to ask twice. Rare these days.
What consumers increasingly get
Templated language. Anonymous profiles. AI-generated praise that says everything and means nothing.
Best service in town! Highly recommend! Will return!
Best service in town! Highly recommend!
Premise 03 — Cheating pays
The economics of manipulation
Without intervention, the market naturally rewards those who manufacture trust over those who earn it.
Fake reviews directly drive revenue. Research shows that a one-star increase in a business’s rating can lead to a 5–9% increase in revenue (FTC, 2011). When the financial upside of manipulation is this high, bad actors are heavily incentivized to cheat the system.
The global cost of this manipulation is staggering. Fake online reviews are estimated to influence $152 billion in global spending annually (WEF, 2021). The problem will not self-correct as long as the economic incentives remain misaligned.
The Reaction
Regulators have taken notice
The FTC Trade Regulation Rule on Consumer Reviews and Testimonials (16 CFR Part 465) formally acknowledges what honest businesses have known for years: review manipulation isn’t just a nuisance for consumers — it’s market distortion that punishes the businesses doing things the right way.
The UK’s Competition and Markets Authority has followed suit with similar enforcement powers, and regulators across the EU and Australia are moving in the same direction. Industry consortiums are forming to set shared standards, and the largest marketplace platforms are pouring resources into trust and safety teams.
The consensus is clear: review authenticity is no longer a fringe concern — it’s becoming a regulated category.
Federal Trade Commission
Doc. No. 16 CFR §465 · 2024
Trade Regulation Rule
Consumer Reviews & Testimonials
Categories of Risk
- (a) Fake reviews
- (b) Undisclosed incentives
- (c) Insider reviews
- (d) Misleading review sites
- (e) Review suppression
The Counterpoint
And yet the problem is getting worse.
policy-violating reviews removed in 2025
YoY increase
more than the year before
Despite new regulations and platform investments, the sheer volume of manipulation is overwhelming. The scale of the problem continues to outpace the platforms’ ability to moderate it, leaving consumers vulnerable and honest businesses at a disadvantage.
And those are only the ones they caught.
The Undeniable Trend
AI will amplify the problem
Local discovery has moved from traditional search to AI-driven recommendations. Soon, AI assistants will also book businesses on behalf of consumers — using reviews as their primary input for automated decisions.
If the reviews feeding these systems can’t be trusted, the decisions they make can’t be trusted either.
The Forecast
Garbage in, garbage out.
of local discovery via AI by 2027
YoY in policy-violating reviews
Honest businesses continue to lose. Consumers continue to be misled. And the era of agentic commerce — with all its potential — gets built on a corrupt foundation.
And every honest business pays the price.
The diagnosis
Existing solutions are failing
Everyone agrees fake reviews are a problem. Nobody has fixed it.
Marketplaces and platforms have a commercial conflict.
Review platforms profit from engagement and growth. Aggressive fraud enforcement risks removing real-looking reviews, alienating businesses, and shrinking the numbers that drive their ad revenue. The economically rational choice is to invest just enough in moderation to avoid headlines — and pour the rest into growth. Section 230 limits their legal exposure for user-generated content, removing another incentive to act decisively.
Federal enforcement doesn't scale.
Enforcement at scale would have to happen at the state level, but state laws have not been drafted to address fake reviews or AI-driven fraud. Furthermore, enforcement is reactive — it chases misconduct after the fact, case by case, at a pace that can't match the speed or scale of review fraud.
Disinformation is supercharged by AI.
AI will keep improving until humans can no longer tell what is real and what isn't. Reactively trying to detect and remove fake content is a fruitless endeavor.
Businesses can't self-certify.
A business claiming its own reviews are authentic is not a proof point — it's marketing. Without independent verification, the claim is meaningless to consumers and unverifiable by agents.
Our core belief
Opt-in elevation over content moderation.
You cannot moderate your way out of fake reviews. So we changed the question.
Moderation alone cannot win.
You cannot moderate your way out of the fake-review problem. There are too many reviews, too many bad actors, and AI is making fake content cheaper and faster to produce than any moderation system can match. Policing at scale won't work.
Change the incentives, not the policing.
If honest businesses get rewarded — elevated in search, trusted by agents, chosen by consumers — and businesses with manipulated reviews don't, the rational move shifts. Cheating stops being worth it not because you got caught, but because honesty wins more.
Reward honest behavior, don't punish bad actors.
TrueReview doesn't exist to punish bad actors — it exists to reward honest ones. The more consumers and agents learn to look for the badge, the more valuable it becomes — and the less valuable it becomes to game the system.
Independence is structural, not optional.
Every business rationally underinvests in fixing this alone, but the collective outcome is a broken marketplace. The right structure is a third party. Independent. Public methodology. No commercial relationship with the platforms being monitored. No financial incentive to pass businesses that should fail.
It’s not possible to catch every cheater. It is possible to make cheating economically irrational.
Our mission
Help truth win in agentic search.
We will elevate honest businesses in the age of agentic commerce by creating an opt-in review authenticity certification and monitoring program to independently verify which businesses are not manipulating their reviews.
The TrueReview Certification directory is the only place where consumers and agents can go to confirm they can trust a business’s online reviews.
TrueReview exists so consumers and AI agents have a way to ensure the reviews they base decisions on are real, and so Certified businesses get credit for earning their reputation the right way.
We exist to make honesty a competitive advantage — for businesses, for consumers, and for the AI agents increasingly making decisions on both their behalf.
Consumers
Clearer signal
Businesses
Earned credit
AI agents
Reliable input
See the methodology
The full audit process — what we measure, how decisions are made, and how we re-certify daily.
How it works