EXECUTIVE PERSPECTIVE

JG BEZUIDENHOUT

Chief Future Officer
Offernet

JG Bezuidenhout is a founding partner of the South African subsidiary of Offernet.net, a data technology company headquartered in London, United Kingdom. Based in Cape Town, JG is the global head of Offernet's advisory and innovation hub, responsible for monitoring and implementing cutting-edge solutions, particularly in the digital marketing environment. Offernet's Ravenwatch service addresses a global problem: adapted for highly regulated environments, it is a managed service that detects, monitors, reports, and removes digital threats, with no complex tech stack or legal fire drills required. If you've never audited your brand's exposure to impersonation and fraud, now is the time.

 

Ravenwatch was created because brands were being abused at scale, and the response was too slow, too reactive, and too manual, says JG Bezuidenhout, co-founder of Offernet. Clients needed a way to monitor their brand, detect threats, and take action without spending millions on lawyers or relying on platforms that don't prioritise rapid enforcement.

From fake "Amazon clearance sales" to deepfake investment pitches featuring Elon Musk, Sir Richard Branson, and senior investment bank executives, Facebook and Instagram feeds are increasingly flooded with scam ads weaponising the trust your brand has spent years building. And most brands are not responding, not because they don't care, but because they don't know where to start.

Your brand might already be compromised, and you wouldn't know until it's too late. In today's digitally connected economy, that statement is no longer provocative; it's simply realistic. Welcome to the brand impersonation crisis, and the case for why every business needs cyber threat intelligence. What is the impact, and what is the antidote?

 

The Business of Brand Impersonation is Booming

Meta’s ad platforms have become a playground for scammers. According to whistleblower evidence reported by The Wall Street Journal, internal Meta documents revealed that 70% of new advertisers on certain ad types were promoting scams or misleading offers. This has become alarmingly visible. Scam ads impersonating major brands are widespread on Facebook and Instagram. Some examples from the past year:

  • Fake “80% off” clearance ads for big-name retailers, linking to clone websites that harvest payment details (e.g., Which? exposed fake Currys sites pushing heavily discounted electronics)
  • WhatsApp/SMS job scams impersonating employers and recruiters, offering “easy part-time income” in exchange for “admin fees” (UK recruiters and US firms have issued public alerts about impostor outreach on WhatsApp/Telegram)
  • AI-generated celebrity endorsement scams featuring Elon Musk, Sir Richard Branson, and Martin Lewis, pushing “AI trading bots,” crypto platforms, or get-rich-quick schemes (MoneySavingExpert has maintained ongoing warnings about fake Martin Lewis ads)
  • Cloned “investment” ads that funnel users into WhatsApp/Telegram “consultant” chats running sophisticated banking and brokerage fraud (the UK FCA continues to warn about “clone firms” impersonating authorised institutions)

This isn't limited to Meta's platforms. Spoofed domains, clone websites, and search ads are used in parallel to increase believability and reach. Because the ads are paid and targeted, they look legitimate. Victims often realise the truth only after they've lost money or shared sensitive data.

 

This Isn’t a Platform Problem Anymore. It’s a Leadership Problem

Meta’s official legal position — in the United States and elsewhere — is that it “does not owe a duty of care to users.” This defence is rooted in Section 230 of the U.S. Communications Decency Act, which broadly shields platforms from liability for third-party content, including paid ads. In practice, this means Meta cannot be easily held responsible for scam ads even when those ads cause financial harm, reputational damage, or impersonation.

In parallel, the UK's Online Safety Act 2023 introduces duties on major platforms to tackle illegal content and paid-for fraud ads, with Ofcom developing detailed codes and guidance. Enforcement is still evolving, and scammers continue to exploit the gaps.

The limits of platform liability have been tested in high-profile matters — from Andrew Forrest’s action in Australia to Martin Lewis v Facebook in the UK (which led to platform changes and funding for anti-scam initiatives). Platforms argue they cannot reasonably monitor or control every piece of content.

Meanwhile, enforcement capacity is finite. FTC actions, FBI IC3 reporting, Action Fraud, the NCA, and the City of London Police pursue large fraud rings — but transnational actors, burner ad accounts, and rapid re-registration make timely takedowns difficult. Much of the response remains after the fact.

Which brings us to the uncomfortable truth for brand owners: if your brand is being used to defraud people, and you're not actively monitoring or responding, could that expose your company to reputational damage, consumer trust erosion, or even legal vulnerability?

There is no universal statutory duty forcing trademark holders to police every instance of online impersonation. However, prolonged failure to act can amount to tacit tolerance, weakening your ability to enforce your trademark against future infringers, and in some circumstances can bolster laches, acquiescence, or estoppel arguments that limit injunctive relief.

And in the court of public opinion? Silence is often interpreted as indifference. Customers don't distinguish between your brand and a scammer using your logo; they simply ask: “Why didn't you do anything?”

 

Brand Impersonation is a Marketing, Compliance, and C-Suite Problem

This is not just a digital marketing issue:

  • Marketing teams are watching their legitimate campaigns compete with scam lookalikes, diminishing return on ad spend (ROAS)
  • Legal and Compliance teams face exposure when customers are harmed by impersonators using company trademarks
  • Risk and IT teams must account for fraud vectors beyond traditional cyber threats

And the C-suite owns brand equity. If your brand becomes “known for scams,” it erodes long-term value. Credible analyses show record fraud losses: the FBI's IC3 reported $16.6 billion in cybercrime losses in 2024, with investment fraud, much of it crypto-related, accounting for the largest share; UK Finance reports over £1 billion stolen annually through banking fraud and scams, with a substantial share originating online.

 

Who's Liable When Scammers Use Your Brand?

Legally, the primary liability for brand-impersonation scams lies with the scammer: they are the one committing fraud. This conduct can attract criminal charges and civil claims by victims. However, questions of secondary liability are increasingly raised. Platforms hosting scam content have been criticised for not acting in good faith (e.g., regulator and private actions over fake celebrity investment ads). While platform liability is not entirely settled, pressure to clamp down on fraud is intensifying. The bottom line is that brands themselves are victims, but doing nothing carries its own risks.

 

Trademark Risks and Unintended Consequences

Local brand owners are starting to ask tough questions about the consequences of doing nothing. Under trademark statutes (e.g., the US Lanham Act; the UK Trade Marks Act 1994) and common-law passing off, failing to address impersonation can backfire. Some key concerns include:

 

Trademark Enforceability

Courts frown upon lengthy toleration. Failure to act against impersonators can weaken your ability to enforce later. Over time, delay and silence may support laches, acquiescence, or estoppel and, in practice, can limit or even defeat injunctive relief, leaving you with slower, costlier damages claims. There's another risk: if impostors freely use your name, the mark's distinctiveness and goodwill can erode, pushing you toward dilution (and in extreme cases, genericide). The nightmare scenario: a brand that becomes less source-identifying, harder to enforce, and easier for rivals (and scammers) to mimic. In short, failing to police your trademark today could leave you with a diluted or defenceless brand tomorrow.

 

Reputational Harm and Consumer Trust

Businesses are at real risk of reputational harm or lost consumer trust if scams in their name go unchecked. If scammers repeatedly abuse your brand, customers may start doubting the brand itself. Doing nothing emboldens scammers; it signals to consumers that the brand isn't safeguarding its name. High-profile incidents, from fake retailer “closing-down” ads driving traffic to newly registered clone sites, to celebrity deepfake endorsements, have forced companies to issue public warnings, coordinate legal takedowns, and invest in customer education to prevent churn and protect brand equity. Cleaning up the mess is costly; teams must chase removals across platforms and hosts, as well as reassure customers who were targeted or defrauded.

 

Duty to Protect Brand Equity (Fiduciary Responsibility)

No statute forces a business to sue every impostor, but brands have a strong duty to protect trademarks and goodwill as core corporate assets. Directors are expected to act in the best interests of the company, which includes safeguarding valuable intangibles. Good brand governance today means brand policing: monitoring social media, domain registrations, marketplaces, and search ads for imposters. When a scam is discovered, acting swiftly protects the public and demonstrates that the brand does not tolerate infringement. This isn't just a legal tactic; it's responsible brand stewardship.
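For teams wondering where monitoring even begins, one concrete starting point is watching for lookalike domains. The sketch below is illustrative only, not part of any Offernet or Ravenwatch product: it generates common typosquat candidates for a brand name, which could then be compared against new domain registration feeds.

```python
# Minimal sketch: generate common typosquat variants of a brand name so
# they can be checked against newly registered domains. Illustrative only;
# commercial monitoring services use far richer techniques (homoglyphs,
# TLD swaps, certificate-transparency feeds, WHOIS data).

def typosquat_variants(name: str, tld: str = "com") -> set[str]:
    variants = set()
    # Character omission: "amazon" -> "amzon"
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])
    # Adjacent transposition: "amazon" -> "maazon"
    for i in range(len(name) - 1):
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    # Letter doubling: "amazon" -> "aamazon"
    for i in range(len(name)):
        variants.add(name[:i + 1] + name[i] + name[i + 1:])
    variants.discard(name)  # keep only lookalikes, not the genuine name
    return {f"{v}.{tld}" for v in variants}

print(sorted(typosquat_variants("amazon")))
```

Even this crude list, diffed daily against a zone-file or registration feed, surfaces many of the clone sites described above before they start running ads.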

Pursuing scammers can be challenging (they are often anonymous or offshore), but the risks of doing nothing can be devastating. While the scammers bear the legal blame, brand owners who turn a blind eye also pay the price, with diminished rights and reputational damage. Vigilant brand protection, through legal action, public education, and collaboration with platforms and regulators, is fast becoming a necessary part of doing business.