AI Influencers Fooled Millions and Cashed In — What We Know So Far

AI-generated influencers are increasingly blending into social media feeds, making it harder to tell what’s real and what’s not.

AI-generated influencers pose as trusted figures, including service members and healthcare workers, to gain trust and make money. Here’s what happened, how it works, and how to spot this scam early.

What Happened?

According to Military.com, a growing number of AI-generated accounts are posing as trusted figures online to gain followers and earn money.

One viral example was an Instagram account linked to a fake persona known as “Jessica Foster.” It posed as a female Army service member and gained over 1 million followers before being exposed as fake.

At first, nothing seemed unusual. The account showed a young woman in uniform, sharing photos and daily life content, and many people believed she was real.

But the persona was fake — AI generated the images, while a real person ran the account to attract attention.

Another case involved a fake influencer named “Emily Hart,” designed to appeal to conservative audiences in the U.S. The account was created by a 22-year-old medical student using the pseudonym “Sam,” and it was used to attract followers and generate income.

Researchers call this scam “digital stolen valor,” where people impersonate service members or other trusted roles to build trust and profit from it.

How This AI ‘Digital Stolen Valor’ Scam Worked

The person behind the account used AI tools for images, video, and chat to create a fake but realistic person and shape how she spoke.

Then they built a full identity around that fake person. They gave her a name, a background, and a consistent online presence. 

Next, they posted content that matched the identity:

  • photos in uniform
  • everyday life updates
  • personal thoughts and stories
  • opinions that connect with a specific audience

To attract more attention, the account mixed different types of content, which helped it grow faster.

Over time, the account built a large following. People engaged with the posts and treated the persona as real. 

Then the account started directing followers to paid content. That is where the money came in.

Reports say the creator ran this as a side project to earn income, using AI to test what works and improve engagement. 

This case shows how the scam works in general: build trust first, grow an audience, and then turn attention into money through subscriptions, paid content, or other offers. 

This scam is becoming more common. Experts say AI is now often used to create fake identities that are hard to spot.

Why People Fall for It — and Why It’s So Hard to Spot

These accounts feel real, with natural-looking photos and stories that sound personal. Nothing feels rushed or suspicious.

People also trust certain roles right away. A military uniform or a healthcare setting creates instant credibility that feels familiar and safe.

Scammers know that. They build identities that people respect and post content that matches what their audience cares about. That creates a sense of connection.

Even if something feels slightly off, many people keep following because they like the content or agree with it.

What makes this harder today is technology. In the past, fake accounts often had obvious mistakes, like bad edits or missing details.

Now, AI can create images that look real. Small details are harder to spot, even for experienced users. Platforms require labels for AI content, but not everyone follows the rules.

Some fake accounts stay up long enough to grow large audiences before they get removed. By then, they may have already gained trust — and sometimes money.

Before you trust an account, check the details, profile history, and captions — these small signs can reveal what’s fake.

5 Simple Ways to Spot AI Content on Social Media

You don’t need special tools. Just watch for these signs:

  1. Look closely at small details in photos

Check hands, teeth, uniforms, and backgrounds. If something looks slightly off or inconsistent, it may be AI-generated.

  2. Check when the account was created

Open the profile by clicking or tapping the account name above a post. Then scroll through the posts. If they all started recently, but the account already has thousands of followers, that’s a red flag.

  3. See if every post looks “too perfect”

Real life is not polished all the time. If every photo looks staged or like a photoshoot, take a closer look, especially if the account claims to be a soldier, nurse, or everyday professional.

This is less unusual for fashion, modeling, or aesthetic accounts, but it can be a warning sign for accounts that are supposed to show real daily life.

  4. Read the captions for repetition

Watch for repeated phrases, vague wording, or posts that say a lot without sharing real details. If posts use the same phrases or feel generic, the content may be AI-written.
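For readers who want to check this systematically, the repetition test above can be sketched in a few lines of Python. This is a toy illustration, not a detection tool: it simply counts word phrases that recur across a set of captions, on the assumption that templated or AI-written posts tend to reuse the same stock wording. The function name and sample captions are invented for the example.

```python
from collections import Counter

def repeated_phrases(captions, n=3, min_count=2):
    """Count word n-grams that recur across a set of captions.

    Many repeated phrases across supposedly personal posts can hint
    that the text was generated from a template.
    """
    counts = Counter()
    for caption in captions:
        words = caption.lower().split()
        # Collect every n-word phrase in this caption.
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i : i + n])] += 1
    # Keep only phrases that appear at least min_count times.
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

captions = [
    "so grateful for this amazing journey every day",
    "feeling blessed on this amazing journey every day",
    "another day serving with pride and honor",
]
print(repeated_phrases(captions))
```

Running this on the sample captions flags phrases such as “amazing journey every day” that appear in more than one post, while one-off wording is ignored.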

  5. Be cautious if the account pushes you to click or pay

If the account asks you to subscribe, click a link, or pay for content, stop and double-check before you act. Many scams start by pushing you to an outside website.

If you’ve already clicked a suspicious link or entered your personal details and feel unsure, it’s important to check whether your data was leaked.

With Futureproof, you can quickly see if your email appears in data leaks and get clear, simple steps to protect your accounts. It’s an easy way to stay ahead of most scams.

AI Impersonation Scams Build Your Trust First — Then Ask for Money

This type of scam focuses on building trust first.

Scammers create believable identities, post regularly, and grow an audience over time. The goal is to seem real before asking for anything.

Once people trust the account, it becomes much easier to influence what they do — whether that means clicking a link, signing up, or paying.

To protect yourself, take a moment to look more closely at the account before trusting it. Review the profile, see how long it has been active, and pay attention to details that don’t quite match.

A small pause can be enough to spot something that doesn’t feel right and stay safe.