The Tasalli

AI Face Model Scams Use Real People Now

AI
Editorial
5 min read

    Summary

    A new trend on the messaging app Telegram shows that scammers are hiring real people to help carry out AI-driven fraud. Job listings for "AI face models" have appeared in dozens of online channels, seeking mostly women to appear on camera. These models use special software to change their appearance in real time while talking to victims. By combining a human face with AI technology, criminals are finding it easier to trick people into sending money or sharing private information.

    Main Impact

    The biggest impact of this development is the loss of trust in video communication. For a long time, people believed that seeing someone on a live video call meant the person was real and honest. Now, scammers are using "human-in-the-loop" tactics, where a real person provides the movement and voice while AI provides a fake face. This makes digital scams much more convincing and harder for the average person to detect, leading to higher financial losses for victims worldwide.

    Key Details

    What Happened

    Investigations into Telegram channels have uncovered a growing market for people willing to act as the face of a scam. Criminal groups post ads looking for models who are comfortable being on camera for long hours. Once hired, these models use "deepfake" software. This technology maps a different face onto the model's head in real time. When the model smiles, speaks, or moves, the AI-generated face does the same. This allows a scammer to look like a beautiful woman, a trusted businessman, or even a specific person the victim knows.

    Important Numbers and Facts

    The scale of these operations is surprisingly large. Some job listings require models to handle up to 100 video calls per day. These calls are often short, designed to "prove" to a victim that the person they are chatting with is real. The models are usually paid a flat fee or a small commission based on how much money they help steal. Dozens of these recruitment channels exist, some with thousands of members, showing that this is not just a small problem but a structured industry.

    Background and Context

    In the past, online scammers mostly used stolen photos to create fake profiles. This is often called "catfishing." However, as people became more aware of these tricks, they started asking for video proof. Scammers first tried using pre-recorded videos, but those were easy to spot because they did not react to what the victim was saying. The move to live AI face-swapping is the next step in this criminal evolution. It combines the social skills of a real human with the deceptive power of artificial intelligence. This is frequently used in "pig butchering" scams, where victims are groomed over weeks to invest in fake cryptocurrency schemes.

    Public or Industry Reaction

    Security experts and tech researchers are sounding the alarm about how easy these tools have become to use. While high-end AI used to require expensive computers, basic face-swapping software can now run on a standard laptop. Privacy advocates are concerned that apps like Telegram do not do enough to monitor these job boards. Many people in the tech industry are calling for better "liveness detection" tools. These are programs that can tell if a video feed has been altered by AI, but scammers are constantly finding ways to bypass these safeguards.

    What This Means Going Forward

    As this technology improves, the line between what is real and what is fake will continue to blur. We can expect to see these tactics used not just for money scams, but also for political misinformation or corporate spying. For the general public, this means a shift in how we interact with strangers online. Experts suggest that people should look for small glitches in video calls, such as strange shadows around the eyes or mouth, or hair that looks blurry. In the future, we may need to use "secret words" or secondary ways to verify that the person on the screen is actually who they claim to be.

    Final Take

    The rise of AI face models shows that technology is making old scams more dangerous than ever. While AI has many benefits, it is also giving criminals a powerful way to hide their true identities. Staying safe now requires more than just a strong password; it requires a healthy sense of doubt whenever a stranger asks for money or personal details over a video call. As the tools for deception get better, our ability to stay alert must keep pace.

    Frequently Asked Questions

    What is an AI face model?

    An AI face model is a person hired by scammers to sit in front of a camera. Using software, their real face is replaced with a fake one in real time during video calls to trick victims.

    How can I tell if a video call is a deepfake?

    Look for unnatural movements, such as blinking that looks strange or skin that looks too smooth. Sometimes the edges of the face will flicker if the person moves their hand in front of their chin or turns their head quickly.

    Why do scammers use Telegram for these jobs?

    Telegram offers a high level of privacy and less moderation than other social media platforms. This makes it a popular place for criminal groups to communicate and recruit workers without being easily caught.
