4 Fake Faces: How GenAI Is Transforming Social Engineering 

Email is the biggest threat vector. But increasingly, we see the need to include social media, text messaging and voice calls in cybersecurity education. Why? Because generative AI (GenAI) is transforming social engineering, and threat actors are using it to convincingly fake people and personas.  

Attackers can use GenAI to create images, text, audio and video and drive powerful scams to steal personal, financial or sensitive data. They can automate their operations and increase the likelihood of success. And they can scale and distribute attacks through an array of channels, like messaging apps, social media platforms, phone calls and, of course, email. 

Research for the latest State of the Phish report from Proofpoint found that 58% of people who took a risky action in 2023 believed their behavior put them at risk. That leads us to a critical question: When you receive a message—IM, DM, VM or email—can you be 100% confident that the sender is who they claim to be? Not in a world where attackers use GenAI. 

In this post, we look at four ways that threat actors use this powerful technology to deceive people. 

  1. Convincing conversational scams 
  2. Realistic deepfake content 
  3. Personalized business email compromise (BEC) attacks 
  4. Automated fake profiles and posts  

1: Convincing conversational scams 

Threat actors use GenAI to create highly convincing conversational scams that mimic human interactions. Natural language processing (NLP) models help them generate personalized messages. Popular NLP architectures include recurrent neural networks (RNNs), sequence-to-sequence models and transformer-based models (like GPT-3). 

While the lures attackers use will vary, they all aim to start a conversation with the recipient and earn their trust. Threat actors might interact with a target for weeks or months to build a relationship with the goal of convincing that person to send money, invest in a fake crypto platform, share financial information or take some other action.  

How does it work?  

Threat actors collect large datasets of text-based conversations from sources like social media, messaging apps, chat logs, data breaches and customer service interactions. They use the datasets to train NLP models to understand and generate human-like text based on input prompts. The models learn to recognize patterns, understand context and generate responses that sound natural.  

Once they have trained an NLP model, threat actors can use it to generate text-based messages for scamming their targets. The model can mimic specific personas or language patterns and tailor its responses to common scam scenarios. This makes it hard for people to distinguish between legitimate and fake communications on social platforms like Instagram, messaging apps like WhatsApp and dating apps like Tinder. 
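
To make this concrete, here is a minimal sketch of persona-conditioned text generation using the Hugging Face transformers library. The small, public gpt2 model stands in for the far more capable models attackers actually use, and the persona prompt is hypothetical; the point is how little code it takes to produce a fluent, in-character reply.

```python
# A minimal sketch: persona-conditioned text generation.
# gpt2 is a small public stand-in for far more capable models.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical persona prompt; real scammers refine these over many turns.
prompt = (
    "Anna is a friendly, patient investor who chats casually and builds trust.\n"
    "Target: I'm not sure crypto is safe. What do you think?\n"
    "Anna:"
)

result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```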

How Proofpoint can help you address this risk 

Proofpoint Security Awareness delivers timely educational content about threat trends. This includes a two-week campaign with training like our “Attack Spotlight: Conversational Scams,” which helps users learn to recognize and avoid these scams. 

2: Realistic deepfake content 

Threat actors can use GenAI to create deepfakes that falsely depict people saying or doing things they never did. Attackers use advanced machine learning (ML) models to create highly realistic fake content that resembles a person’s appearance, voice or mannerisms.  

How does it work?  

Threat actors will gather a dataset of images, audio recordings or videos that feature the person whose likeness they want to mimic. They use the dataset to train a GenAI model to create fake content, such as images or videos, that resembles the real thing. Some of these models can even judge the realism of their own output as they train.  

One popular model is the generative adversarial network (GAN). It pairs a generator, which produces the fake content, with a discriminator, which judges whether that content looks real. As the two networks compete, the generator’s output becomes progressively more convincing. Attackers can improve results further by adjusting model parameters, expanding the training dataset or fine-tuning the training process. To enhance the realism of the deepfake, they might also apply post-processing techniques like adjusting lighting and color or adding subtle imperfections.  
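
The adversarial loop is easier to see in code. Below is a simplified sketch in PyTorch with toy architectures and sizes (nothing a real deepfake tool would use); it shows only the core idea of the generator and discriminator training against each other.

```python
# Simplified GAN training step: the generator learns to fool the
# discriminator; the discriminator learns to catch the generator.
# Toy sizes for illustration only. Requires: pip install torch
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 grayscale images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator to separate real samples from fakes.
    fakes = generator(torch.randn(n, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to make the discriminator say "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(n, latent_dim))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each pass through this loop nudges the generator toward output the discriminator can no longer flag, which is why GAN output improves with training.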

Deepfakes can be powerful tools in malicious scenarios. For instance, attackers can use voice deepfakes to impersonate loved ones. One example is the popular phone scam—aka “Hi mom”—where a bad actor pretends to be a person’s child or grandchild requesting money for an emergency.  

Much like a fingerprint, a voiceprint is unique to each person. Threat actors can extract these biometric markers from a small sample of media, such as voicemails, recorded videos and podcasts, and use GenAI to imitate a person’s voice.  
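
To illustrate how little audio it takes to characterize a voice, here is a crude sketch of a voiceprint comparison. It uses classical MFCC features instead of the learned speaker embeddings that real recognition and cloning systems rely on, and the file names are hypothetical; the principle is that a short clip yields a compact, comparable signature.

```python
# Crude "voiceprint" sketch: mean-pooled MFCC features as a signature.
# Real systems use learned speaker embeddings, but the idea is the same.
# Requires: pip install librosa numpy
import librosa
import numpy as np

def voiceprint(path: str) -> np.ndarray:
    """Return a rough fixed-length signature for one audio clip."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two signatures (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical clips: a scraped voicemail vs. a podcast appearance.
score = similarity(voiceprint("voicemail.wav"), voiceprint("podcast.wav"))
print(f"Voice similarity: {score:.2f}")
```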

How Proofpoint can help you address this risk 

We have a full security awareness campaign about deepfake trends, including bite-sized training videos, an interactive game and a “Top 3 Things” article. We also issue timely Threat Alerts when Proofpoint analysts detect an alarming deepfake attack, such as a recent Q1 financial scam in Hong Kong.   

3: Personalized BEC attacks 

In our 2024 State of the Phish report, we noted that GenAI is likely helping threat actors with BEC attacks, especially in non-English-speaking countries. BEC attacks are on the rise in countries like Japan, South Korea and the United Arab Emirates, which attackers have historically avoided due to language barriers, cultural differences or a lack of visibility.  

BEC scams typically involve impersonation tactics. Threat actors will pretend to be trusted individuals, like company executives. They aim to trick employees into taking unsafe actions like transferring money or revealing sensitive data.  

Now, threat actors can use GenAI to enhance these attacks. For example, they can write convincingly in many languages. They can also improve the quality of their messages while creating them faster. 

How does it work? 

GenAI models learn patterns and relationships from large datasets of human-created content. They then use those learned patterns to generate new content. With GenAI, attackers can accelerate and automate the creation of socially engineered messages that are personalized for individual recipients and highly convincing.  

For instance, bad actors can use GenAI to write fake emails and texts that mimic the style, tone and signature of a spoofed individual. They can use the AI model to automate the creation of these phishing messages and quickly generate a large volume of them tailored to the targeted recipients. This makes it difficult to evaluate the authenticity of the messages.  
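
The automation step is mundane, which is exactly the problem. The sketch below merges per-recipient details into a prompt template; in a real campaign, each prompt would go to a text-generation model. The template, the targets and the model call are all hypothetical stand-ins for illustration.

```python
# Sketch of BEC personalization at scale: one templated prompt per target.
# Targets and template are hypothetical; no model is actually called here.
from string import Template

PROMPT = Template(
    "Write a short, urgent email in the voice of $executive (CFO) asking "
    "$name in $department to pay the attached invoice today."
)

# Hypothetical targets, e.g., scraped from public profiles or a breach.
targets = [
    {"name": "Dana", "department": "Accounts Payable", "executive": "J. Smith"},
    {"name": "Lee", "department": "Treasury", "executive": "J. Smith"},
]

for target in targets:
    prompt = PROMPT.substitute(target)
    # In a real attack, this prompt feeds a text-generation model,
    # yielding a unique, fluent message per recipient at machine speed.
    print(prompt)
```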

Deepfakes can play a role here, too. Threat actors might lend credibility to their fake requests by following up with spoofed voicemails, voice calls or video messages that appear to show a company executive giving instructions or authorizing transactions. 

How Proofpoint can help you address this risk 

We focus on BEC security awareness and offer hundreds of content pieces that range from interactive videos to live-action humor to one-minute animations. Our threat analysts also feed intelligence about in-the-wild BEC attacks, like payroll diversion and fraudulent invoices, into our weekly Threat Alerts.  

4: Automated fake profiles and posts  

We discussed how GenAI generates realistic images, audio and videos of real individuals. Bad actors can also use AI models to create fake identities for impersonation on social media and news platforms. And they can use AI models to automate the creation of large volumes of fake accounts, articles, graphics, comments and posts.  

How does it work?  

With minimal training, GenAI models can analyze publicly available data and social media profiles and adapt that information for targeted use cases. The models can: 

  • Mimic the style and tone of legitimate communications from trusted sources 
  • Translate and localize text in multiple languages (see the sketch after this list) 
  • Handle repetitive tasks, like replying to messages  
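
To ground the localization point, here is a minimal sketch using a small open translation model from the Helsinki-NLP opus-mt family via the transformers library; the sample message is hypothetical.

```python
# Minimal sketch of automated localization with a small open model.
# Steps like this can be chained to post fluently in many languages.
# Requires: pip install transformers sentencepiece torch
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

# Hypothetical first message from a fake recruiter persona.
text = "Thanks for connecting! I'd love to hear more about your work."
print(translator(text)[0]["translation_text"])
```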

Threat actors can use fake identities to align with their targets’ interests. For instance, they might automate the process of creating profiles on platforms like Facebook, Instagram, web forums and dating apps. They might also use AI models to quickly process social media conversations and automatically write compelling replies. Their goal is to establish a broad range of relationships through which they can exploit people’s trust and vulnerabilities.  

With AI, threat actors can carry out these malicious activities at scale. That increases their chances of success with objectives like spreading misinformation and influencing online discussions. 

How Proofpoint can help you address this risk 

Proofpoint Security Awareness delivers continuous training about social engineering along with timely analyst warnings about social media impersonation. Our material includes evergreen articles and videos on how to avoid and outsmart threat actors, as well as weekly Threat Alerts about recent attacks involving GitHub, LinkedIn and X.  

Get our free kit! GenAI requires multifaceted education 

In all these scenarios, bad actors use GenAI to pretend—realistically and convincingly—that they are someone they are not. They manipulate human psychology with social engineering. And that leads people to take unsafe actions like clicking on malicious links, providing personal information or transferring funds.  

Teaching users about the risks of GenAI requires a multifaceted approach to security awareness. Users need to treat what they see and hear with a critical mindset, and security teams need to stay informed about emerging threats and countermeasures. Education is especially important for employees who handle financial transactions or sensitive data. 

At Proofpoint, we use our industry-leading threat intelligence to build security training, materials and phishing simulations around real-world threats. We provide education through an Adaptive Learning Framework, which is a continuous, progressive scale—from basic habits to advanced concepts. This approach helps ensure that people are trained on the topics and at the level of difficulty that is most appropriate for their needs. 

We cover the trending topics of deepfakes and conversation scams in our complimentary 2024 Cybersecurity Awareness Month kit, so be sure to check it out.