
What if a fake video ruined your reputation—at school, online and in your own community? That’s the emerging reality for students across the country as deepfake videos generated by artificial intelligence become a new weapon in the bullying playbook. 

Deepfake videos leverage advanced AI to create synthetic content that can be nearly indistinguishable from genuine footage. Unlike traditional forms of bullying—limited to direct interactions or written messages—AI-generated videos fabricate highly convincing visual "evidence" of individuals participating in harmful, embarrassing or illegal activities.

Sergio Alexander, Educational Studies Ph.D. Student

Enter educational leadership Ph.D. student Sergio Alexander, capeless but committed—ready to take on digital bullies. When Alexander began his doctoral studies at TCU, he didn’t expect to become an early voice in one of the most pressing digital threats facing K–12 schools: deepfake cyberbullying. But his scholarship is doing exactly that.

“We’re entering an era where students aren’t just getting bullied in hallways or group chats,” Alexander said. “They’re being targeted by synthetic content that can manipulate their identities, humiliate them publicly and leave lasting psychological scars—all without ever being in the same room.” 

The digital deception dilemma
Alexander’s article, published in The Clearing House, explores how AI-generated deepfakes are being used to falsely depict students in compromising situations, leading to social ostracism, emotional harm and, in some cases, school discipline based on fabricated evidence.

Deepfakes don't just spread rumors—they create seemingly irrefutable evidence that leaves students defenseless. The realistic nature of this content means viewers trust their eyes, even though what they're seeing is completely false. 

His pioneering scholarship examines the psychological, educational, technological and legal complexities of deepfake cyberbullying in K–12 settings. 

“There’s a significant policy gap,” Alexander noted. “Most schools are unprepared. They have policies for phone use or cyberbullying, but few have language that acknowledges the existence of AI-generated media or outlines steps for handling it.”

Illustration of a person on a cell phone looking at social media icons, including Instagram, Facebook and Snapchat.

Behind the scholarship
Alexander—a former classroom teacher—witnessed firsthand how traditional bullying impacted students. As deepfake tools became more accessible, he began asking urgent questions: What happens when fake media becomes indistinguishable from real content? How do schools prepare for this new frontier of harassment? 

His published article, “Deepfake Cyberbullying: The Psychological Toll on Students and Institutional Challenges of AI-Driven Harassment,” investigates how schools are grappling with this insidious threat—and what they must do next to support students. 

“Deepfake technology, once primarily associated with political disinformation and entertainment, is now being weaponized in schools as a new and insidious form of cyberbullying, posing significant risks to student safety and well-being.” 

The article outlines both the psychological trauma inflicted on victims and the urgent need for schools to update their policies, training and support systems. 

The rise and impact of deepfake bullying 
Unlike traditional bullying, deepfake videos rely on hyper-realistic digital manipulation that can depict students in inappropriate or harmful scenarios. Victims suffer humiliation, anxiety, depression and social isolation, and often endure repeated trauma as the content resurfaces.

Deepfake videos differ dramatically from traditional bullying because they can instantly reach millions of people through social media platforms. Once posted, harmful content can go viral within hours or even minutes.

“The accessibility, believability, rapid dissemination and enduring presence of deepfake content create a potent combination of harm that requires a comprehensive response.” 

Challenges of deepfake bullying in schools 

  • Policy creation: Existing cyberbullying policies rarely account for the complexities of AI-generated content. Policies must clearly define deepfake bullying and outline prevention, detection and response strategies. 
  • Detection: Deepfakes are notoriously hard to identify. Detection tools require financial and technical resources that many schools lack. 
  • Swift dissemination: Viral content spreads faster than most schools can respond. Victims face repeat exposure, amplifying harm. 
  • Training gaps: Teachers and staff often lack awareness or technical training to recognize and respond to deepfakes early. 
  • Legal gray area: Schools face legal ambiguity when incidents originate off campus or outside instructional hours, leaving administrators uncertain about how to intervene.

“Many schools aren't sure whether they can legally address deepfake incidents if they're created off-campus, even though their impact is devastatingly felt within school walls.”  

Addressing the crisis
Alexander calls for a multilayered strategy rooted in equity and empathy.

Schools must allocate resources to peer-support initiatives and partnerships with mental health professionals to provide targeted interventions for victims of deepfake bullying. Establishing safe reporting mechanisms can help students seek assistance without fear of retaliation. 

Incorporating curricula that teach students to critically evaluate digital content, recognize manipulation and understand the ethical implications of AI technologies is vital. Practical exercises, such as identifying real versus fake media, can empower students to navigate the digital landscape responsibly and reduce the impact of misinformation. 

Schools need updated policies that go beyond traditional cyberbullying frameworks to specifically address deepfake-related incidents. This includes prevention strategies, detection tools and collaboration with law enforcement and technology providers to ensure swift removal of harmful content. 

Training educators to recognize the signs of deepfake bullying and support affected students is equally critical. Comprehensive professional development programs should include modules on: 

  • Recognizing deepfake content 
  • Trauma-informed responses 
  • Media ethics and digital literacy 

According to Alexander, media and digital literacy integration should be a core component of school curricula, preparing students to critically assess and ethically engage with digital content. 

The response to this complex issue requires more than just punitive measures; it demands a multilayered approach involving education, digital literacy and policy reform. 

What’s next?
According to Alexander, it’s no longer optional for educators to consider digital threats, media literacy integration and policy innovation—it’s essential. He hopes to expose the severity of deepfake bullying and equip communities with practical, research-backed strategies to safeguard educational environments and support students. 

Read the full article