The Use of AI Deepfakes in Cyberbullying

October 18, 2024

In 2023, we feared that generative AI would undermine students’ academic honesty; in 2024, those fears have quickly escalated to concerns about students’ digital safety and wellbeing.

With educators focused on preventing students from using AI to cheat, many were taken by surprise when a news story in March 2023 revealed a far darker side of generative AI: a group of high school students had created a “deepfake” video of a middle school principal that showed him shouting offensive slurs and behaving aggressively.

In fact, he had never done anything of the kind. While the footage looked real, it was synthetic: an entirely fabricated video.

This, unfortunately, was the first of several incidents around the country in which students used AI deepfakes (AI-manipulated or wholly synthetic photos, videos, or audio clips) to harass, bully, and violate the privacy of their peers and school staff.

Let's talk about the steps K-12 leaders can take to address the use of AI for cyberbullying. Before exploring what your district can do about this risk, though, we need a clearer picture of what's going on today.

The current landscape of generative AI

News stories tend to overemphasize the extremes. What’s truly happening on the ground, and how much of an issue is generative AI? Here’s what the current research tells us:

Generative AI is being connected to CSAM

Research shows that generative AI is being used to produce Child Sexual Abuse Material (CSAM) at alarming rates. In 2023, the National Center for Missing and Exploited Children (NCMEC) received 4,700 reports of CSAM that involved generative AI.

Serious risks arise when children are featured in sexually explicit imagery or videos. Content of this nature has the same harmful effects, and is no less illegal, whether it's 'real' or AI-generated.

Students are more likely to employ AI for academics

When we ask youth directly about how they and their peers use generative AI, some hopeful findings emerge. In a recent Thorn report, 80% of teen respondents said that none of their friends or classmates have used AI tools to generate nudes of other children.

For the majority (53%) of teens and tweens, getting help with homework is the primary reason they have used AI in 2024, according to a Common Sense report.

However, students are worried about deepfakes and cyberbullying

Despite the hopeful picture painted by these numbers, the same report revealed that most students (62%) are worried about the potential for generative AI to be used for bullying—as are parents.

While there have ‘only’ been a handful of high-profile incidents where deepfake technology was used for harassment, even a handful is too many. These incidents are deeply unsettling and highlight just how accessible this technology is to young people.


Deepfake technology is becoming more accessible

In 2024, AI-supported search engines and chatbots were the tools most often used by youth aged 13-18, followed by image generators and video generators in third and fourth place, respectively.

It’s not difficult for young people to find generative AI tools. Many are advertised on social media platforms, despite violating those platforms’ Terms of Use. A simple browser search surfaces posts on platforms like Reddit that name (and even rank by ‘quality’) these ‘nudify’ tools.

In a recent lawsuit, the office of the San Francisco City Attorney revealed that 16 “nudify” websites were visited more than 200 million times between January and June 2024.

Without regulations in place, the number of students flocking to generative AI will continue to grow. After all, it took ChatGPT only two months to amass a monthly active user base of 100 million (a milestone that took TikTok nine months and Instagram 2.5 years to reach).

Schools are trending away from banning AI

After rushing to ban generative AI tools early in 2023, many districts have since changed course, and those enforcing complete bans are now a minority. In a December 2023 survey, only 7% of teachers and school leaders said their districts ban the use of generative AI.

Rather than ban AI, the majority of districts are now seeking to provide their communities with support and guidance to utilize AI tools in practical and healthy ways. This means first finding strategies to combat the risks of generative AI, such as deepfake cyberbullying.

How can districts combat AI-enabled cyberbullying?

Here are actionable steps that K-12 leaders can take this school year to mitigate the misuse of AI and support their students and families.

1) Educate your school community

We know that parental engagement has a positive effect on a child's wellbeing, both offline and online. Encourage conversations in your school community about generative AI and cyberbullying, with a particular focus on:

  • Helping students recognize what constitutes bullying, and clearly defining what can and cannot be considered “joking” or “teasing”
  • Clarifying consensual vs. non-consensual sharing of information
  • Informing students and their families that the possession of CSAM—even when it is self-generated—is illegal
  • Actively and frequently involving parents in conversations around online safety

2) Develop an AI use policy

Common Sense reports that almost 60% of teenagers either attend a school with no rules around the use of generative AI or are unsure whether their school has any. Developing an AI policy is a must for K-12 school districts.

Your district's AI policy should:

  • Clarify the types of generative AI that can and cannot be used
  • Define clear consequences for misuse
  • Be communicated clearly and frequently to your whole school community

3) Consider consequences with care

As you decide on the disciplinary measures for breaches of your school's AI policy, consider the following key insights:

  • Equity: Children in minority groups are more likely to use generative AI. Black teens, in particular, are more than twice as likely as their Latino and white peers to have their work flagged as AI-generated when it isn’t.
  • Special needs: Students with special education needs, who tend to use generative AI more often, are at risk of being disciplined more frequently.
  • Support for victims: Supportive measures for those who have been victimized are often lacking. In data from the 2023-24 school year, only 36% of teachers reported that their school provides adequate support for the victims of deepfake imagery abuse.

4) Leverage EdTech to help navigate AI risks in real time

With generative AI in the mix, K-12 IT teams must now prevent students not only from accessing inappropriate content online but also from creating it.

Evaluate your current EdTech solutions to ensure you're utilizing features and solutions that support your AI policy. For instance:

  • Adjust your filter rules to accommodate emerging AI tools (a minimal sketch of this idea follows this list)
  • Consider real-time image blurring in your filter to prevent students from viewing harmful AI-generated content
  • Seek filtering solutions that give teachers an active role
  • Use a digital monitoring solution to flag cyberbullying behavior
  • Integrate parental control tools with your filtering solution to make parents active partners in keeping students safe online
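
To make the first bullet concrete, here is a minimal, purely illustrative Python sketch of the blocklist matching that underlies most filter-rule updates. The domain names are invented placeholders, and a real deployment would use your filtering vendor's rule or category configuration rather than standalone code; the sketch shows only the matching logic.

    # Hypothetical sketch: blocklist matching for emerging AI "nudify" sites.
    # The domains below are invented placeholders, not real services.
    from urllib.parse import urlparse

    # In practice, this set would come from your filtering vendor's category
    # feeds plus domains reported locally by staff and students.
    BLOCKED_AI_DOMAINS = {
        "example-nudify-tool.com",
        "example-deepfake-generator.net",
    }

    def is_blocked(url: str) -> bool:
        """Return True if the URL's host is a blocked domain or one of its subdomains."""
        host = (urlparse(url).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS)

    # The first two example requests match the blocklist; the third does not.
    for url in (
        "https://example-nudify-tool.com/upload",
        "https://app.example-deepfake-generator.net/new",
        "https://en.wikipedia.org/wiki/Deepfake",
    ):
        print(url, "->", "BLOCK" if is_blocked(url) else "allow")

The design point worth noting is that the check matches subdomains as well as exact hosts, since these tools frequently resurface under new hostnames. Even so, list-based filtering always lags behind newly launched tools, which is why it belongs alongside the monitoring and parental-control measures above rather than in place of them.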

In honor of Bullying Prevention Month, we hosted a live discussion exploring the use of AI for cyberbullying, and steps that districts can take to help combat it. You can watch the full webinar on demand here.


