October 18, 2024
In 2023, we feared that generative AI would undermine students’ academic honesty; in 2024, those fears have quickly escalated into concerns about students’ digital safety and wellbeing.
With educators focused on how to prevent students from using AI to cheat, many were taken by surprise when a news story in March 2023 revealed a far darker side of generative AI: a group of high school students had created a “deepfake” video of a middle school principal, showing him shouting offensive slurs and behaving aggressively.
In fact, he had never done anything of the kind. Although the footage looked real, the video was entirely synthetic: an AI-generated fake.
This, unfortunately, was the first of several incidents around the country in which students used AI deepfakes (manipulated or wholly synthetic photos, videos, or audio clips) to harass, bully, and violate the privacy of their peers and school staff.
Let's talk about the steps K-12 leaders can take to address the use of AI for cyberbullying. But before exploring what your district can do about this risk, we need a clearer picture of what's going on today.
News stories tend to overemphasize the extremes. What's truly happening on the ground, and how much of an issue is generative AI? Here’s what the current research tells us:
Research shows that generative AI is being used to produce Child Sexual Abuse Material (CSAM) at alarming rates. In 2023, the National Center for Missing and Exploited Children (NCMEC) received 4,700 reports of CSAM that involved generative AI.
Serious risks arise when children are featured in sexually explicit imagery or videos. Content of this nature has the same harmful effects, and is no less illegal, whether it's 'real' or AI-generated.
When we ask youth directly about how they and their peers use generative AI, some hopeful findings emerge. In a recent Thorn report, 80% of teen respondents said they don’t have friends or classmates who have ever used AI tools to generate nudes of other children.
According to a Common Sense report, getting help with homework was the primary reason a majority (53%) of teens and tweens used AI in 2024.
Despite the hopeful picture painted by these numbers, the same report revealed that most students (62%) are worried about the potential for generative AI to be used for bullying—as are parents.
While there have been ‘only’ a handful of high-profile incidents in which deepfake technology was used for harassment, even a handful is too many. These incidents are deeply unsettling and highlight just how accessible this technology is to young people.
In 2024, AI-supported search engines and chatbots were the tools most often used by youth aged 13-18, with image and video generators coming in third and fourth place, respectively.
It’s not difficult for young people to find generative AI tools. Many are advertised on social media platforms, despite violating those platforms’ Terms of Use. And it takes only a simple browser search to surface posts on platforms like Reddit that name, and even rank by ‘quality’, so-called “nudify” tools: apps that generate fake nude images of real people.
In a recent lawsuit, the office of the San Francisco City Attorney revealed that 16 “nudify” websites were collectively visited more than 200 million times between January and June 2024.
Without regulations in place, the number of students flocking to generative AI will continue to grow. After all, it took ChatGPT only two months to amass a monthly active user base of 100 million (a milestone that took TikTok nine months and Instagram 2.5 years to reach).
After rushing to ban generative AI tools early in 2023, many districts have since changed course, with those enforcing complete bans now the minority. In a December 2023 survey, only 7% of teachers and school leaders said their districts ban the use of generative AI.
Rather than ban AI, most districts now aim to give their communities the support and guidance to use AI tools in practical and healthy ways. That starts with finding strategies to combat the risks of generative AI, such as deepfake cyberbullying.
Here are actionable steps that K-12 leaders can take this school year to mitigate the misuse of AI and support students and families.
We know that parental engagement has a positive effect on a child's wellbeing, both offline and online. Encourage conversations in your school community about generative AI and the ways it can be misused for cyberbullying.
Common Sense reports that almost 60% of teenagers either attend a school with no rules around the use of generative AI or are unsure whether their school has any. Developing an AI policy is a must for K-12 school districts.
Your district's AI policy should spell out what constitutes acceptable use of generative AI and what happens when those rules are broken. As you decide on the disciplinary measures for breaches of the policy, weigh them carefully: the goal is to deter misuse while still encouraging the practical, healthy uses of AI described above.
Generative AI raises the stakes for K-12 IT teams, who must now work to prevent students from both accessing and creating inappropriate content online.
Evaluate your current EdTech solutions to confirm you're using the features that support your AI policy, such as web filtering that blocks known “nudify” and deepfake sites, and monitoring that flags attempts to reach them.
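To make the filtering piece concrete, here is a minimal Python sketch of the domain-blocklist check at the heart of most web filters. This is an illustration only, not any vendor's implementation: the domain names are hypothetical placeholders, and in practice districts rely on the managed category lists their filtering provider maintains.

```python
# Minimal sketch of domain-blocklist matching, the core check behind
# web filters that deny access to known "nudify" and deepfake sites.
# The domains below are hypothetical placeholders, not real sites.

BLOCKED_AI_DOMAINS = {
    "example-nudify-tool.com",
    "example-deepfake-app.net",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the blocklist."""
    labels = hostname.lower().rstrip(".").split(".")
    # Walk from the full name up through its parent domains,
    # e.g. cdn.example-nudify-tool.com -> example-nudify-tool.com
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKED_AI_DOMAINS:
            return True
    return False

if __name__ == "__main__":
    print(is_blocked("example-nudify-tool.com"))       # True
    print(is_blocked("cdn.example-deepfake-app.net"))  # True (subdomain match)
    print(is_blocked("khanacademy.org"))               # False
```

Real filters layer this kind of check with category databases, HTTPS inspection, and search-term alerting. The practical takeaway for district leaders is simply to confirm with your filtering vendor that AI image-generation sites fall under a blocked category and that alerts reach the right staff.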
In honor of Bullying Prevention Month, we hosted a live discussion exploring the use of AI for cyberbullying, and steps that districts can take to help combat it. You can watch the full webinar on demand here.