What Schools Need To Know About Sexually Explicit Generative AI

Generative AI has come a long way in a relatively short amount of time. Platforms such as ChatGPT, DALL-E 2 and Copilot allow users to enter prompts to generate content, including images, videos, text, audio and even code. While many people are beginning to utilise generative AI in their day-to-day work, this technology has become a problem for schools. 

While this technology can be used in ways that complement learning, such as assisting with research and refining ideas, it can also be used in ways that are damaging to the wellbeing of other students, especially when it comes to sexually explicit generative AI. 

What is sexually explicit generative AI? 

Just as with mainstream AI platforms, there are countless adult AI platforms that allow users to input prompts to create sexually explicit imagery, videos and text-based content. 

Other types of explicit generative AI allow users to input a photo of an individual to generate an image of that person without clothing, or to superimpose someone’s face onto an existing explicit image. 

In August this year, the eSafety Commissioner reported that children are using explicit AI-generated images to bully their peers, taking cyberbullying to a disturbing new level. 

 

What are the risks to children? 

The use of these kinds of AI images in cyberbullying can have an extreme and devastating impact on victims, including: 

  • Loss of self-esteem 
  • Feelings of shame 
  • Anxiety  
  • Depression 
  • Learning difficulties and inability to focus at school 
  • Self-harm or suicidal behaviour 
  • Difficulty sleeping 
  • Becoming isolated and withdrawn 
  • Feelings of violation

Unfortunately, it’s not just children using these sites. 

Predators are generating AI child sexual abuse images in such huge quantities that law enforcement is finding it difficult to differentiate between AI-generated imagery and real child victims. 

In July this year, a 48-year-old Victorian man was jailed for 13 months after producing almost 800 AI-generated child abuse images. 

 

How can we stop this kind of activity? 

AI-generated child abuse material is illegal and punishable by up to 15 years’ imprisonment. 

The Australian Federal Police and its partners are committed to stopping child exploitation and abuse and the Australian Centre to Counter Child Exploitation is helping to drive a collaborative national approach to combatting child abuse. 

The Australian federal government is also planning to implement legislation that would make it a criminal offence to create and share non-consensual AI-generated sexually explicit images and videos.  

While this currently applies only to material depicting adults, with children covered by existing child abuse material laws, it is encouraging to see this issue being addressed at a federal level.  

In the meantime, it is important for us as educators, carers and parents to understand how these platforms work, what they can be used for, and how the content can be used to bring harm to children. 

 

What is Saasyan doing? 

The safety and wellbeing of students is at the heart of everything we do. 

We have features built into our Assure online student safety software to help detect inappropriate AI usage, cyberbullying and grooming behaviours, sending alerts to a designated wellbeing contact if anything concerning is detected. 

  • Inspection of AI prompts: Assure monitors what students are inputting into AI platforms to detect sexually explicit or violent prompts. 
  • Safe image AI: sexually explicit images are detected by Assure, and relevant staff are notified by our wellbeing alert email. 
  • Chat, email and drive content inspection: Assure’s integration with Microsoft 365 and Google Workspace scans a student’s chats, emails and drive content for inappropriate images or text-based content. 
  • Natural language processing AI: this AI model analyses sentence structure and word usage, and can be used to detect cyberbullying. 
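As a simplified illustration of how prompt inspection works in general (this is not Assure’s actual model, and the watch-list terms below are hypothetical), a monitoring tool can be thought of as classifying each prompt a student enters against a set of risk indicators and raising an alert on a match:

```python
# Illustrative sketch only: a toy prompt-inspection check.
# Real student-safety products use trained NLP models rather than
# a simple keyword list; this merely shows the flag-and-alert idea.

RISKY_TERMS = {"undress", "nude", "explicit"}  # hypothetical watch-list

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt contains any watch-listed term."""
    words = set(prompt.lower().split())
    return bool(words & RISKY_TERMS)

# Prompts that match would trigger a wellbeing alert for staff review.
flagged = [p for p in ["draw a cat", "undress this photo"] if flag_prompt(p)]
```

In practice, a trained language model is far more robust than keyword matching, since it can pick up misspellings, slang and context that a fixed list would miss.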

These are just some of the many ways we are helping to mitigate the risks. To learn more about how Assure can help safeguard your students’ digital safety, please contact us. 

 

Resources: 

Kids Helpline

eSafety Commissioner Generative AI Position Statement

UNICEF Generative AI: Risks and Opportunities for Children

 

Report abuse: 

You can report instances of inappropriate behaviour towards children online to the ACCCE via the link below. 

Australian Centre to Counter Child Exploitation