
The Alarming Link Between AI-Generated Child Exploitation Material and Cyberbullying


Whether you like it or not, AI is well and truly making its way into everyday life. 

With AI no longer solely the domain of tech professionals, everyday people are using AI to write emails, create lesson plans and even generate recipes based on what’s left in the fridge. 

There is no denying that AI has amazing potential in both personal and professional spaces, whether it’s something as simple as saving time by automating tasks or as intricate as assisting in diagnosing medical conditions.  

Unfortunately, some forms of AI are not as helpful and pose significant risks to the safety and wellbeing of children. 

The office of the eSafety Commissioner, Julie Inman Grant, has received its first reports of children using AI to generate sexually explicit images of their classmates in order to bully them.

This has resulted in increased calls for AI companies to implement protections to reduce the chances of children encountering, generating, or being exploited to produce harmful content. 

What is Generative AI? 

Generative AI refers to types of AI that can create content such as videos, images, text and music. Popular examples include ChatGPT and DALL·E 2, both created by OpenAI. 

OpenAI has strict safety standards, with policies specifically stating that child sexual abuse material and other content that harms children is prohibited on its platforms.  

While it’s good that OpenAI is leading the way in terms of what is and isn’t acceptable on its platforms, there will always be other platforms with a less stringent approach, making it all too easy for child exploitation material to be created with just a few clicks. 

How is Generative AI Being Used in Cyberbullying? 

There are generative AI platforms that specialise in the production of sexually explicit imagery, and it’s these platforms where kids are creating explicit images of other kids as a means of cyberbullying them. 

Kids have easy access to these platforms, with some of them only asking users to input their date of birth as a means of age verification, which can easily be manipulated. 

Not only is this kind of bullying incredibly distressing and traumatising for the victim, but it could also land the perpetrator in serious legal trouble for possessing child abuse material, even if it is the product of generative AI. 

Current Australian national law states that it is illegal to make, share, request, access or possess images that show a person under 18 in a sexual way, including images, videos, drawings, cartoons and images that have been digitally altered to make a person look younger. 

AI-generated images could easily fall within this definition, and while the use of AI in these situations is still a legal grey area, the perpetrator could still face a fine, a criminal record, jail time or placement on the Sex Offenders Register. 

How Do We Stop This from Happening? 

Educating kids is one of the best ways to discourage them from this kind of bullying. 

It is vital that parents and guardians have open and honest conversations with kids about the severe implications this form of bullying can have on victims and perpetrators alike, as well as the potential legal risks involved. 
In addition, to stop kids accessing these sites, Ms Inman Grant states that tech companies are equally responsible for making their products safe for the community, starting with stricter age verification for sites that could put children at risk of abuse. 

Earlier this year there were recommendations put forward by the Commissioner regarding age verification for adult content, but with the federal government waiting for a new industry code to be developed before making further decisions, education is key. 

AI Can Be Used to Prevent Bullying 

It is important to note that AI isn’t in itself a bad thing. 

Here at Saasyan, we use several different kinds of AI in our Assure software to help pick up on concerning behaviours. 

We use a combination of image object detection and safe-image AI to detect sexually explicit images and images of weapons on school drives, as well as natural language processing and fuzzy logic to pick up on bullying behaviours, self-harm intent and concerning Google searches. 

All of this combines to make sure alerts on concerning behaviours are sent to the right people at the right time, so they can intervene early and offer students help and guidance. 

The widespread, everyday use of AI is still relatively new, meaning there will undoubtedly be more instances of cyberbullying and child exploitation we need to combat. 

We all have a role to play when it comes to keeping children safe online, and keeping ourselves educated on current trends and events puts us in a better place to educate and support the children and students in our care.