
Ethical Use of AI, 24/7 Monitoring and Just-In-Time Reporting for Student Safety With Ally Eddy, Our Lady of Mercy College


In this episode of Saasyan's Wellbeing Wednesday Series, Ally Eddy, Director of ICT at Our Lady of Mercy College, shares his perspective on the ethical use of AI, 24/7 monitoring and just-in-time reporting for student safety.



As an IT leader, how would you describe the responsibilities you have when it comes to ensuring the online safety of your students?

I feel that when we're providing technology, we need to make sure everything's operating in a safe environment. It's not just handing out the technology and telling them to go play; we're providing that environment.

We need to make sure it's as safe as possible for them, give them some guidance, and keep them on the right path, so that hopefully they learn those skills as they progress through school: what's good, what's bad, and what the right things to do are.

We also mandate and enforce that all school devices tunnel back through our network, through our firewalls, and therefore through Saasyan reporting, and that's 24/7.


How do you empower your non-technical staff to be more involved in keeping students safe online?

We have just-in-time reporting delivered to the pastoral care team, who review it and take action as necessary.

So we're looking for things which could be a threat, and we're looking for behaviour which might be considered self-harm.

There are many different aspects to this, and having a product which can take all of that and report back to the people who have day-to-day responsibility for the wellbeing of our students is fantastic.


In your opinion, where does the school’s duty of care begin and end when it comes to the online activity of your students?

This one's a really easy one. Ours is a fully managed environment. We supply and manage all of the student and staff devices.

And for us it is 24/7: those student devices always connect back and run through the Palo Alto firewalls, which have continuous reporting connected through Saasyan. So it's always.


How do you balance the risks associated with increased reliance on technology for learning with the benefits that it can bring?

It's a combination of due diligence when looking at technology abuse, along with working with the teaching and learning teams to educate and inform the frontline staff and the students.


What is your approach and philosophy when it comes to the use of AI for learning?

We're embracing it, and we're recognising the pitfalls.

We're guiding them on how to use it effectively. If you're looking at a result, just because it comes back with something doesn't mean it's true.

OLMC developed an ethical use of AI policy, and this is used across the entire school. Everyone is routinely reminded that you can't necessarily trust the results that technology returns, or, in the case of AI, what it generates.

It's important that we develop these critical thinking skills so we can appropriately question what we're seeing on our screens. Where AI can be of immense use, for example, is in automating those analytic tasks to surface the deeper information that we haven't seen before.

We still have to process and interpret it, but with that deeper exposure to data and trends we can make better decisions going forward.

So: good use, be sceptical, research the results, provide references... and really be mindful that it's not just about the end result. Just like in maths, the emphasis is now moving towards "show your work". That critical thinking skill is really, really important.