EISAU

EISAU RECOGNISES

Across Australia · Armstrong · Cash McInnes · Coloursplash · Cool Times · CPC · Cross Point · Fleetwood · Full Colour · Green Recycling · Master Dry Cleaners · Meerkat · Oz Sports · Safe Principals · Sharp · South Coast · SVA · Toshiba · Unique Building · Wingaroo · World Projects · Yarra Shade

Contact Us

22/728 Pacific Highway
Gordon NSW 2072

Phone: 0499 221 910

Powered by Schoolzine

Schoolzine Pty Ltd

For more information
contact Schoolzine

www.schoolzine.com

August 2025

  • School Children and the Dangers of Deepfakes
  • Psychosocial Risks in Schools: A Whole-Community Concern—And a Legal One

School Children and the Dangers of Deepfakes

In this digital age of airbrushed photos, celebrity scandals, and user-friendly apps, it’s becoming more difficult than ever to disentangle fact from fiction. One major reason is the rise of the deepfake.

What Are Deepfakes? Why Create Them?

“Deepfake” combines “deep learning” with “fake.” Artificial Intelligence (AI) refers to computer systems which mimic our cognitive abilities, such as

  • problem-solving,
  • learning and applying new information, and
  • reasoning to achieve specific goals.

This is what is meant by terms like deep learning, machine learning, and natural language processing. Natural language processing refers to the techniques AI models use to process language data in order to imitate and recreate human voices, natural speech patterns and mannerisms.

Over time, AI programs and models gradually refine their performance and output based on the vast amounts of data they process.

These datasets range from photos of magicians and recordings of open university lectures to porn created by paedophiles, and everything in between. The AI models which create deepfakes trawl the web for certain types of content according to parameters defined by a particular app’s developers. They find what they are looking for and incorporate it into their output.

AI models process a wide range of sources and reflect them back at us. In a sense, generative AI is the ultimate mirror for society. Alarmingly, many AI porn apps could theoretically create synthetic porn which reproduces the features of actual victims.

We are in the Wild West phase of AI – we are only beginning to understand the educational and creative benefits AI generators offer, as well as the threats they pose to privacy, confidentiality, and people’s well-being. Consequently, legal frameworks, precedents, and paradigms around the world require updating to safeguard everyone’s rights.

Deepfakes are everywhere because AI apps can manipulate different media types to generate new creations based on user-specified elements, or can alter elements of authentic media such as photos, video footage, or audio recordings. Many AI apps are easy to use and don’t require age verification.

Last year, countless sexually explicit deepfaked images of pop artist Taylor Swift went viral on social media. Although fans reported them, one image was viewed 47 million times before the platform X responded by removing it.

It all began in 2017, when a Reddit user fabricated sexual images of female celebrities by superimposing his fantasies onto the bodies of porn stars. By 2019, the number of deepfakes on the Internet had risen to 14,678. By 2023, the number of deepfake videos had increased by 900%, with social platforms hosting more than 500,000 audiovisual deepfakes.

Benign examples of deepfake use include footage in Hollywood blockbusters or use in satire or comedy routines, where deepfakes express non-defamatory opinions which aren’t detrimental to the target’s reputation or self-esteem. It’s down to intent, purpose, the context in which the deepfake in question is created and received, and the consequences it has for those it represents.

Malicious Deepfakes: The School Bully’s New Weapon of Choice

Put this tech into the hands of immature teens who watch online porn for sex education and relationship tips, and you get revenge porn which can be shared ad infinitum and stored on devices to be viewed and manipulated at leisure. Once it’s on the World Wide Web, control is out of your hands…forever!

Enter apps like Nudify, the cyberbully’s weapon of choice, which makes stripping the clothes off any classmate or female teacher a snap. All you need is a photo. Of course, the app will ask you to verify your age, but who cares? Men and boys can be victims too, but statistics show that this objectifying content typically targets women and girls and is usually created by men and boys. Nudify was finally shut down in 2025, but by then it was too late for many victims.

Similar apps continue to pop up. A Google search for “free deep nudes” gave me 9,830 hits at the time of writing.

A 2019 study conducted with participants from the UK, New Zealand, and Australia found that 14.1% of Australians aged between 16 and 84 had been targeted by someone threatening to create, share, or distribute sexualised images of them, or actually doing so. Sensity AI, a detection company monitoring online sexualised deepfakes, has consistently reported that around 90% of non-consensual video content features women. People with disabilities, Indigenous Australians, and those from the LGBTIQ+ community are frequently targeted. Finding creators and distributors can be tricky because many are anonymous or post under a pseudonym.

More recently, in 2023, Professor Nicola Henry and her research team surveyed over 16,000 respondents in 10 countries to find out more about the prevalence of “AI-generated image-based sexual abuse.” Concerning statistics indicated that:

  • 7% of Australians in the sample had been an adult victim of deepfake porn - the highest rate among the countries surveyed.
  • 4% of Australians surveyed indicated they had created, shared, or threatened to share a deepfake photo or video of someone without their consent - second only to their American counterparts.
  • Australian men were more likely than women to report having been victims of such abuse.
  • Australian men were also more likely than women to create or engage with it, and to regard creating, sharing, or viewing it as less problematic.

Whilst many nations are making significant progress towards robust legislation to curb this trend, a few kinks need to be ironed out:

  • Although the first deepfake porn appeared in 2017, as of 2024 the US still had no national law criminalising either the creation or distribution of sexualised deepfake content without consent.
  • However, a few US states – Hawaii, Louisiana, and Texas - have now made it illegal to create or distribute deepfake porn, while others have criminalised only the nonconsensual distribution of such material.
  • Australia finds itself in a similar quandary – in 2024, in all states and territories apart from Tasmania, the distribution or threat to distribute deepfakes of adults without consent is a crime.
  • However, just last year the nonconsensual production or creation of deepfake porn was not a crime anywhere in Australia except in Victoria.

These gaps in the legal framework mean that malicious actors can take advantage of both young people and adults. This occurs more often than many teachers, parents and families would like to think.

Victim Profiles: The Damage Done

Take the 2024 case of Bacchus Marsh Grammar School, where extremely graphic deepfakes of 50 girls in Years 9 to 12 were created and circulated on Instagram and Snapchat. Images of the female students were allegedly superimposed onto pornography using a “nudify”-type app. These images were taken from the victims’ social media profiles by an ex-student.

Unsurprisingly, this callous exploitation negatively affects the self-esteem of those targeted. Francesca Mani, the 17-year-old targeted in America’s most infamous case involving pornographic deepfakes of private citizens had photos of her naked body shared in chat groups in 2023. The experience made her feel helpless, depressed, and downcast.

Educating and Encouraging Schoolkids to Create and Share Wisely

Because this trend is so recent, more research is needed to understand it. Some experts believe the root cause is the unhealthy relationship teens can develop with pornography, which many first access at around age 14. Kids often feel that sharing risky content will get them noticed by the cool kids. It’s a means of increasing their status at school.

The little research we do have suggests that individuals with psychopathic tendencies are more likely to create and share deepfakes as a way of getting back at a love interest for refusing to be in a relationship with them.

In a 2022 study in the journal Computers in Human Behavior, Dean Fido and his colleagues also found that respondents generally sympathised with female rather than male victims, and that men were less likely to view deepfake porn as concerning if it depicted celebrities as opposed to private citizens.

Research indicates that many of us are overconfident when detecting deepfakes. Ironically, the more you believe you have the ability to do so, the more mistakes you are likely to make. Scans of brain activity showed that even if participants didn’t consciously assert that a particular image was a deepfake, their brain seemed to recognise that something was strange about it. More research into how the subconscious recognises deepfakes and whether this ability can be strengthened by training would be valuable.

What’s the solution for now? Some schools opt for restorative justice. This entails holding the perpetrator accountable by getting them to meet with their victim(s), give a sincere apology and make amends in other ways. Parents, teachers, schools and wider communities can support victims by:

  • Believing the victim over the perpetrator – it’s important not to excuse the creation of pornographic deepfakes with “Boys will be boys.”
  • Offering or supporting their child (if victimised) by helping them access counselling.
  • Helping victims take down deepfakes from platforms. Asking any recipients of the shared content to delete it from phones, destroy original copies, and tell others to do likewise.
  • Reporting repeated attempts to create and/or share pornographic deepfakes to police and the Australian eSafety Commissioner, who can enforce takedown orders.
  • Complying with requests to hand over digital devices even if your child is the perpetrator.
  • Keeping copies of deepfakes received if police become involved, as these may be used as evidence if things escalate. Otherwise, advise your child to delete any deepfake porn received from friends or strangers. Help them report repeated occurrences.
  • Speaking regularly and openly about what content your children send to and receive from others.
  • Discussing how they would support a friend who was targeted. Do they know the person who sent the content? If so, is it appropriate to speak to the creator and sender directly, or to a trusted adult?
  • Advising kids to refrain from sending nude pictures to anyone, even if it is someone they think they can trust.
  • Asking for children’s consent before posting their image online.
  • Asking children to think before they post anything on social media. Make them aware of how an apparently harmless bit of fun can ruin someone’s reputation, and self-esteem.
  • Holding your child accountable if they are a perpetrator, whilst making it clear that you will still be there for them. Be present at meetings as a support person whether your child is a victim or perpetrator.
  • Finally, help kids critically examine the media they consume. Would Batman really advise people to smoke weed? Do you notice anything off about a suspected deepfake? Why would someone create content like this?

Spotting and Dealing with The Deepfake!

Because many apps are still not as refined as they will eventually become, most deepfakes still show tell-tale signs of what they really are. Pay particular attention to the following:

  • Does the way the person looks, acts, or sounds ring true to the person you know in real life?
  • Are there any glitches in the footage or audio?
  • Is the lip syncing off, or are fingers or other body parts in the wrong place?
  • Does the shadowing appear off?
  • Do the audio and video speeds match?
  • Is the footage pixelated?
  • Is there excessive blinking?
  • Are facial expressions stiff or exaggerated?

Teachers armed with knowledge of how deepfake tech works and what to look out for when suspected deepfake incidents occur play an important role. They can pass this on to students in the form of age-appropriate, relevant content drawn from real examples in the media.

 Establishing protocols to deal with these incidents means that issues can get resolved in a timely manner before they escalate. Teach students about the importance of consent when it comes to creating or posting content of others. Creating, producing, or distributing pornographic deepfakes makes you a bully.

Families are called to support teachers by taking such incidents seriously. Educate your children by talking about the impact of such events and where they can get help if necessary.

Perpetrators should be required to take down all offending content and have their activity monitored by police, and devices seized if necessary.

Legislation also needs to clearly determine to what extent social media platforms can be held liable for continuing to host deepfake porn. Platforms that fail to comply with new legislation shouldn’t get a completely free pass, although those that act promptly by taking down offending content or terminating user accounts need not be held to the same level of accountability as creators and distributors.

Schools, families, and communities need to work together to create a culture of respect and show support for victims. Perpetrators should be dealt with justly and fairly and accept that there are consequences for their actions. If legal frameworks, schools, and communities unite to reinforce the message that creating and distributing pornographic deepfakes is not acceptable, positive change can occur.

Bibliography

“Deepfakes Research Agenda.” Opportunity Labs Foundation Inc. 2025.

“End Deepfakes [:] An Opportunity Labs Initiative.” Opportunity Labs Foundation Inc. Accessed 25 July 2025.

“Sending nudes and sexting.” Australian Government eSafety Commissioner. Last updated 6 January 2025.

“Tips for Parents: Deepfakes, Synthetic Pornography, & Virtual Child Sexual Abuse Material.” American Academy of Paediatrics. Social Media and Youth Mental Health Centre of Excellence. Last updated 13 March 2025.

Dolan, Eric W. “Taylor Swift deepfakes: Psychology reveals links to psychopathy and lower cognitive ability.” PsyPost. 26 January 2024.

Dolan, Eric W. “New research links deepfake pornography to psychopathic tendencies.” PsyPost. 22 May 2022.

Flynn, Asher. “Legal loopholes don’t help victims of sexualised deepfakes abuse.” Lens. Monash University [Magazine]. 18 April 2024.

Henry, Nicola. “What to do if you, or someone you know, is targeted with deepfake porn or AI nudes.” The Conversation. 12 June 2024.

Higgins, Daryl, and Gabrielle Hunt. “There are reports some students are making sexual moaning noises at school. Here’s how parents and teachers can respond.” The Conversation. 31 January 2024.

Hunt, Gabrielle, and Daryl Higgins. “Deepfake AI pornography is becoming more common – what can parents and schools do to prevent it?” The Conversation. 12 June 2024.

Landau, Shira. “Teaching Children About Deepfake Technologies.” eLearning Industry. 27 November 2021.

Lavoipierre, Ange. “The world's biggest AI models were trained using images of Australian kids, and their families had no idea.” ABC News. 3 July 2024. The 7:30 Report.

Mack, David. “This PSA About Fake News From Barack Obama Is Not What It Appears.” BuzzFeed News. 18 April 2018.

Obadia, Simone. “Survivor Safety: Deepfakes and the Negative Impacts of AI Technology.” MCASA [Maryland Coalition Against Sexual Assault] [Newsletter]. Frontline Spring 2024 issue. 8 May 2024.

Paterson, Jeannie Marie. “‘Picture to burn’: The law probably won’t protect Taylor (or other women) from deepfakes.” Pursuit [Magazine]. The University of Melbourne. 8 February 2024.

Petrenko, Viacheslav. “Deepfake Technology: How This Cutting-Edge AI Tech Works.” Litslink StartUps Laboratory Blog. 31 January 2025.

Walker, Kalie. “AI ‘Deepfakes’: A Disturbing Trend in School Cyberbullying.” NEAToday. [National Education Association Today]. 10 April 2025.

Whitson, Rhiana. “Principals say parents need to be vigilant as explicit AI deepfakes become more easily accessible to students.” ABC News The 7:30 Report. 25 June 2024.

Williams, Kaylee. “Minors Are On the Frontlines of the Sexual Deepfake Epidemic — Here’s Why That’s a Problem.” Tech Policy Press. 10 October 2024.

Author

Dr Estelle Hélène Borrey
PhD in European Languages and Cultures

Psychosocial Risks in Schools:

A Whole-Community Concern—And a Legal One

Is your school confident it’s meeting its legal duty of care around mental health?

Psychosocial risks aren’t just “workplace” issues. They affect the whole school community—staff, students, parents, and even visitors. When left unaddressed, these risks can result in burnout, behavioural issues, poor academic outcomes, and even legal exposure.

That’s why Safe Principals has developed CalmSchool™, a WHS-aligned program designed specifically for schools. It helps school leaders identify, manage, and reduce psychosocial risks in a structured way that aligns with their duties under the Work Health and Safety Act. Our aim is to give Principals confidence that, if challenged, they can demonstrate they’ve taken reasonable and practicable steps to protect staff wellbeing.

Why This Matters

As noted above, psychosocial risks affect every part of the school community, from principals and teachers to students, parents, and even visitors. When these risks go unaddressed, the outcomes are real: burnout, absenteeism, student disengagement, strained relationships, and reduced learning outcomes.

Evidence from international and Australian research makes it clear: schools are critical environments for managing mental health risks, promoting wellbeing, and protecting those most vulnerable.

What Are Psychosocial Risks?

Psychosocial risks refer to aspects of the school environment, both social and organisational, that have the potential to harm students’ or staff members’ psychological or emotional wellbeing. These risks don’t stem from individual weakness, but from systemic conditions that, if left unmanaged, can lead to anxiety, burnout, disengagement, and long-term mental health issues. Understanding and addressing these hazards is essential for creating safe, inclusive, and high-performing schools.

Psychosocial hazards include anything in the school environment that can negatively impact mental health or wellbeing. This can include:

  • High workloads, role conflict, or lack of control
  • Bullying, social exclusion, or classroom disruptions
  • Exposure to trauma, grief, or community stress
  • Limited support for students with additional needs
  • Poor consultation or communication between staff and leadership
  • Job insecurity or unstable school staffing
  • Mismatch between job demands and available resources, for example, teachers expected to manage complex behaviour without adequate training or support
  • Ambiguous expectations or inconsistent school policies, leading to confusion or stress among staff and students
  • Unaddressed emotional distress or mental illness, including the stigma surrounding help-seeking in school settings
  • Poor work-life balance or lack of time for recovery from emotionally demanding situations
  • Inequity in disciplinary responses, where students from minority or neurodivergent backgrounds experience harsher treatment or exclusion

These stressors don’t act alone. They cluster, especially in vulnerable populations. Adolescents often face multiple risks simultaneously, leading to cumulative impacts on mental health, behaviour, and academic outcomes.

Who Is Affected?

This Is a Whole-Community Concern:

  • Students may face chronic stress, sleep problems, social withdrawal, or academic disengagement.
    Prolonged exposure to psychosocial risks such as bullying, lack of safety, or academic pressure can impair cognitive function, emotional regulation, and social development. These impacts often manifest in increased absenteeism, behavioural issues, and declining academic performance, especially among vulnerable students.
  • Staff often experience burnout, compassion fatigue, and reduced job satisfaction.
    Teachers and support staff may feel overwhelmed by high workloads, unclear expectations, or inadequate support when dealing with complex student needs. These pressures can lead to presenteeism, high turnover rates, and a drop in overall morale, undermining the learning environment.
  • Parents and families may be affected by the spillover effects of school-related distress or conflict.
    When children experience emotional distress at school, these challenges often extend into the home. Parents may feel powerless or frustrated if communication with schools is limited, or if support systems are unclear. Family stress can also increase when parents must manage school avoidance, behavioural changes, or unresolved bullying incidents.
  • Leaders and boards face growing legal duties under WHS law to identify and mitigate these risks, including those linked to mental health. Under the Work Health and Safety Act, principals and leadership teams are legally required to take “reasonably practicable” steps to ensure psychosocial safety. This includes implementing risk assessments, consulting staff, and introducing system-level controls to prevent harm. Failure to act can result in reputational damage, legal consequences, and declining staff retention.

What Does the Law Say?

Under the Model Work Health and Safety Act (2011), schools, as workplaces, must ensure the psychological and physical safety of staff and others, so far as is reasonably practicable.

In 2022, psychosocial risks were formally recognised as workplace hazards under updated WHS regulations. This means school leaders are now legally required to:

  • Identify psychosocial hazards (e.g., poor support, excessive demands, traumatic content)
  • Assess their risk level
  • Implement effective control measures
  • Monitor and review those controls regularly

What Can Schools Do?

While the risks are complex, prevention is possible. Based on current evidence, we recommend schools take a layered approach:

  • Policy and Legal Awareness: Update WHS policies to include psychosocial risks. Train leaders in legal duties.
  • Staff Culture and Support: Establish peer support teams, reduce workload pressure, improve consultation.
  • Student-Focused Measures: Implement evidence-based SEL programs, trauma-informed practices, and positive behaviour supports.
  • Whole-School Environment: Improve classroom acoustics, regulate exposure to environmental stressors, and review supervision practices.
  • Data and Reflection: Use student and staff wellbeing surveys to identify hotspots. Regularly review actions.

Ready to protect your people and take steps toward fulfilling your WHS obligations?

Safe Principals is working with schools to implement CalmSchool™ – a tailored program that provides:

  • Psychosocial hazard assessments through extensive consultation
  • WHS-aligned psychosocial Risk Register
  • Training for leadership and staff in personal stress management techniques

Contact Safe Principals at info@safeprincipals.com.au to book a free 20-minute call. We’ll walk you through the CalmSchool program, how it was developed, who it was built for, and the practical benefits schools are already seeing. It’s an easy way to explore whether it’s the right fit for your school community.

Further Reading & Resources

Safe Work Australia. Managing psychosocial hazards at work: Code of Practice. Safe Work Australia, 2022.

Safe Work Australia. Model WHS Act. Safe Work Australia, 2011.

Bates, Kevin. "Australian Education Union." (2023).

Mubita, Kaiko, et al. "A Proposed Holistic Approach to Management of Psychosocial Hazards in School Environments: A Literature Review." European Journal of Arts, Humanities and Social Sciences 1.3 (2024): 94-103.

Lundqvist, Carolina, David P. Schary, Emelie Eklöf, Sofia Zand, and Jenny Jacobsson. "Elite lean athletes at sports high schools face multiple risks for mental health concerns and are in need of psychosocial support." PLoS ONE 18, no. 4 (2023): e0284725.

Lee, Yun-Tse, et al. "Prevalence and psychosocial risk factors associated with current cigarette smoking and hazardous alcohol drinking among adolescents in Taiwan." Journal of the Formosan Medical Association 120.1 (2021): 265-274.

Mbog, Séverin Mbog, et al. "Management of Psychosocial Risks in the Higher Schools of the University of Douala-Cameroon." Open Journal of Social Sciences 10.12 (2022): 281-289.

Pacheco, Emily-Marie, et al. "Integrating psychosocial and WASH school interventions to build disaster resilience." International Journal of Disaster Risk Reduction 65 (2021): 102520.

Stuart, Heather. "Psychosocial risk clustering in high school students." Social psychiatry and psychiatric epidemiology 41.6 (2006): 498-507.

Vance, Stanley Ray, et al. "Mental health and psychosocial risk and protective factors among Black and Latinx transgender youth compared with peers." JAMA network open 4.3 (2021): e213256-e213256.

Author

Parisa Moshashaei
WHS Rescue / Safe Principals
PhD in Occupational Health and Safety
