Abstract
The rapid integration of Generative Artificial Intelligence (Gen AI) into Australian higher education has revolutionised pedagogical practices whilst simultaneously amplifying online harms, with profound implications for cybercrime prevention. This paper examines university strategies to combat AI-generated online harms (encompassing scams and fraud, online sexual violence, hate speech, radical content linked to violence, victim-survivor recovery, ransomware, misogyny, sextortion, human trafficking enablement, image-based abuse, and harms from pornography) as an escalating threat to faculty and students. AI-generated misinformation, often biased or fabricated, erodes academic trust and integrity, creating pathways for cybercrimes such as phishing and data breaches that compromise institutional security and personal safety. Although Australia has robust legislation, such as the Online Safety Act 2021, institutional responses to AI-related threats in higher education remain under-researched. This study employs qualitative content analysis to explore official documents, including policies and procedures from 37 public Australian universities, alongside government reports and academic literature. Thematic analysis, facilitated by MAXQDA software, investigates institutional approaches to minimising online harms through reducing AI-driven misinformation and assesses their effectiveness in preventing cybercrime. Preliminary findings suggest that academic policies prioritise integrity violations, such as AI-assisted plagiarism, yet offer limited specific measures against AI-enhanced threats such as online hate speech or sexual violence. This gap leaves institutions vulnerable to cyber threats that Gen AI intensifies, including sextortion and fraud enabled by synthetic media. The research bridges educational and cybersecurity discourses, offering a distinctive Australian perspective on reducing digital threats.
It contributes to ongoing discussions about ethical AI implementation by proposing actionable recommendations, such as enhanced AI literacy and tailored safety policies, to bolster digital resilience. Ultimately, this study delivers a university framework to counter misinformation and cybercrime, fostering safer, more equitable academic environments and addressing a critical nexus of technology, education, and security.
Original language | English |
---|---|
Publication status | Accepted/In press - 2025 |
Event | 25th Annual Conference of the European Society of Criminology, Athens, 03 Sept 2025 → 06 Sept 2025 |