Ten persistent academic integrity myths
Mark A. Bassett, Charles Sturt University; Kane Murdoch, Macquarie University
In this article, we describe ten of the most persistent myths about academic integrity, exposing their flaws and the risks they pose to students, institutions, and the broader credibility of higher education.
1. Academic integrity must be ensured at all costs.
Taken literally, this myth would mean prioritising detection and punishment above all else, potentially leading to policies that assume guilt, shift the burden of proof onto students, or disregard procedural fairness. When due process is ignored, it signals that evidence matters less than authority, and that institutional control is more important than fairness. The Higher Education Standards Framework (HESF) does not support an approach that prioritises integrity at the expense of student rights. Institutions that attempt to ensure integrity at all costs would be in direct conflict with multiple HESF standards.
2. If the assessment is secure, there’s no problem.
Security measures may reduce the likelihood of misconduct, but they do nothing to address the more fundamental issue of whether an assessment actually measures what it is intended to measure. If an exam or assignment does not align with the learning outcomes, then its security is irrelevant. Worse, the emphasis on security can shift students towards extrinsic motivation, treating assessments as something to be survived rather than an opportunity to demonstrate learning.
3. Unsupervised assessments can be secure.
If you’re not in the room, you have no idea who or what did the work or what external assistance was provided. Some institutions attempt to mitigate this risk through digital proctoring, keystroke analysis, or behavioural tracking. However, these approaches introduce their own problems. Ultimately, if an assessment is designed in a way that assumes remote security measures can fully replace direct oversight, it’s vulnerable to misconduct.
4. Contract cheating is a minor issue that our students don't engage in.
The belief that contract cheating is only a problem at other universities persists because institutions are often reluctant to acknowledge it and lack the capability to detect it. Low detection rates do not mean contract cheating is rare; they simply indicate that institutions are failing to detect it. Current research indicates that 10-15% of all students in Australia engage in contract cheating. Integrity investigators have proven that contract cheating teams have completed entire degrees to a passing standard on behalf of students. When institutions assume their students do not engage in contract cheating, they inadvertently create an environment where the risks seem minimal and the rewards remain high, ensuring that contract cheating will persist.
5. AI ‘detection’ has a place in education.
Much has been said and written about the use of AI detectors and their associated problems. Using them as an indicator of AI use is inadvisable, to say the least. The 'we only use it as a red flag which we then further investigate' approach is also flawed, as AI ‘detection’ tools are too unreliable to serve even as preliminary indicators.
6. A single mitigation tactic is sufficient.
If a single intervention could prevent academic misconduct, the problem would have been solved long ago. The Swiss Cheese Model recognises that every strategy has weaknesses, and when any one strategy is used alone, those weaknesses remain exposed. The best response is multiple, overlapping deterrents. However, simply stacking interventions on top of each other does not create a robust system. Institutions need a structured approach that responds to different types of misconduct in proportion to the risk. The Educational Integrity Enforcement Pyramid recognises that most students intend to do the right thing, some require guidance, and a small number are determined to cheat.
7. Process tracking is reliable evidence of authorship.
Institutions often encourage students to save draft versions of their work as proof of authorship, with some using Track Changes or document metadata as evidence that an assignment was developed over time. However, AI tools have now made this strategy unreliable. OpenAI's Operator and Deep Research models can generate, edit, and iteratively refine a document over time, mimicking a human drafting process. The resulting revision history can be indistinguishable from that of a human writer.
8. You can make your unsupervised assessments ‘AI-resilient’ or ‘AI proof’.
As previously argued here, whether GenAI can think or not has nothing to do with what it can output. Some GenAI software can output well-reasoned case studies, solve applied problem-solving tasks, write personalised or lived experience accounts, and reference perfectly. GenAI is ‘frighteningly close’ to producing a passing PhD dissertation.
Stress-testing unsupervised assessments against GenAI to ‘confirm their vulnerabilities’ assumes that if a single person using a single model cannot get GenAI to succeed, then no one else will either. This is naive at best and negligent at worst. Declaring an assessment ‘AI-resilient’ based on limited testing is not just flawed reasoning, it is institutional complacency masquerading as security.
9. AI has made other forms of misconduct obsolete.
Since November 2022, AI has simultaneously accelerated misconduct and steadily diminished the plausible deniability that educators once had regarding students, assessment, and integrity. Unfortunately, AI has only added to the range of options available to students to avoid learning. Students will adopt different approaches based on factors such as urgency, the need for assistance to reach passing grades, the availability of less risky alternatives, and the likelihood of detection. Following the release of ChatGPT, universities continue to report cases of plagiarism, collusion, contract cheating, and exam misconduct, all while attempting to formulate effective responses to the illegitimate use of generative AI.
The rise of GenAI has further complicated contract cheating, as some contract cheating providers now use GenAI to create custom essays, provide real-time exam answers, and evade detection software. Research has shown that GenAI has made contract cheating providers more profitable and harder to combat. Ironically, some students have turned to contract cheating to avoid getting caught using GenAI. Instead of risking a ChatGPT-written assessment, they choose a contract cheating service that claims a real person will write them an original, plagiarism- and AI-free assessment.
10. The academic integrity challenges posed by GenAI can be policed or invigilated away.
If more policing and invigilation were the solution, TEQSA would never have sent out its RFI last year. The regulator would have simply mandated that institutions detect GenAI (using humans and software) and supervise every existing assessment. Putting AI ‘detectors’ aside (we strongly suggest you do the same), the true absurdity lies in the belief that assessment structures can remain unchanged, with increased surveillance as the sole adjustment.
Most students are already over-assessed, buried under a constant stream of graded tasks that institutions appear unwilling to reimagine. If every current assessment had to be supervised, institutions would need an army of invigilators, an exponential increase in exam space and budget, and a willingness to sacrifice flexibility, accessibility, and any meaningful engagement with deep learning. Instead of locking down every assessment, institutions should prioritise those that genuinely measure meaningful learning.
Associate Professor Mark A. Bassett is Director, Academic Quality and Standards, and Academic Lead (Artificial Intelligence) at Charles Sturt University.
Kane Murdoch is Head, Complaints, Appeals, and Misconduct at Macquarie University and author of Guerilla Warfare, a blog about cheating, academic integrity and the future of education.