Today, generative AI applications (GenAI) are freely available to all higher education students and are being rapidly integrated into a wide range of software and hardware. Australia’s HE regulator recently announced that it would be “shifting from an educative-led approach to a regulatory-led approach” to this situation, likely by the beginning of 2026:
“TEQSA expects providers to be able to demonstrate how they are managing these risks to assessment integrity and associated risks to compliance with the Threshold Standards. TEQSA will be recalibrating our regulatory processes for provider registration and course accreditation to reflect this.”
There are a range of concerns with the shift to "secure/open" assessment (the "two-lane approach"), including threats to equity, limited validity of "secure" tasks, poor pedagogy and erosion of academic freedom. But if not this, what will we do about the GenAI assessment integrity crisis?
I and others have outlined various assessment security measures that are not the answer:
Locking down assessment conditions (timed, in-person, invigilated methods like exams) is inequitable and exclusionary for many students.
AI detection software like Turnitin is inaccurate, discriminatory and breeds deep distrust, as Mark A. Bassett and colleagues show.
Process tracking technologies like Cadmus are corrosive to educational relationships (Marc Watkins), and they cannot verify authorship (Mark A. Bassett and Kane Murdoch).
AI-use scales, like the AI Assessment Scale, are assessment design tools, not assessment security tools, and cannot be used to enforce controlled use (Leon Furze).
Banning generative AI use is unenforceable in “open” assessments (Danny Liu and Adam Bridgeman), and, as James O’Sullivan argues, in “secure” assessments as well.
We have a lot of problems here, and we need to agree on what they actually are if we’re going to recognise a solution when we see it.
So what are the problems?
Here are some things I am seeing across the sector, from my vantage point as a teaching academic, PhD candidate, learning designer and digital learning consultant.
Everyone else is using it, and nobody’s happy about that
A vast but unknown number of university students are using GenAI to study and to produce assessment work. Whether or not this is “cheating” is a matter of hot debate amongst my colleagues across the sector. Many academics are also using GenAI to design assessments, mark student work, and produce other academic outputs such as research.
We’re seeing a silent standoff: teachers don’t want their students using GenAI in assessments, and students don’t want their teachers using GenAI to mark those assessments, yet many people in both camps feel their own use is acceptable.
University doesn’t seem to mean what it used to
The overwhelming majority (91%) of university students in Australia are in paid work: 53% part time and 38% full time. Leaving aside caregiving and other significant responsibilities, students are juggling a lot, and pressure is high.
University qualifications are awarded to students solely on the basis of passing assessments across a program of study, and many teachers feel pressured to pass all students, even those they suspect of cheating. Is this appropriate? Is a “pass” enough? Is passing assessments the only criterion that should be met to get a degree? If so, what should the “pass” conditions be?
Our educational relationships have broken down
In the current massified higher education system, most undergraduate programs have such high enrolments that many teaching academics know little more about their students than their legal names — and sometimes not even that.
Passing off somebody else’s work as your own under university assessment conditions has been not only possible but quite easy for decades, thanks to freely available internet content, essay mills and other contract cheating options.
Assessment security is inherently adversarial and punitive, focused on catching and punishing students who have done something wrong. It’s not the same thing as academic integrity (which is educative and values-driven) or assessment integrity, the term used in TEQSA’s messaging. It’s becoming increasingly unclear what it is we’re trying to defend, and whether it exists at all — now, previously, or in any version of the future.
What solutions do we have?
With these factors in mind, it might seem as though any action we take will force us to compromise on some value or other. Practices of verification push against student-teacher trust. Validity pushes against fairness. Money pushes against everything.
But if we truly believe in honouring the values of education, there exist a range of pedagogically-informed, equitable and practical strategies for improving the validity of higher education assessments in this context.
Here I offer three proposals for consideration by universities seeking meaningful assessment reform. They are not all dependent on one another, but the third is heavily informed by the values of the other two — so while you don’t have to read them in order, you will see connections between them all:
PROPOSAL 1: Revise assessment standards to re-centre the purpose of each task and enable teachers to evaluate each piece of student work in a way that fulfils that purpose.
PROPOSAL 2: Foster long-term educational relationships by creating conditions for teachers to sit beside students throughout their learning and develop a picture, not only of their achievements, but of their journeys towards them.
PROPOSAL 3: Decouple education from qualifications, acknowledging that each plays a different role in a person’s life: one is to make meaning, and the other is to verify competence.
Please explore each of the proposals above — or just the one that interests you. They are each incomplete and speculative, but driven by a desire to shift this conversation from securing assessment conditions to the core mission of education.
Miriam Reynoldson is a PhD candidate at RMIT University’s Social Equity Research Centre. Her research explores the value of learning in times of digital ubiquity.
