Why banning the banning of AI in assessment is all about increasing trust and agency
Danny Liu and Adam Bridgeman, The University of Sydney
Recently, two compelling pieces have captured the attention of academics, the general public, and many commentators on social media: ‘Everyone Is Cheating Their Way Through College’ and ‘The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It’. These well-timed articles almost pit students against their educators in a combative dance of integrity, workload, trust, and humanity. Whilst this us-versus-them stance is unhelpful, these pieces do point out an important elephant that has been romping around higher education since November 2022: that there are non-humans demonstrating (increasingly impressive) human capabilities in our courses.
The futurists tell us to look for signals – things that might point to something bigger down the track. For example, that employers and graduates are increasingly questioning the value of our degrees. Or that public confidence in universities has been declining. Or that some firms are taking the much-decried move of hiring artificial intelligence (AI) instead of human labour.
All these signals point to core issues around the increasingly questionable value of a higher education award – both in terms of integrity and relevance. If we cannot affirm with integrity that those walking across the graduation stage have the capabilities we say they have, then we are serving neither the community nor employers, and may in fact be causing delayed harms. If we cannot grow students into citizens and leaders who can engage responsibly and effectively with contemporary technologies (including AI), then we are serving neither society nor our students well.
AI and education: Questions of trust and agency
One might rightly say these issues have always been challenges for higher education. Yes – but not at this scale, not with this rapidity of onset, and not with this level of unpredictability. Generative AI’s well-documented ability to complete a range of assessments to high standards, without reliable detection, means that many more students than before are potentially getting through their courses without building the capabilities that they need to succeed in the future. It may not be long until we see headlines such as ‘Markets plunge as CFO admits using AI for MBA coursework’, ‘My therapist’s degree was written by AI’, or ‘Patients hospitalised after psychologist’s fraud exposed’. If the sector does not take seriously the assessments that purport to assure the learning acquired in a degree, and we go around just saying to students “this is a level 3 assignment, you may use AI to edit your writing, but you must modify and own any AI content”, we may see these headlines sooner rather than later.
[Figures: fictional futures, generated by AI]
This cuts to the core of trust, integrity, relevance, and the value of a higher education award – for our students, for the community, for employers, and for society. Apart from these moral imperatives, higher education institutions that get this right now may, in a few years, see preferential enrolment and vastly improved graduate outcomes, creating a virtuous, self-perpetuating cycle. Other institutions that stick with discursive (“you may only use AI for…”) rather than structural changes to assessment could well see declining employer and community trust, and declining enrolments soon after.
One model for a possible values-based approach
What can we do? The University of Sydney has recently put into official university policy our two-lane approach to assessment, first introduced almost two years ago. Aligned with TEQSA’s recommendations around assessment reform, the two-lane approach says that we need:
‘secure’ (lane 1) assessments that can help us form trustworthy judgements around student capability, and
‘open’ (lane 2) assessments that can help equip students for contemporary society where AI is ubiquitous.
Our recent policy changes mean that educators cannot restrict or ban AI in lane 2 assessments, which, arguably, is in keeping with reality. Rather, lane 2 assessments are used as opportunities for learning and feedback, where students can develop their capabilities, often with the support of AI as scaffolded by their educators. Their capabilities are then measured in supervised, face-to-face lane 1 assessments, which are, as much as possible, redesigned to operate at a program level.
Some suggest that the two-lane approach is overly binary, eroding educator autonomy while ignoring assessment complexity. On the contrary, our enactment of the two-lane approach has defined 13 different lane 1 assessment types and 16 different lane 2 assessment types, in categories aligned to Laurillard’s learning activity types. Lane 1 also does not mean ‘no AI’. Rather, it means the secure determination of student capabilities which, in an increasingly AI-infused society, may indeed involve evaluating how students engage with AI critically within their disciplinary context. This is supported by an alternative metaphor: a menu of ways in which educators can thoughtfully engage students with AI. This encourages agency; in lane 2, educators guide students towards ‘healthier’ choices, knowing full well that students could pick everything from the menu (i.e., use AI for everything) and ‘get sick’ (i.e., not learn), which will be picked up in well-designed lane 1 assessments. Students have the agency to choose how they use AI, knowing they still need to develop core capabilities as these will be securely measured.
In terms of value, lane 1 addresses integrity and lane 2 addresses relevance. But more fundamentally, the two-lane approach is about agency and trust. Our role as educators is to teach, not to police. As Cath Ellis famously says, we are in the business of detecting learning, not detecting cheating. As teachers, we educate students on, amongst other things, responsible and effective ways to use technologies – which now include AI. If they choose not to learn, their lack of capability will be found out through well-designed lane 1 assessments. Again, our role is to measure learning, not to police cheating.
This approach actually gives more agency to educators in guiding students through thoughtful uses of AI (and there are many!). It encourages us to trust students to learn and develop capabilities. It gives students the agency to decide how they engage with AI in their open assessments, instead of imposing (transparently futile) restrictions. We, and other universities that take this approach, may also be the only ones able to tell employers, students, and the community that they can trust that our graduates have the contemporary capabilities we say they do.
The futurists also say that the point of looking into the future is so that we can steer it, rather than letting the future happen to us. Counterintuitively, AI forces us to think more deeply about what it means to be human. Instead of adopting increasingly combative stances, how can we reclaim the agency and trust between teachers, students, employers, and the community – while thoughtfully engaging with the reality of AI? Alternatively, how can we maintain trust if we simply ignore or forbid a technology that is pervasive and already shaping our students’ present lives and future careers?
These big future-focussed questions rely on us embracing our core mission as educators to develop and measure capability, in ways that balance the trust and agency of everyone invested in this richly diverse sector of ours.
Professor Danny Liu, Professor of Educational Technologies, DVC Education Portfolio, University of Sydney.
Professor Adam Bridgeman, Pro Vice-Chancellor, Educational Innovation, University of Sydney.