Beyond the chatbot: A framework for navigating the era of agentic AI in higher education
Jason Lodge, The University of Queensland
While it seems hard to believe at times, it has now been well over three years since ChatGPT emerged, sparking a flurry of activity (and moral panic) across multiple fronts. In education, it is fair to say that a great deal of the discussion over that period has focused on assessment and academic integrity.
The cheating issue has again blown up over the last two weeks, initiated by some reporting in the Australian mainstream media. While the problems that generative AI and automated decision-making systems have created for academic integrity remain unresolved, there has been a noticeable shift in emphasis in the global conversation about AI in education over the last few months.
The commentary about the likely impact on labour markets is accelerating, partly spurred on by this piece from Matt Shumer (it’s also worth reading this counter-argument). This trend towards asking bigger questions is perhaps best exemplified in education by the recently released 2026 Digital Education Outlook from the OECD.
What is clear from the OECD report and, indeed, from a renewed push to ‘ban AI’, is that significant concerns remain about the moral, ethical, environmental, and extractive factors related to these technologies, in terms of both development and use. However, there is also growing recognition that, in many cases, the use of generative AI (and increasingly ed-tech more broadly) is simply bad for learning (I will resist another e-bike or gym-based simile here).
Beyond the initial shock – an enmeshed relationship with technology
Having said all this, we cannot simply bury our heads in the sand. Banning AI is not a viable option, and this is not merely because we cannot detect its use. The life and future trajectories of students in higher education are becoming increasingly complex across social, economic, and technological dimensions. Their careers are longer, their learning paths are non-linear, and they expect a tertiary experience that is as fluid and responsive as the technology they carry in their pockets.
AI is not going away. If Matt Shumer is even remotely correct, AI is increasingly becoming intertwined with many aspects of our lives and work and will lead to significant societal and economic upheaval.
Amongst all the uncertainty, what appears certain to me is that we have reached a defining moment in our relationship with technology. We have moved beyond the initial shock of generative text into the era of agentic AI: autonomous systems that plan, act, and interact across our institutional workflows.
There is a lot of discussion about an ‘AI bubble’. Regardless of what happens from here, the interaction between humans and machines has fundamentally changed. The question is no longer about whether AI should be integrated into education, but when and how.
An Australian Framework for Artificial Intelligence in Higher Education
In this context, why do we need the Australian Framework for Artificial Intelligence in Higher Education? Surely, our existing regulations and guidelines are sufficient (some say, already excessive)?
There is a persistent criticism that Australian higher education is over-regulated. I have previously lamented this situation myself. Critics argue that the Higher Education Standards Framework (Threshold Standards) 2021 (HESF) already requires the sector to ensure award integrity and student support. They suggest that technology-neutral laws covering privacy and consumer protection should be enough to manage the AI ‘wicked problem’. But this view misses the fundamental challenge of our current moment.
While the Threshold Standards are our definitive regulatory instrument, they are necessarily high-level and principles-based. They provide the ‘what’, but in an age of autonomous agents, they do not provide the ‘when’ and ‘how’.
Published by and with the support of the Australian Centre for Student Equity and Success (ACSES), the Australian Framework for AI in Higher Education serves as an essential interpretative bridge. It was not intended to replace the Threshold Standards or create additional regulatory burden; it was developed to help navigate existing standards in this new technological epoch. The Framework translates the abstract requirements of the HESF into the intentional curriculum and whole-of-institution strategies needed to ensure no student is left behind.
Inspired by and borrowing heavily from the Australian Framework for Generative AI in Schools, the author team (who advised on the Schools Framework) set out to develop a Framework specifically tailored to the Australian higher education context.
A direct translation of the Schools Framework was never going to be sufficient, given the many differences in how the sectors are governed and structured. We aimed to provide specific guidance about the impact of AI aligned with our core Threshold Standards and beyond to the UN’s Sustainable Development Goals.
Operationalising the HESF via the Framework
Take, for example, HESF Standard 2.2 on Diversity and Equity. Institutions must provide equal opportunities for success. But how can this be operationalised when there is enormous diversity in student populations in terms of access and preparedness to engage with these technologies? Then there is also the ongoing instability of subscription costs to be managed. Some estimates suggest that subscription costs for frontier AI models could reach $2,000 USD per month this year, as AI companies seek to recover the enormous costs of model development.
For this example, the Framework provides a schema for applying the HESF standard to AI implementation, focusing on Fraser’s model of social justice: redistribution, recognition, and representation. It forces institutions to ask whether AI systems redistribute opportunities fairly, recognise Indigenous data sovereignty, and ensure that diverse student voices are represented in institutional governance structures.
Similarly, HESF Standard 1.4 requires institutions to ensure that assessment methods are capable of certifying student attainment. In 2026, this cannot be achieved through timid incrementalism, by simply trying to out-design the latest model, or by giving students crude traffic light indicators to work with.
While extensive advice about how to adapt assessment has been published by TEQSA, a Framework that directs institutions toward systemic reform beyond redesigning assessment tasks is also needed. One that prioritises human-centredness and the development of adaptive capabilities over transient technical mastery.
Focusing on the quality of learning to the benefit of all students
As the renewed calls to ban AI suggest, there is also a need to focus on the quality of learning in all this.
We must resist the temptation to rely on road rules to regulate drones. The unique autonomy of modern AI requires targeted guidance that clarifies our expectations for equity, transparency, contestability, and workforce readiness. The Framework provides this clarity, ensuring that institutions and educational leaders can confidently assure the regulator that the integrity of our awards remains untarnished. In this sense, I am referring to integrity far beyond the level of security of assessment tasks.
Ultimately, this is about doing good for the benefit of all students, whether or not they have the means and capabilities to access and get the most out of frontier AI models. This Framework aims to ensure that, as AI becomes more embedded, powerful, and invisible, our commitment to human flourishing for all students remains visible and unwavering.
Bridging the gap between standards, policies, and practice is becoming increasingly complicated as the world rapidly changes. The Australian Framework for Artificial Intelligence in Higher Education is designed to help provide the bridge.
Professor Jason Lodge is head of The Learning, Instruction, and Technology Lab and Professor of Educational Psychology in the School of Education, The University of Queensland, and Managing Editor of Needed Now in Learning and Teaching.
