What to do about artificial intelligence now: Examining the real impact on learning
Jason Lodge, The University of Queensland
It has now been over two years since ChatGPT emerged, thrusting nascent generative artificial intelligence (AI) technologies into the spotlight. For much of the time since, the focus across education sectors globally has (understandably) been on managing the risks to academic integrity that AI has exacerbated and created.
Generative AI indeed provides avenues for students to cheat on assigned tasks. This is a serious and ongoing problem. However, underlying questions remain unaddressed and out of focus. AI represents as much of a catalyst to do things differently as it does a challenge or risk. What and how to teach and learn in the age of AI is increasingly becoming a critical and complex issue. The last few weeks of developments again highlight how rapidly AI tools are evolving. Higher education is falling behind.
To be fair, the emergence of generative AI highlights existing issues in education as much as it creates new ones. The problems will not be addressed through straightforward solutions, such as increasing ‘AI literacy’ (whatever that is) or emphasising critical thinking.
The results of numerous studies on student and staff opinions, attitudes, and usage rates were released in 2024. These data are an essential part of the puzzle. However, these results are far from conclusive and should not be the basis of action in isolation. There are profound and wicked issues with these technologies in education, requiring deep exploration of the processes involved. A recent meta-analysis highlights the distinct lack of rigorous studies into the impact of generative AI on learning (though, as some have pointed out, there is potential).
The underlying mechanisms remain largely a mystery and will not be surfaced through surveys and interviews alone. Perceptions of these technologies may indeed be orthogonal to what is really going on.
Proceed with caution
Extreme caution is required when it comes to any technology being used in education, particularly AI. As global AI in education luminary Professor Rose Luckin has said, we need to “learn fast but act slowly”.
The technologies in question are often not developed with learning and/or teaching in mind. More often than not, the technology is designed for consumer or productivity purposes. Consumer technologies are usually poor tools for facilitating high-quality learning. A customer-oriented approach to understanding these technologies in learning is similarly problematic. Learning is not a consumer product to be exchanged; students are not customers, and the customer is not always right. Education may increasingly look like a commodity, but learning isn’t and never will be.
Well over 18 months ago, it was already evident that the main reason students use generative AI in their learning is that it makes learning and completing assigned tasks easier and faster. The most impactful learning is, and is supposed to be, hard. Easy and fast is rarely the way to go if learning is to stick. The ‘work of learning’ matters, but our human tendency is to find hard mental work undesirable and to avoid it.
How students are using these tools is one of several critical misalignments between perceptions and evidence. In our recently completed study, two-thirds of the participating students confidently said that AI will benefit their learning because it will allow instruction to be tailored to their (modality-based) learning style. This is despite overwhelming evidence that designing for learning on the basis of people being ‘visual or auditory learners’ is a harmful edu-myth. Self-reported opinions about learning and learning gains are necessary but can be misleading and are insufficient; they always have been.
Many in the educational technology research community will remember seeing this trend before. In the past, students’ opinions and usage rates of social media sites were used to justify the widespread adoption of these sites in education. For anyone not paying attention, that particular experiment is not going well (and that was true even before the heads of some of the leading social media sites were outed as power-hungry, misogynistic sociopaths).
Not only is there a problematic history of decision-making associated with implementing technologies in education, but the issue is compounded in this instance. Generative AI represents a fundamentally different human-machine relationship than has been the case with previous technologies. If you find yourself tempted to say ‘please’ and/or ‘thank you’ to a generative AI tool, you have experienced what I am referring to here. Generative AI is a tool that simulates a peer, collaborator, or colleague, and an increasingly capable one. The implications of this new human-machine dynamic for learning and education remain unclear at best and profoundly concerning at worst. A growing chorus of commentators argue that AI should largely be kept out of education.
The critical voices (aka ‘doomsters’) may have a point. When we look into the processes involved, it is apparent that the more fluent a technology feels, the more likely we will judge the information it presents to be credible. We are also more likely to feel confident that we understand the concepts we have been exposed to, even if we don't. Generative AI is the ultimate Dunning-Kruger machine – it’s clear, it’s confident, but it is also often wrong. We cannot rely on people's own opinions of the usefulness of this technology. Many (most?) of us don't understand what we are dealing with and, despite frequently providing incorrect information, these technologies can be incredibly persuasive, particularly for novice learners.
Needed now: Moving beyond opinions
Opinions, attitudes and usage rates among staff and student populations are critical, don’t get me wrong. Some impressive, large-scale work has been carried out. These data must be part of the mix as policy and practice shift to befit the age of AI. It would be immoral not to listen to the voices of those most impacted by these technologies when deciding what to do about them.
However, we need other forms of evidence and measures to critique and evaluate these applications in terms of both the risks and the opportunities. In particular, a deeper understanding of the role these technologies play in learning processes is vital before making sweeping changes (if indeed sweeping changes are required).
We must draw upon the 30-year-plus history of rigorous research and theorising on educational technologies. What we need now is a three-pronged approach:
rigorous, systematic data on the impact of AI on learning that go beyond self-reporting
robust theoretical frameworks to understand the role of AI in learning and teaching (including, but going well beyond, academic integrity and assessment), and
evidence-informed approaches for implementing and evaluating AI across educational contexts drawing on a range of methodologies and methods.
Many educational technology researchers have long advocated for careful, evidence-informed approaches to implementing new tools (i.e. don’t believe the hype). The stakes are perhaps now higher than ever. With these powerful technologies, truly dystopian visions of the future of education are not just theoretical possibilities but entirely feasible realities (AI teachers, anyone?). Solid evidence is needed to ensure that the power of AI is deployed (or not) for the right reasons, in the right ways, and for the equitable benefit of all.
We wouldn't base important policy and/or practice decisions in medicine and health solely on patient opinions, and we shouldn't do so in education. Student and staff voices are essential, but they aren’t enough. It's time to move beyond accepting self-reported data as sufficient evidence for action in higher education. Without clear connections to learning theories and underlying mechanisms, opinions must be treated as just that – opinions.
The future of education in the age of AI demands a rigorous and thoughtful approach built on established theory and an understanding of the fundamental learning processes involved. The hard work of figuring out what role (if any) AI can and should play in learning in higher education starts now. With ongoing rapid changes in the technologies, this work will be challenging. Nonetheless, we need to understand the real impact of these technologies, building on the picture emerging from the extensive survey and interview research carried out to date.
Acknowledgement: Generative AI (Claude; 3.5 Sonnet) was used to edit this article for clarity.
Professor Jason Lodge is head of The Learning, Instruction, and Technology Lab in the School of Education at The University of Queensland, and Managing Editor of Needed Now in Learning and Teaching.