The Beatles: showing how AI and education can mix responsibly and ethically
Edward Palmer, Director, Digital Learning and Society Hub, School of Education, University of Adelaide
For those of you under 30, the Beatles were a four-piece rock band that started in the late 1950s and by 1963 were the biggest band in the world, renowned for their creativity and inventiveness. For those who are older, you’ll probably be more aware of the group, recognising their influence across the decades. Most of us would not expect a band disbanded for more than 50 years, with two of its members dead, to have any meaningful impact on today’s music, and certainly not on education. I’d argue this is precisely what they have done over the last four years through their use of artificial intelligence (AI).
I’ll use a broad definition of AI in this discussion, one that includes machine learning algorithms as well as generative AI. Peter Jackson, director of The Lord of the Rings, produced a documentary on the Beatles for Apple Corps called Get Back. His team took archived mono recordings and used AI to isolate the band members’ conversations from their instruments, remixing them with far greater clarity. This eventually led to revisiting an old track recorded by band member John Lennon before his death.
In the 1990s, the band had attempted to finish the same song but abandoned it because of the poor audio quality. In 2023, the surviving members used AI to separate Lennon’s voice from the background noise, mixed it with music from the other band members and released it. Despite solid modern production values, a number 1 in the UK and top-10 placings in 12 other countries, it might have remained a musical oddity, except for the Grammys. The Grammys are one of the top four awards in entertainment in the United States (along with the Academy Awards, Tony Awards and Emmys). In 2025, the Beatles won the award for Best Rock Performance, making it the first AI-assisted song to win a Grammy. Cue controversy and a bit of indignation.
Imagine the equivalent of this in many higher education institutions right now. “The Dean’s prize for best project goes to Jessica, and we note that your use of AI to develop this project was excellent.” This too would not pass without notice.
We are currently in crisis mode: we do not really know what to do with AI. We have worked hard developing technology to support its use in courses (e.g., the University of Sydney’s Cogniti, with its attendant staff and student resources), but the moment it comes to ensuring learning we struggle to find a pathway forward. How can we allow students to use AI tools, essential for their future careers, while balancing that against the requirement for evidence of meaningful learning? Concerns about developing strong foundational knowledge and critical thinking skills abound, as do concerns about the nature and role of assessment tasks. In part, these concerns may be mitigated by policy and technology; however, a little haste and purpose are required. A recent study reported in Times Higher Education found that nine in ten students were using AI to help with assignments and a quarter had used it in submitted tasks.
Some place faith in detection tools, but there is little evidence that they will be sufficiently reliable on their own. Our progress in managing the use of AI will eventually depend on the culture of responsible AI use we can develop amongst our students. Part of that is being clear about what is valuable for student learning and being innovative with curriculum, but it is also about having a modicum of trust that our students will do the right thing if they can work out what that is. We can help by modelling good behaviour, but also by being willing to change our long-held approaches to curriculum design and implementation. Fear and uncertainty have stifled open student adoption to date, but that period has passed. Measured action involving students is important.
Maybe what is needed now is to take a leaf out of The Beatles’ book: use AI appropriately, but try to understand the risks, declare how it has been used and be prepared to reward those who have used it meaningfully and creatively. Then use what we learn to define new, acceptable boundaries for its use. That’s a big step, but likely a necessary one if universities are to use this technology appropriately and be community thought leaders in the inclusive, ethical and sustainable democratisation of AI affordances.
Let’s turn our fears into creative beacons and help our students, and the public, find their own ways to learn widely using all available technologies.