We Need to Do the Parallel Work: Rethinking Learning Outcomes in the Age of AI
Prue Laidlaw, Charles Sturt University
Effective learning occurs when learning outcomes, assessments, and teaching activities are intentionally aligned, ensuring students develop the skills needed for an AI-integrated future. As AI reshapes industries and professional expectations, universities must proactively revise learning outcomes to equip students with the critical thinking, adaptability, and ethical reasoning skills essential for a rapidly evolving workforce.
Biggs and Tang’s (2011) work on constructive alignment shows that meaningful learning happens when what we intend students to learn is directly supported by how we teach and assess them. A well-aligned system ensures students can demonstrate and apply their knowledge in real-world contexts.
While many institutions have focused on making assessments AI-resistant or AI-integrated, fewer have addressed the parallel challenge: ensuring that learning outcomes still reflect meaningful, relevant, and measurable competencies. In some cases, traditional learning outcomes have become outdated, infeasible, or too easily achieved using AI.
To maintain academic integrity and educational value, learning outcomes must guide assessment design—not the other way around. If institutions focus solely on making assessments ‘AI-proof’ or ‘AI-resilient’ without revising learning outcomes, they risk certifying students based on assessments that do not measure the deep learning required for an AI-integrated workforce.
The Trouble with Outcomes
Research has highlighted key gaps in current learning outcomes (Weng et al., 2024; Zhang, 2024). Many outcomes remain focused on basic recall and content reproduction, failing to assess the deeper analytical and applied skills essential for today's graduates. This misalignment means that learning outcomes are not effectively measuring students' ability to engage with complex problems, critically evaluate information, or apply disciplinary knowledge in meaningful ways. Without a shift towards higher-order thinking, ethical reasoning, and real-world application, institutions risk maintaining outdated educational models that do not adequately prepare students for professional and societal challenges.
Rather than focusing solely on preventing AI misuse, institutions must reconsider what learning means in an AI-enhanced world. The TEQSA resource Gen AI Strategies for Australian Higher Education: Emerging practice (2024) highlights the need for learning outcomes that assess not just what students know, but how they think, evaluate, and apply knowledge alongside AI. This shift ensures graduates develop skills that remain rigorous and adaptable, with AI as a tool—not a substitute—for deep learning.
International Perspectives: A Global Shift Toward AI-Integrated Learning Outcomes
Regulatory and accrediting bodies across the UK, US, and Europe are increasingly recognising the importance of revising learning outcomes in tandem with assessment reform. In the UK, the Quality Assurance Agency for Higher Education (QAA) (2024, p.13) has urged universities to redesign assessments to emphasise critical engagement and reflective reasoning, “while ensuring academic integrity is preserved and assessments remain fair and reflective of student learning”. The Higher Education Policy Institute (HEPI) (2025) reports that over 90% of students now incorporate AI into their studies, underscoring the need for learning outcomes that develop critical thinking rather than simply adapting assessments to AI's capabilities.
As UK institutions refine their approaches, similar discussions are shaping policy in the United States. The American Association of Colleges and Universities (AAC&U) (2024) is leading curricular reforms to integrate AI literacy, while EDUCAUSE (2024) emphasises the need for governance policies that support responsible AI use. The U.S. Department of Education (2024) likewise calls for embedding AI responsibility, critical literacy, and compliance with academic standards into university learning outcomes.
European professional organisations are also embedding AI literacy into accreditation requirements to ensure that graduates develop skills for an AI-enhanced workforce. The European Consortium for Accreditation (ECA) (2024) has introduced the Qual-AI-ty Engagement project, leveraging AI-driven qualitative analysis to enhance student learning outcomes. Meanwhile, the Foundation for International Business Administration Accreditation (FIBAA) (2024) has announced that, from 2025, its accreditation criteria will explicitly incorporate AI and digitalisation, ensuring that higher education institutions address the ethical, technical, and professional implications of AI across a broad range of disciplines.
Collectively, these international frameworks signal a fundamental shift in educational philosophy regarding AI. Instead of defensive strategies aimed at containment, emerging policies position AI as a catalyst for educational transformation—equipping students with the technical fluency, contextual judgment, and ethical discernment needed to thrive in a technologically evolving professional landscape.
The Institutional Challenge: Higher Education Moves Slowly
Despite growing awareness of AI’s impact on education, universities struggle to keep pace with necessary reforms. The processes that govern curriculum design and accreditation lack the flexibility needed for rapid adaptation. This inertia is particularly evident in the slow progression of regulatory updates for AI integration.
Howard (2021) argues that structural reform is necessary for Australian higher education to remain responsive to contemporary challenges. Rigid frameworks and slow-moving policy cycles prevent universities from adapting swiftly to technological change, as committees and regulatory agencies navigate complex approval processes before new learning outcomes can be formally adopted.
Professional accreditation bodies are also struggling to keep up, highlighting the need for educational providers and the professions to work closely together to develop appropriate learning outcomes and advance good practice. For example, Engineers Australia (2025, p.3) recognises that “developing an engineer’s ability to use and adapt to AI and GenAI technologies is a shared responsibility. Industry, professional and peak bodies, the tertiary sector and an individual engineer’s own professional development must all play a part.”
To keep pace with AI, institutions must streamline policy approval pathways, establish AI governance task forces, and foster cross-institutional collaborations. Without these measures, graduates risk entering the workforce without the responsible AI competencies required for success in an increasingly automated world. Working with industry, universities must embed these competencies into learning outcomes now to ensure students can navigate an evolving professional landscape.
Embracing Parallel Progress: The Urgency of AI-Ready Education
Assessment reform is essential, but redesigning assessment without ensuring constructive alignment with learning outcomes is like fixing the roof while ignoring the foundation. It may offer temporary reinforcement, but the entire structure remains vulnerable to collapse. Universities that focus solely on AI-resistant assessment formats (if such a thing exists) without rethinking learning outcomes risk producing graduates who can complete tasks under controlled conditions yet lack deeper understanding.
Higher education institutions must take a proactive approach to ensure that learning outcomes remain relevant and meaningful in an AI-driven world, beginning with an immediate institution-wide audit to identify learning outcomes that no longer reflect contemporary expectations.
The work has begun, but progress remains uneven. To remain relevant, higher education must go beyond safeguarding academic integrity and institutional reputation.
As AI reshapes the professional landscape, universities must urgently integrate AI literacy through inclusive, human-centred policies that uphold educators’ roles, address diverse learner needs, and ensure equitable access to tools and ethical practices. Only by aligning learning, teaching, and assessment with these principles can institutions ensure graduates are not just prepared for an AI-enhanced world, but ready to shape its future.
Dr Prue Laidlaw, Sub-Dean Learning and Teaching, Faculty of Science and Health, Deputy Chair, Academic Senate, Charles Sturt University