This is a response to last week’s post by Kelly Matthews: ‘Humanising the machines’ is not the answer (or a plan) for AI in higher education. Kelly argued that some of our language implies that we are treating AI technologies as human and, thereby, dehumanising people.
I agree that this is a risk. Putting humans and AI in the same category makes us susceptible to language that reduces humans to machines and the “robotic pursuit of efficiency”. For example, I can see that positioning teachers as vulnerable to replacement or being kept “out of the loop” might devalue their human qualities.
Homer Simpson’s workplace, in which Homer has been replaced by a brick tied to a lever.*
[Alt text: A cartoon drawing of Homer Simpson’s desk at the nuclear power plant with lots of dials and buttons. Homer’s chair is empty and there is a brick tied to a lever]
At the same time, I also want to resist instrumental understandings of AI as a tool that is simply used or not used by humans. And I want to resist technological determinist views in which AI drives social change (e.g. by dehumanising education). AI is not disrupting or revolutionising education. We – people and technologies, in combination – are reshaping it, just as we always have done, through contextualised and diverse activity. The qualities and functions of AI technologies are important, but so is the context, and the values, goals, actions, and language of people (as Kelly argues!).
I can see how, through a students-as-partners lens, thinking of AI in terms of partnership is problematic. Of course, we don’t need to think of these technology interactions as the same kind of partnership. The value of the partnership metaphor is in what it allows us to see more clearly, and the harm is in what it obscures. I suppose this is where Kelly sees the danger: that we take our metaphors literally or, at least, more seriously than we should.
Kelly’s post raises an interesting question for me: does talk of partnership or collaborating with AI imply that AI has human qualities? I want to defend a particular position on collaboration, at least with non-human, material objects and technologies, as a way of navigating the middle ground between AI as tool and AI as independent social force. In this collaboration, AI and humans are very different, but not independent. Further, it is not a stable collaboration of one individual human + one individual AI technology, but an uncertain, evolving collaboration of, potentially, multiple people, technologies and other things. Perhaps this instability is a reason to avoid the term partnership. If collaboration sounds too positive and harmonious, perhaps we can think of an ongoing negotiation or an entanglement?
Here, I am expanding from Kelly’s focus on AI to a broader focus on technologies and material elements that we use in thinking and doing. Our possibilities for action are enabled and constrained by things around us. I doubt that Kelly will disagree with this, and I think it resonates with a conversation we had recently about ideas from Indigenous and sociomaterial scholarship. However, I worry that if we see collaboration with technologies as anthropomorphising (attributing human characteristics to an object), we are restricting agency to humans and, in so doing, portraying technologies as inert and subject to our control.
Let’s take a short tangent. I am wary of uses of the phrase “human learning” to signify learning that is independent of AI or other technologies (learning with AI is also human learning). Technology is everywhere; it is how we organise our environments and social configurations. We like to think that we can target “what the student knows” by separating them from technologies, other people, and resources (e.g. via invigilated exams). However, the exam itself is a technology, as are pens and paper, grading systems, chairs, rooms, clothing, etc. In an exam, students demonstrate knowledge, not “by themselves”, but in heavily constrained conditions. Those conditions are, ultimately, more about invigilation than about cognitive purity or what kinds of knowledge and thinking are legitimate in relation to some future context (why are pens and paper allowed, why does an MCQ involve choosing from a set of pre-given options, why is the box for a free-text answer 5 cm high?). The challenge for educational institutions is not to separate out “human” learning from learning that involves technologies and additional assistive resources (or other humans!). Rather, it is to understand more about what is learned through different forms of engagement with the world, and how to encourage appropriate, targeted forms of learning through structured constraints and conditions.
Thanks to Kelly’s post, I will think more about these terms and the potential for humanising AI and dehumanising people. But, for now, I think I want to hold onto metaphors of collaboration, along with negotiation and entanglement. For me, they imply not a mechanistic, dehumanising relationship but a dynamic, effortful and cautious coupling, in which the output or outcome is more than the sum of human and machine parts. Kelly and I are both interested in continuing a scholarly discussion of these issues and would love it if you added your thoughts in the comments on this post!
*We have used this image of a popular meme, which may be subject to copyright. In doing so, we made a judgement call that this is no worse than having AI produce an image for us from a large range of potentially copyrighted images.
Associate Professor Tim Fawns, Monash Education Academy, Monash University