Disentangling the gen AI value chain: who are we really “collaborating” with?
Miriam Reynoldson, The University of Melbourne
In this response to previous posts by Kelly Matthews and Tim Fawns on humanising language around students’ use of AI, I’ll attempt to shed some light on the question: who are students collaborating with when they ‘collaborate’ with generative AI?
Humans have a psychological tendency to anthropomorphise non-human entities – that is, to ascribe human properties to things that are not human. We see faces in woodgrain, Jesus in burnt toast, lurking intruders in the silhouettes of hatracks.
Since a chatbot interface was bolted onto GPT-3.5 in late 2022, we have begun populating a new pantheon of machine learning deities. They have names, voices and personality flaws, and they are ravenously hungry for sacrificial data to enable them to generate high-quality, convincing outputs. Now we speculate on how we might work together with our new AI gods to usher in a posthuman age.
All of this has the effect of obscuring the agency of the many people involved upstream of the AI output itself. That is: the leaders of AI companies, the developers of the underlying architectures (GPTs, GANs and the like), the product managers of particular AI models, the authors whose work is incorporated into training datasets, the sweatshop workers who label and code those datasets. This isn't an exhaustive list, just a few examples of humans whose humanity is diminished when we anthropomorphise AI technologies.
The gen AI value chain
A pause here to unpack that idea of “upstream” and its equally relevant counterpart, “downstream”. These terms are drawn from the industrial concept of the “value chain”: the sequence of activities that carries a product from raw materials through to its end use. Here’s an illustration, which explains it better than my words could.
[Figure: A simple value chain for a manufactured product]
If we apply this concept to the use of gen AI, the upstream activities include the development and training of AI models, and the downstream activities include disseminating the generated content, reading it, and perhaps citing it or feeding it into another AI training dataset. In simple terms: where does it (the AI) come from, and where does it go?
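To make that mapping concrete, here is a minimal sketch (my addition, not part of the original argument) that models the gen AI value chain as an ordered list of stages, each with the human actors behind it. The stage and actor names are illustrative assumptions drawn from the examples above, not a definitive taxonomy.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    human_actors: list[str]

# Ordered from most upstream to most downstream. Stage and actor names
# are illustrative assumptions based on the examples given in this post.
GEN_AI_VALUE_CHAIN = [
    Stage("data collection", ["authors", "dataset annotators"]),
    Stage("model development", ["AI company leaders", "developers", "product managers"]),
    Stage("generation", ["students", "teachers", "other direct users"]),
    Stage("dissemination", ["publishers", "readers"]),
    Stage("reuse", ["citers", "training-data curators"]),
]

def humans_upstream_of(stage_name: str) -> list[str]:
    """Collect every human actor in stages before the named stage."""
    actors: list[str] = []
    for stage in GEN_AI_VALUE_CHAIN:
        if stage.name == stage_name:
            break
        actors.extend(stage.human_actors)
    return actors

# The humans entangled upstream of a student's act of generation:
print(humans_upstream_of("generation"))
# ['authors', 'dataset annotators', 'AI company leaders', 'developers', 'product managers']

Asking for the humans upstream of the “generation” stage returns authors, annotators, developers and so on: exactly the people whose agency is obscured when we anthropomorphise the model itself.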
Disentangle, re-humanise
So Kelly Matthews is right when she declares that the use of humanising language towards AI leads to dehumanisation of the people who work with it. But it’s not only dehumanising the students (and teachers, and everyone else) who use AI directly to generate text copy at speed. To my mind, the direct users of AI are perhaps the least at risk – though this is not to say they are not at risk, of course.
And it’s important to note, too, that “dehumanising” may actually be, for some parties, a desired outcome of this process. Much as a limited liability company shields its owners, treating an AI creation as having agency of its own allows its creators to cede responsibility for its outputs at a certain point.
Tim Fawns is also right when he suggests that working with AI is an “entanglement” rather than a collaboration per se. When a student uses gen AI, they become entangled in a chain of contributions, some made unknowingly, which have led to the availability of the AI tool and will later lead to further impacts on downstream users of the generated output. But it’s not impossible to disentangle the chain.
Ultimately, this is a conversation about the consequences of the words we use – fitting indeed for a discussion about text-generating algorithms that are designed to mimic the way we use words. And when we use concepts like “collaboration”, even metaphorically, to ascribe agency to algorithms, we rob our students of the ability to think critically about who and what they are really engaging with.
Miriam Reynoldson is a Doctor of Education candidate at the Faculty of Education, University of Melbourne. She is a digital learning specialist who works with Victorian universities to lead online program strategy and design. She also teaches educational design at Monash University.