Why Jeanette Winterson is wrong about AI writing
The novelist argues that AI-generated writing should be accepted as literature, but I believe that would be a grave mistake.
OpenAI has developed an artificial intelligence model that is supposedly ‘good at creative writing’. It’s not yet available to use, but the company’s CEO, Sam Altman, has posted a story written by the model itself as evidence of its abilities. The prompt given to the AI model was ‘Please write a metafictional literary short story about AI and grief.’
Altman claims to have been ‘really struck’ by the quality of the story. Being generous, I guess we could say that it is ‘not bad for an AI’. Then again, maybe Altman hasn’t read much literary fiction to compare it with. Personally, I find the story pretty insufferable – contorted, self-conscious and showy, beyond even the demands of metafiction. But maybe you’ll enjoy it more.
The choice of prompt is revealing. ‘Metafictional’ permits the AI to do a whole bunch of clever-clever narrative framing and fourth-wall-breaking, but lets it off the hook of laying down a simple, relatable human story that feels real and has genuine emotional depth. A similar point applies to ‘literary’, which the AI clearly takes as its cue to throw in a load of whimsical nature metaphors that obscure the story rather than serving it.
Finally, ‘AI and grief’ invites a reflection on the quest to recreate the deceased in AI ‘deadbots’ – but seen from the AI’s own point of view. Again, this allows the AI to waffle on about technology, its comfort zone, rather than real life. (To appreciate the full implications and sheer horror of digital recreations, check out the Storyville documentary Eternal You.)
Alternative intelligence
For me, the questions are (a) what you’d think of this story if you read it without knowing it was AI, and (b) how you’d feel when you found out. Personally, the second I discover content is AI-generated is the precise second at which I lose interest. It then becomes ‘just AI’, in the same way that overblown movie effects are ‘just CGI’.
Right now, it feels like everyone is praising AI’s capabilities, or at least overlooking its faults, while exhorting writers to get on board or be left behind. Yet the elephant in the room is that nobody really wants to read the stuff that AI churns out.
Actually, it’s not quite nobody. Because it turns out that Jeanette Winterson, of all people, is a fan of AI.
As a novelist, Winterson is a genius. She has forgotten more than I will ever know about putting deep emotions into words. However, I still believe she is wrong about AI writing and what it means.
In her depressing commentary on the OpenAI story, Winterson calls it ‘beautiful and moving’. She argues that what she calls ‘alternative intelligence’ could offer a new perspective on the problems of the world, and that having allowed AI to ‘read us’, we should now turn our attention to reading it.
Winterson opens her argument with some throat-clearing about ‘paying artists’ – that is, rewarding creators for having their work used to train AI models. She is, of course, absolutely right about that. Yet she refuses to follow the point where it leads.
Whatever poetic turns of phrase the AI story might contain, they almost certainly came from humans. For all we know, every single experiential detail, every striking metaphor of marigolds and hands and oceans, is lifted directly from human-written stories. Hell, they might even all come from the same story. But we’ll never know, because AI is a ‘black box’ that doesn’t reveal what it steals. A comprehensive list of references – for any AI text, not just this one – would probably be an eye-opening and illusion-busting read, leaving us with a far clearer picture of AI’s true ability as an ‘author’.
‘So what,’ you might say. Loads of fiction is formulaic or derivative in one way or another. As Winterson notes, genres like romance and fantasy have their conventions and constraints, and there are only so many plots to be had. Moreover, all writers are sponges, soaking up influences from everything they read – sometimes consciously, sometimes not. So in this sense, nothing, not even literary fiction, is ever truly new. The best you can hope for is to shuffle and blend your influences so that your work feels new – in your time, and to your readers.
However, all that is beside the point. Art is about purpose, not just process. Even if everything is derivative, it still only comes into existence because a human being has something they want to express, with the means they have to hand. The point of art is not to ‘make some art’ or ‘write a story about grief’ but to communicate thoughts, stories and emotions from one human to another. The question is not where the elements are drawn from, but who will put them together, and why, and how they will create something that is far greater than the sum of its parts.
Learning how to feel
‘Machines do not feel, but they can be taught what feeling feels like,’ asserts Winterson, obscurely and somewhat circularly. Maybe this is true on some purely superficial, semiotic level. Machines can learn how to arrange symbols in order to evoke feelings in the same way that human writers have done in the past. But if machines genuinely do not feel, they cannot learn how feeling feels, any more than someone who has been unsighted since birth can learn how colours look. I can read a hundred accounts of childbirth, but I will never truly know how it feels, because I don’t even have a body that can experience it.
Of course, I can use my imagination to go beyond my experience. I can imagine what it’s like to give birth, or fly like an eagle, or take root like a seed. But even then, I am using my embodied life as a starting point. Maybe childbirth is like having a tooth pulled out, but a hundred times worse. Maybe flying is a bit like swimming. Maybe taking root is like meditation.
But machines cannot even do this, because they have no experience at all to draw on. They do not know the world; only descriptions of it. They do not see or touch or smell, because they have no eyes, hands or nose. They do not think, because they have no mind. And they do not feel, because they have no heart. They are utterly, utterly unlike us in every way.
When AI engines talk about marigolds or hands or oceans, they are not referring to anything in the real world, but merely linking and rearranging symbols. At the most profound and fundamental level, they simply do not know what they are saying. That is one reason why resurrecting the dead as AI avatars is so alarming – some would say blasphemous. And it’s why no AI writing can ever really tell us anything about our lives.
A product, not a person
AI may ‘quack like a duck’, but it is still not a duck. That is, it may seem to speak or write like a person, but it is not a person, or even an approximation of one. It has no life or purpose of its own, no psyche or experience we can relate to. Simply by treating AI as an intelligence that can ‘read’ us, Winterson is already falling into the pathetic fallacy, where we ascribe human attributes to non-human things. By accepting it as a viable actor in the human world, she is reifying it and granting it a status that it absolutely does not deserve. If we treat AI as some strange new friend that we must welcome into our lives, we are playing right into its creators’ hands.
In reality, AI is a product. It’s a service that was developed by engineers so that businesspeople could make money by automating certain tasks. That’s why it exists. It was not made to improve the world, whatever techbros like Altman might piously claim. Nor is it the product of some benign, all-encompassing teleological procession where ‘technology’ progressively liberates and enlightens us, eventually realising a utopia. And even if it seems OK today, it will surely get more rapacious and malignant over time. Frankly, I don’t see how anyone who has been aware of social media for the last few years could seriously expect anything else.
Winterson hopes that AI will bring us ‘alternative ways of seeing, and perhaps being’. I hope she’s right. But I fear that AI will actually bring us the same old ways of seeing and being, but stripped of any sense of what it means to be a part of humanity and live in the physical world. It will trap us in the same solipsistic box that the AI itself lives in, endlessly dialling up content to indulge our existing tastes in a masturbatory, self-confirming doom-loop. It will gaslight us into thinking that we are using it to create, or that it is creating for us, when in fact nothing is being created at all. And it will do all this in the name of democratising art or empowering its users.
Perhaps all that is inevitable. But I still don’t think we should be cheering it on. If AI text has to be part of our world, let’s at least keep it separate from human writing – in books, on screens and most importantly in our minds. Otherwise, we may forget the absolutely vital role that writing plays in all our lives, and willingly hand it over to corporations and their unthinking, unfeeling machines.
Hi Tom! Didn't realise you had a Substack. I published a similar piece yesterday, and it's curious to see the same debate going round and round on this.
My view is that those pushing genAI creative works focus far too much on the end result of creative endeavours, and not nearly enough on the process of their creation. The end result, be it a book, a movie, a song, etc, is a reflection and a remnant of the creative process. But it's the process itself, the doing of the thing, that imbues it with value. It's the lived experience of the creator. GenAI, at least in the creative context that Altman is talking about, aims to bypass that process and hop straight to the final result, which renders it rather meaningless.
There will certainly be useful examples of AI being used as a tool, especially once the hype bubble pops, but using it as a way to skip straight to the end will always feel like an odd thing to do.