by David Beer

Could there be anything more insulting for a writer than someone assuming that their writing is an output of generative artificial intelligence? The mere possibility of being confused for a neural network is enough to make any creative shudder. When it happens, and it will happen, it will inevitably sting. By implication, being mistaken for AI is to be told that your writing is so basic, so predictable, so formulaic, so replicable, so obvious, so neat, so staid, so emotionless, so stylised, so unsupervised, that it is indistinguishable from the writing of a replication machine. Your writing, such a slur will tell you, lacks enough humanity for it to be thought of as being human. The last thing any writer needed was another possible put-down for their work.

Over a decade ago, in the 2012 book How We Think, Katherine Hayles concluded that being immersed within and operating alongside advancing networked media structures, with changing cognitive abilities, changes thinking itself. This shift in how we think inevitably has implications for how we write too. Beyond this, there is a new pressure now. As we interface with it, AI will not just directly change what and how we know, but will also shape how we anticipate being judged in comparison to those generative systems.

With the fear of the insult and the anxiety of comparison in the background, the objective of writing may be to avoid the threat of someone wondering, if only for a moment, whether you had simply typed a prompt into your preferred app and then comfortably reclined to stream a TV show. Will we now push ourselves to write in a style that means we can't possibly be confused for AI? Might we try to sound more human, more distinct, more fleshy, and therefore less algorithmic? As we adapt our writing in response to the presence of AI, we will enter into a version of what Rosie DuBrin and Ashley Gorham have called 'algorithmic interpellation'.
That is to say that when we are incorporated into algorithmic structures, even acts of resistance are defined and directed by those very circumstances. What AI writing looks like will become the thing to avoid replicating, meaning that the form AI takes will also define attempts at its opposition. At our keyboards, imagining our readers' future assessments of what we are about to type, we are likely to create differently. The fear of being considered a bot will drive the way we address our creative lives. This brand new fear could force a writer to twist themselves and their work into forms that couldn't conceivably be thought of as the musings of a network of layered GPUs. A writer may feel it necessary to express greater depth than deep learning; they may feel the pressure to have more hidden layers than those neural networks, and so start to write in ways that collected computation couldn't anticipate.

What might such writing look like? It's not that this tendency will make us more human, but that it will push a performance of an exaggerated version of what we take human-likeness to be. If generative AI does advance further in the future – which is not as certain as it might seem, given the disputes over training data access and the perpetual problem of energy consumption and computational power – then even the most distinctive writers are likely to face the curse. Glancing across sentences, it will enter everyone's mind that the prose appears, possibly, like it could have been rapidly assembled into an order of most probable sequence by a calculating machine. We will find ourselves constantly asking how human the assortment and layout of words seem. We will sometimes find ourselves concluding that a passage of text is just not human enough to be the product of a living being. Seeking to be deeper than deep learning, maybe an abyss will be excavated in the form of extra pre-adverbials.
Human indulgences and eccentricities might present themselves as one option for being less computational. Maybe we will seek something ever more poetic and intertextual, layering in meanings that no computer could feel. Or being so radically non-formulaic as to be unrepeatable (and probably also unreadable) in style. Could AI have written Jacques Derrida's unconventionally formatted 1974 book Glas, I wonder? The writing may, alternatively, take rambling scenic routes to avoid anything that looks like an established or even recently trampled pathway.

It is possible that none of this will work and the practice of writing will be halted by the sheer fear of the AI insult. Every time a sentence is scribbled that looks too AI-like, it will be deleted before it even has a chance of finding its way into a full paragraph. This type of second-guessing, with the writer reflecting on what AI creativity looks like so that they don't accidentally produce something resembling it, is very likely to be a barrier to creative action of any type – whether that is writing, or art, design, photography, filmmaking or music. Of course, the other obvious option is simply to use an AI agent to write for us, which, as well as making us highly efficient and productive beings, also insulates us from the insult, as it isn't actually our writing being called AI-ish. Wherever we turn, the AI imitation of humanness might follow.

In a recent interview with Alex Clark for The Observer, the novelist Colm Tóibín was asked whether AI was going to impact writers. He responded provocatively: 'It's going to be the end of us all. And maybe that's good. In other words, it's very clear that this idea of sensibility, which we go on about a lot – "no machine could ever replace my sensibility, which is so rich and varied, complex, and arising from my experience and from history" – that's all rubbish.
You can actually manufacture that.' There is, he concludes, no such thing as a human sensibility, or even subjectivity, that can't be manufactured. Concerned by the answer, Clark pushed him on this point, wondering if there will be a 'little something that distinguishes' machine from human writing. Given the option to temper his point, Tóibín instead responded simply that 'no, that little thing doesn't exist.' He pictures a more difficult future for writers in which 'the more material they put into the machines, the more the machines will just learn about what sentences sound like, what rhythm is like.' Has the problem of creativity we face today ever been stated more starkly than this? The issue, if Tóibín is correct, is indistinguishability. That quality of humanness can, he imagines, be synthesized.

Now, in some ways Tóibín's answers could be seen to echo a type of hype around AI and its ongoing and inevitable future advancement. Such a position would suggest that, provided with sufficient training data, machine learning and other AI will be ever more capable of replicating human cognition, or at least will get better at imitating the presence of human thinking. This is to imagine the advancement of AI as being mimetic. Yet such progress is not a given. Nor is it the case that the future direction of AI will be concentrated on trying to imitate human forms of knowing and communication. It may well develop into alternative and divergent modes of as yet unimagined cognition. Tóibín's assertion perhaps acts more as a warning than a prediction. The implied caution is that it is futile for us to try to find that 'little something'; it will remain elusive and unprotected, and to seek it out is likely to lead us to contort our writing so as to differentiate it from what we see AI producing.
Whether we use AI or not, all writing is likely to be defined by the mere fact that AI exists, and ideas about what it might come to do will hang like spectres over all creative acts.