For how long can we shut AI out of official writing?
When taxi-hailing technology came to Kenya, there were incidents in which drivers who had embraced the new way of finding customers were attacked. Needless to say, the attacks were suspected to have been organised by traditional operators. The new apps had begun turning the tables on legacy drivers, who largely parked at known spots and waited for customers to walk over and negotiate fares.
Like the conceited dog in Achebe’s Arrow of God that thought it could put out a forest fire with its puny fart, these traditional taxi drivers had convinced themselves that they could scare the new technology out of town. It seems unfathomable in hindsight, but then again, isn’t it strange how deeply ingrained norms resist change? It is something out of Isaac Newton’s First Law of Motion: an object stays at rest or in motion unless acted upon by an external force. In the case of the taxi drivers, the external force was massive, and their business model sank.
Could the same thing eventually happen in writing? Today, if you check literary journals around the world, you will find that stories written using AI tools such as ChatGPT are deeply frowned upon. Most publications explicitly state that any AI-generated submission leads to automatic rejection and the blacklisting of the author.
That has had me thinking lately. When you call for submissions — whether they be short stories, full-length manuscripts, or journal articles — do you call for quality writing or a certain way of writing? Are we likely to soon see the emergence of literary publications that accept AI-generated poems, stories and full-length manuscripts? Or better still, are we soon likely to be treated to artistically sumptuous stories and books generated using new technologies?
Think about it. What is technology? It is just a way of doing things. People abandon old ways when better, faster and more efficient methods emerge. Is writing really an exception? Have we always written the way we do? Barely three decades ago, newsrooms in Kenya still had typewriters. By the mid-2000s, there were none, although a couple of people still thumped their computer keyboards hard enough to remind you of typewriters. By the time we joined the newsroom, we revised the all-important newspaper front page on a printout, in red ink, then made the corrections on-screen. This was years after the arrival of desktops; we still could not trust ourselves to do it fully on-screen. Today, we do everything on-screen.
Even writing itself has been in a continuous state of flux. In prehistoric times, stories, knowledge and cultural values were passed down verbally from generation to generation. Then, around 3400–3200 BCE, in ancient Mesopotamia, writing first appeared in the form of cuneiform, followed by Egyptian hieroglyphs around 3100 BCE.
These early systems used symbols to represent language, allowing people to record information on clay tablets and, among the Egyptians from around 2500 BCE, on papyrus. By the first century BCE, scrolls made from papyrus and parchment were used extensively across the ancient Mediterranean, including the famous Dead Sea Scrolls, written between roughly 150 BCE and 70 CE, which preserved the religious texts of a Jewish community. Fast forward to around 1440, when Johannes Gutenberg invented the movable-type printing press, enabling the mass production of books and making written works more accessible to the public.
It was not until the 1860s that the typewriter was invented, followed by the personal computer in the 1970s. Today, most journals and publishers of literary works have embraced computers as the way of creating text. Yet, slightly over 20 years ago, you could not have submitted your literary work in soft copy; you had to have it typewritten.
Later, literary editors in Nairobi and the region would advise you to send in a soft copy but to make sure you retained the original. In short, we have come a long way, both in writing technology and in what counts as good writing. Today, AI-powered tools are at the forefront of writing technologies. Over the past decade, AI has been integrated into writing platforms, offering suggestions on everything from plot to character development for novels, plays and short stories, and making content generation faster, more efficient and more accessible than ever before.
Make no mistake, though: I am not advocating the violation of institutional rules. Personally, I find it more magical to write than to let some bot churn out its contrived paragraphs for me. That said, if a publisher is dead set against the use of AI, it is only fair and proper that submissions adhere to that requirement.
The question, however, is this: what if, as AI technology develops further, we reach a point where better stories, better novels, better biographies and other works become known to humanity courtesy of AI? What then? To put it in perspective: would I stop reading a great writer, say Lee Child, whose Jack Reacher series I can’t get enough of, if I discovered that he used AI? Which matters more: how the story is written or the greatness of the story itself?
It calls to mind the old debate about whether a work of literature is great because of how it is written or because of the events it explores in the real world. This debate dates back to the early 20th century, when New Criticism, a formalist approach to literary studies, emerged. New Criticism maintains that when reading a novel or any other literary work, one should focus on a close reading of the intrinsic features of the text, such as its structure, language and symbolism, rather than its historical or biographical contexts. According to John Crowe Ransom, I.A. Richards, Cleanth Brooks and other proponents of this rather prescriptive way of enjoying books, you do not need to know the author or the historical events that motivated them. A literary work, they argued, is a self-contained object of analysis. They even dismissed the reader’s response, advocating instead a rigorous examination of the text itself to uncover deeper meanings through its formal elements.
So the question arises: who sets the rules on how people must write or read books? Isn’t reading a magical moment of mental intercourse with a written work that no one can even begin to police? From the days of Mesopotamian cuneiform to date, writing has been a process that takes place in the mind, not in the tools used to write.
If one is able to organise their thoughts better, build more credible characters, create captivating narrative arcs and craft sentences that melt mountains, what does AI have to do with it? Would the algorithms ever be able to conjure such magical words on their own, without being chaperoned by a highly creative mind? And if AI were to help us produce more unputdownable literary works, wouldn’t the world be better off for it?
What do we really want: great books, or the small assuring thought at the back of our minds that a human, not a robot, produced them? So, for how long can we realistically shut AI and other technologies out of mainstream writing? Or are we, like the Nairobi taxi drivers of yore, waiting for the tide to wash away our aversion to change?
The writer is an editorial and publishing consultant. Contact: [email protected]