Recent news has reported on the release of an artificial intelligence system able to generate realistic stories, poems and articles, as though they were written by a human hand. The system was made available for public use despite fears that it was ‘too dangerous’ because of its alarming potential to create abusive content, fake news and spam. But now the text-generator is making waves for the streamlining effect it could have on the field of journalism. The story has provoked many to question whether the AI system, by removing the need for human intelligence to author news articles, leaves any room for human journalists to curate content for readers. It’s a catchy headline, indeed…except that AI journalism has already been happening for quite a while now, with no catastrophic effect on the field.
Yes, robots have been able to turn data-driven content into insightful narratives for some time. Most large financial institutions subscribe to Bloomberg Professional to receive exactly that service: the system takes in real-time data and outputs written news to keep professionals informed about the market. So, if these artificial systems already exist and are even in widespread use, why has this story now become a news headline?
In my opinion, it’s not so much this specific story that has caught the media’s attention as the ramifications it has for society’s ability to categorize ‘literature’ as the polar opposite of the science of computer programming. The idea of a bot writing an article isn’t, as I’ve shown, anything too revolutionary, but what is considerably more groundbreaking is the conceivable opportunity it creates for an entirely new kind of writing that blurs the line between the two categories, literature and science.
Why does AI journalism bother us so much?
BBC News seems to have posed the question at the heart of the current media interest in automated writing back in 2018, in an article entitled ‘Would you care if this feature had been written by a robot?’ In it, Chris Baraniuk articulates the concerns we instinctively feel about robot authors: the kind of concerns that leap out as soon as the headline makes readers ask themselves ‘do I care if I’m reading something written by a robot?’ and, soon after, ‘why do I care so much?’
My estimation of the answer to the latter of those two questions, perhaps coloured by my background as a literature student, is as follows. Simply put, the idea of robots mastering creativity frightens people. Creativity is inextricably wedded to our conception of literature, and literature itself has historically been associated with the Romantic idea of an author who has a distinctly human talent, individuality or style. If robots can write articles, what stops them from writing novels like James Joyce, sonnets like John Donne or plays like William Shakespeare? And if robots can learn to produce what are traditionally thought of as literary undertakings, then it seems that a level of creativity itself can be automated. The repercussions of this for the circulation and appreciation of literature, even for the entire field of literary criticism, are immense. The question of who owns this new ‘AI literature,’ a body of work that contains original expression but lacks the human authorship we are accustomed to, that is creative in nature but has no readily discernible human intelligence behind it, is a headache both in legal terms and for literary criticism. The traditional ways in which we think about copyright law, intellectual property, or even about literature as an essentially human pursuit may need to be reconsidered, particularly with futurologist Professor Kevin Warwick predicting that robots could, in fact, imminently be able to write novels.
But don’t worry, robots can’t even make a decent salad
But the good news is this: whilst artificial intelligence may be able to replicate creativity, it will never be able to generate it entirely on its own. John Searle, in his paper addressing the question ‘could a machine think?’, argues that AI cannot and will never be able to achieve human-like consciousness. His “Chinese room” thought experiment is a useful way of grasping this point (and I highly recommend looking it up if you’re interested). A more succinct expression, apt in this context, comes from British journalist Miles Kington: ‘Intelligence is knowing that a tomato is a fruit, not a vegetable. Wisdom is knowing not to put a tomato in a fruit salad.’ All current forms of artificial intelligence possess the intelligence to manipulate the information fed to them into an output (that a tomato is a fruit, for example), but they cannot use their own wisdom or original thought to judge as a conscious human being would. In other words, AI would probably commit the frankly embarrassing faux pas of putting a tomato in a fruit salad.
So, we can never truly automate creativity, since creativity requires a level of consciousness that artificial intelligence will never possess. But even if that weren’t the case, the very reason that literature is valuable to us (and why some of us have dedicated four years to studying it) is that it is a social practice of communication from one human to another. The reason that Joyce’s novels, Donne’s sonnets or Shakespeare’s plays are able to affect and move their readers is that they convey elements of the human experience that AI could never successfully recreate of its own accord.
By Victoria M