A recent study reveals an unexpected twist: most people can’t tell whether a poem was written by a human or an AI, and they often prefer the AI-generated versions. Researchers Brian Porter and Edouard Machery of the University of Pittsburgh put this question to the test, and their findings uncover something surprising about our preferences in poetry.
The study shows that AI-written poems frequently outperform human-created ones in terms of reader appeal. This discovery challenges traditional notions of creativity and raises intriguing questions about how we perceive and value art in the age of artificial intelligence.
You decide: Human or AI?
For their experiment, Porter and Machery recruited over 1,600 participants and asked them to read a selection of poetry. Each participant received ten poems: five written by iconic poets such as William Shakespeare, Emily Dickinson, Lord Byron, and T.S. Eliot, and five generated by ChatGPT 3.5 in the styles of these legendary figures. Participants were not told which poems were human-written and which were AI-generated.
The results were counterintuitive. Readers were more likely to believe that the AI-generated poems were written by humans. In fact, the poems that participants were least likely to attribute to a human author often came from the very poets considered masters of the craft. This outcome challenges our assumptions about the uniqueness of human creativity, particularly in the realm of poetry.
Porter and Machery’s findings suggest that people may not be as attuned to the distinctions between human and AI-created poetry as we might think. The subtlety of the AI’s mimicry led many readers to mistake it for authentic human expression, highlighting how convincingly technology can replicate the qualities we associate with great poetry.
This experiment raises intriguing questions about our perceptions of artistic authorship and creativity. If we can’t easily tell the difference between poems written by humans and those generated by AI, it suggests that the line between human and machine creativity may be blurrier than we realize.
What makes a poem feel ‘real’?
Eager to explore further, the researchers conducted a second experiment with 696 new participants. This time, each reader rated poems based on factors like beauty, emotion, rhythm, and originality. The participants were divided into three groups: one group was told the poems were written by humans, another was told they were AI-generated, and the third group received no information at all about the authorship.
The results revealed an interesting pattern. When participants knew a poem was AI-generated, they tended to rate it lower in almost every category. However, the twist came when participants didn’t know the authorship: they often rated the AI-generated poems higher than those written by humans. This surprising outcome suggests that people might be unconsciously more open to the qualities of AI poetry when they aren’t influenced by the knowledge of its origin.
The experiment highlights how context and preconceived notions can shape our evaluations of art. When participants had no bias about the author, they seemed more focused on the qualities of the poem itself, rather than attributing it to a particular source, whether human or machine.
Porter and Machery’s findings challenge conventional ideas about what makes poetry “good” and suggest that our judgments may not always align with the traditional view of human creativity. In fact, the results raise important questions about how we value art in an era where AI can produce works that are indistinguishable from those created by humans.
So, why do we fall for AI poetry?
It turns out that AI poems often come across as more straightforward and easier to enjoy, while poems by famous poets can be more complex and challenging. Instead of appreciating the depth of a classic poet’s work, readers sometimes misread that complexity as incoherence and move on. AI-generated poems, by contrast, deliver their messages in a style that feels familiar and easy to understand, making them more immediately enjoyable.
We tend to assume that we’ll prefer poetry written by humans, expecting it to have more emotional depth or artistic value. However, the simplicity and smoothness of AI’s style often make it more appealing at first glance. Readers may interpret this accessibility as a sign of human authorship, not realizing that the clarity and ease they appreciate actually come from the AI’s design.
The result is a kind of cognitive bias where we associate smooth, clear poetry with human creativity, even when the poem has been generated by a machine. This preference for straightforwardness might explain why AI poems can be more attractive to readers, especially those seeking a quick connection with the text.
In the end, the experiment shows that what we think of as “human” traits in poetry—like depth and emotional complexity—might be less important to us than the ease of understanding and enjoyment. AI’s ability to deliver poems in a more digestible form may make it more appealing, leading us to mistakenly attribute human-like qualities to something created by an algorithm.