A new study examines whether AI could be an automated helpmeet in creative tasks, with mixed results: It appeared to help less naturally creative people write more original short stories — but dampened the creativity of the group as a whole. It’s a trade-off that may be increasingly common as AI tools impinge on creative endeavors.
The study is from researchers Anil Doshi and Oliver Hauser at University College London and University of Exeter, respectively, and was published in Science Advances. And while it’s necessarily limited due to its focus on short stories, it seems to confirm the feeling many have expressed: that AI can be helpful but ultimately offers nothing truly new in creative endeavors.
“Our study represents an early view on a very big question on how large language models and generative AI more generally will affect human activities, including creativity,” Hauser told TechCrunch in an email. “While there is huge potential (and, no doubt, huge hype) for this technology to have big impacts in media and creativity more generally, it will be important that AI is actually being evaluated rigorously — rather than just implemented widely, under the assumption that it will have positive outcomes.”
The experiment had hundreds of people write very short stories (eight sentences or so) on any topic, as long as it was suitable for a broad audience. One group just wrote; a second group could consult GPT-4 for a single story idea of a few sentences (which they could use as much or as little of as they liked); a third could get up to five such story starters.
Image Credits: Hauser, Doshi
Once the stories were written, they were evaluated both by their authors and by a second group that knew nothing about the generative AI twist. These people rated the stories on novelty, usefulness (i.e., likelihood of being published) and emotional enjoyment.
Prior to writing the stories, the participants also completed a word-production task that serves as a proxy for creativity. Creativity is a concept that can't be directly measured, but in this case one's creativity in writing can at least be approximated (without judgment! Not everyone is a born or practiced writer).
“Capturing something so rich and complex as creativity with any measure seems fraught with complications,” wrote Hauser. “There is, however, a rich set of research around human creativity and there is a live debate about how best to capture the idea of creativity in a measure.”
They said their approach was widely used in academia and well documented in other studies.
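The article doesn't name the specific task, but a widely used word-production measure of this kind is the Divergent Association Task, which asks participants for a handful of unrelated nouns and scores how semantically distant those words are from one another. A minimal sketch of that style of scoring is below; the embedding model and the pairwise-distance formula are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative DAT-style creativity proxy (an assumption, not the study's exact method):
# score a participant's word list by the average pairwise distance between word embeddings.
from itertools import combinations

import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def embed(words: list[str]) -> np.ndarray:
    """Return one unit-normalized embedding vector per word (model choice is an assumption)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=words)
    vecs = np.array([item.embedding for item in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


def dat_style_score(words: list[str]) -> float:
    """Average pairwise cosine distance: higher means the words are more semantically spread out."""
    vecs = embed(words)
    dists = [1.0 - float(vecs[i] @ vecs[j]) for i, j in combinations(range(len(vecs)), 2)]
    return float(np.mean(dists))


print(dat_style_score(["cat", "volcano", "algebra", "umbrella", "justice"]))
```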
What the researchers found was that people with lower creativity metrics scored lowest on evaluations of their stories, which arguably validates the approach. They also saw the largest gains when given the opportunity to use a generated story idea (which, it’s worth noting, the vast majority across the experiment did).
Stories by people with a low creativity score who just wrote were reliably rated lower than others on writing quality, enjoyability and novelty. Given one AI-generated idea, they scored higher on every metric. Given the choice of five, they scored even higher.
It really appears that for folks struggling with the creative side of writing (at least within this context and definition), the AI helper genuinely improves the quality of their work. That will probably resonate with many to whom writing does not come naturally, for whom a language model saying "hey, try this" is exactly the prompt they need to finish a paragraph or start a new chapter.
Image Credits: Hauser, Doshi
But what about the people who scored highly on the creativity metric? Did their writing climb to new heights? Sadly, no. In fact, those participants saw little to no benefit at all, or even (though it’s very close and arguably not significant) worse ratings. It seems that those on the creative side produced their best work when they had no AI help at all.
One can imagine any number of reasons why this might be the case, but the numbers do suggest that, in this situation, AI had a zero to negative effect on writers with innate creativity.
But that’s not the part that the researchers were worried about.
Beyond the subjective evaluation of stories by participants, the researchers conducted some analyses of their own. They used OpenAI’s embeddings API to rate how similar each story was to the other stories in its category (i.e. human-only, one AI option, or five AI options).
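A minimal sketch of that kind of within-group similarity check might look like the following; the embedding model and the cosine-to-centroid measure here are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch of a within-group similarity check using OpenAI embeddings.
# The model name and the cosine-to-centroid measure are assumptions, not the paper's exact analysis.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def embed_stories(stories: list[str]) -> np.ndarray:
    """Embed each story and unit-normalize so dot products are cosine similarities."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=stories)
    vecs = np.array([item.embedding for item in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)


def mean_similarity_to_group(stories: list[str]) -> float:
    """Average cosine similarity of each story to its group's centroid:
    higher values mean the group's stories are more alike."""
    vecs = embed_stories(stories)
    centroid = vecs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    return float((vecs @ centroid).mean())


# Hypothetical story lists for the three conditions would go here.
groups = {"human-only": ["..."], "one AI idea": ["..."], "five AI ideas": ["..."]}
for label, stories in groups.items():
    print(label, round(mean_similarity_to_group(stories), 3))
```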
They found that access to generative AI pulled the resulting stories closer to the average for their category; in other words, they were more similar and less varied as a group. The total difference was in the 9% to 10% range, so it's not as though the stories were all clones of one another. And, who knows, this similarity might simply be an artifact of less practiced writers finishing a suggested story, versus more creative writers coming up with one from scratch.
The finding was nevertheless enough to warrant a cautionary note in the conclusions, which I could not condense and so quote in full:
While these results point to an increase in individual creativity, there is risk of losing collective novelty. In general equilibrium, an interesting question is whether the stories enhanced and inspired by AI will be able to create sufficient variation in the outputs they lead to. Specifically, if the publishing (and self-publishing) industry were to embrace more generative AI-inspired stories, our findings suggest that the produced stories would become less unique in aggregate and more similar to each other. This downward spiral shows parallels to an emerging social dilemma: If individual writers find out that their generative AI-inspired writing is evaluated as more creative, they have an incentive to use generative AI more in the future, but by doing so, the collective novelty of stories may be reduced further. In short, our results suggest that despite the enhancement effect that generative AI had on individual creativity, there may be a cautionary note if generative AI were adopted more widely for creative tasks.
It echoes a fear already voiced in visual art and web content: if AI leads to more AI, and what it trains on is just more of itself, the result could be a self-perpetuating cycle of blandness. As generative AI creeps into every medium, studies like these act as counterweights to claims of unbounded creativity or new eras of AI-generated films and songs.
Hauser and Doshi acknowledge that their work is just the beginning — the field is brand new, and every study, including their own, is limited.
“There are a number of paths that we expect future research to pick up on. For instance, implementation of generative AI ‘in the wild’ will look very different than our controlled setting,” Hauser wrote. “Ideally, our study helps guide both the technology and how we interact with it to ensure continued diversity of creative ideas, whether it is in writing, or art, or music.”