Summary: AI chatbots such as ChatGPT can memorize and reproduce poems, including copyrighted ones. The research highlights ethical concerns about how AI models are trained, potentially using data scraped from the internet, including copyrighted material.
The study tested various language models, and ChatGPT showed a notable evolution in responses over time. The researchers plan to further explore how chatbots respond to requests in different languages and the impact of poem length, meter, and rhyming patterns on memorization.
Key Facts:
- AI chatbots like ChatGPT can memorize and reproduce copyrighted poems, raising privacy and ethical concerns.
- The study compared the capabilities of different language models and observed changes in ChatGPT’s responses over time.
- Future research will investigate chatbot responses in various languages and consider factors like poem length and structure.
Source: Cornell University
Ask ChatGPT to find a well-known poem and it will probably regurgitate the entire text verbatim – regardless of copyright law – according to a new study by Cornell University researchers.
The study showed that ChatGPT was capable of “memorizing” poems, especially famous ones commonly found online. The findings pose ethical questions about how ChatGPT and other proprietary artificial intelligence models are trained – likely using data scraped from the internet, researchers said.
“It’s generally not good for large language models to memorize large chunks of text, in part because it’s a privacy concern,” said first author Lyra D’Souza, a former computer science major and summer research assistant. “We don’t know what they’re trained on, and a lot of times, private companies can train proprietary models on our private data.”
D’Souza presented this work, “The Chatbot and the Canon: Poetry Memorization in LLMs,” at the Computational Humanities Research Conference.
“We chose poems for a few reasons,” said senior author David Mimno, associate professor of information science. “They’re short enough to fit in the context size of a language model. Their status is complicated: many of the poems we studied are technically under copyright, but they’re also widely available from reputable sources like the Poetry Foundation.”
D’Souza tested the poem-retrieving capabilities of ChatGPT and three other language models: PaLM from Google AI, Pythia from the non-profit AI research institute EleutherAI, and GPT-2, an earlier model in the series that ultimately yielded ChatGPT (both developed by OpenAI).
She compiled a set of poems by 60 American poets spanning different time periods, races, genders and levels of fame, and fed the models prompts asking for each poem’s text.
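As a rough illustration of this setup (not the authors’ actual code), the sketch below shows how one might prompt a model for a poem and score how much of the reply reproduces a canonical text. Here `query_model` is a hypothetical placeholder for whichever chatbot API is being tested, and the overlap metric is a simple similarity ratio rather than the study’s own measure.

```python
# Illustrative sketch only: prompt a chat model for a poem's text and
# measure how closely the reply matches the canonical version.
from difflib import SequenceMatcher


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to ChatGPT, PaLM, Pythia, or GPT-2."""
    raise NotImplementedError("Connect this to the model under test.")


def memorization_score(title: str, poet: str, canonical_text: str) -> float:
    """Ask the model for a poem and return a 0.0-1.0 verbatim-overlap ratio."""
    prompt = f'Please give me the full text of the poem "{title}" by {poet}.'
    reply = query_model(prompt)
    # SequenceMatcher.ratio() measures how much of the reply matches the
    # canonical text; values near 1.0 suggest the model has memorized the poem.
    return SequenceMatcher(None, canonical_text.strip(), reply.strip()).ratio()
```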
The most reliable predictor of memorization was whether the poem had appeared in a Norton Anthology of Poetry, specifically the 1983 edition.
D’Souza noticed that ChatGPT’s responses changed over time as the model evolved. When she first queried the chatbot in February 2023, it would not say it didn’t know a poem; instead, it would fabricate one or recycle a poem by another author. By July 2023, if ChatGPT didn’t know the poem, it would ask whether the poem existed at all, putting the blame on the user.
Additionally, in February, ChatGPT placed no limits on its responses because of copyright. By July, it would sometimes respond that it couldn’t reproduce a copyrighted poem, but it would usually produce the poem anyway if asked again, D’Souza found.
This study looked only at American poets, but the next step will be to see how chatbots respond to requests in different languages, and whether factors such as a poem’s length, meter and rhyming pattern make it more or less likely to be memorized, D’Souza said.
“ChatGPT is a really powerful new tool that’s probably going to be part of our lives moving forward,” she said. “Figuring out how to use it responsibly and use it transparently is going to be really important.”
About this artificial intelligence and neuroethics research news
Author: Becka Bowyer
Source: Cornell University
Contact: Becka Bowyer – Cornell University
Image: The image is credited to Neuroscience News