How AI-generated content is upping the workload for Wikipedia editors


As AI-generated slop takes over increasing swathes of the user-generated Internet thanks to the rise of large language models (LLMs) like OpenAI’s GPT, spare a thought for Wikipedia editors. In addition to their usual job of grubbing out bad human edits, they’re having to spend a growing share of their time weeding out AI filler.

404 Media has spoken with Ilyas Lebleu, an editor at the crowdsourced encyclopedia who helped found “WikiProject AI Cleanup.” The group is trying to come up with best practices for detecting machine-generated contributions. (And no, before you ask, AI is useless for this.)

A particular problem with AI-generated content in this context is that it’s almost always improperly sourced. The ability of LLMs to instantly produce reams of plausible-sounding text has even led to whole fake entries being uploaded in a bid to sneak hoaxes past Wikipedia’s human experts.
