The spread of fake news online is a serious threat to society.
While humans have traditionally been the culprits, advancements in AI could soon allow malicious actors to create even more believable lies.
This is where “Grover” comes in. Grover is a unique AI model designed to combat fake news fabricated by other AI systems. The interesting twist is that Grover itself can generate fake news.
Grover was developed by a team of researchers at the University of Washington’s Paul G. Allen School of Computer Science and Engineering, some of whom are affiliated with the Allen Institute for AI (AI2).
The research team says its work reveals a surprising truth: the best way to identify AI-generated fake news is with another AI that can also generate it.
By understanding its own tendencies and those of similar models, Grover can effectively spot fake news articles, even when there are few examples available for comparison. In challenging tests, Grover achieved an impressive 92% accuracy in distinguishing human-written news from AI-generated content, researchers said.
How does it work?
While existing detection methods can spot some AI-generated fake news, with roughly 73% accuracy, a better solution was needed, researchers said.
Grover excels at both creating and detecting “neural fake news,” fabricated content generated by AI systems. It employs discriminators, verification models trained to classify articles, to separate neural fake news from real, human-written reporting. Grover proved to be the best defense against itself, which the researchers say underlines the importance of making powerful generators like Grover public.
“Our research delves deeper, exploring how AI models leave detectable traces (‘exposure bias’) during content creation. Grover, and similar detectors, can exploit these weaknesses to identify fakes,” the team said. The researchers acknowledge the ethical considerations of this technology; by releasing Grover publicly, they hope to pave the way for a future where AI helps combat, not create, fake news.
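To make the “exposure bias” idea concrete, here is a minimal toy sketch, which is not Grover’s actual method or architecture: a tiny bigram language model stands in for the generator, and its greedily decoded output scores anomalously high under the model’s own likelihood, which a detector can exploit with a simple threshold. The corpus, function names, and sample texts are all invented for illustration.

```python
from collections import Counter, defaultdict
import math

# Invented toy "human" corpus; stands in for real news text.
human_corpus = (
    "the committee met on tuesday to review the new budget proposal . "
    "critics argue the plan favors large firms over small businesses . "
    "officials said the report will be published next month ."
).split()

# Fit bigram counts on the human corpus; this is our toy "generator".
bigrams = defaultdict(Counter)
for a, b in zip(human_corpus, human_corpus[1:]):
    bigrams[a][b] += 1

def generate(start, n):
    """Greedy decoding: always pick the most likely next token."""
    out = [start]
    for _ in range(n - 1):
        nxt = bigrams[out[-1]]
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return out

def avg_logprob(tokens):
    """Average per-token log-probability under the bigram model."""
    total, count = 0.0, 0
    for a, b in zip(tokens, tokens[1:]):
        dist = bigrams[a]
        # Smooth unseen pairs with a tiny floor probability.
        p = dist[b] / sum(dist.values()) if dist[b] else 1e-6
        total += math.log(p)
        count += 1
    return total / max(count, 1)

machine_text = generate("the", 10)
human_text = "the plan will favor small firms next month".split()

# Machine text, decoded greedily from the model, scores higher (less
# negative) under that same model than fresh human phrasing does.
print(avg_logprob(machine_text) > avg_logprob(human_text))  # → True
```

The design point this illustrates is the article’s claim in miniature: a detector built from the generator’s own probability estimates has an edge, because decoding strategies bias generated text toward regions the model itself rates as highly likely.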