There are now frequent warnings about misinformation posted on Facebook, Twitter and other social media networks. New research from Rensselaer Polytechnic Institute has shown that artificial intelligence can help people make accurate assessments of news, but only at the initial emergence of a news story.
These findings were published in Computers in Human Behavior Reports by an interdisciplinary team of Rensselaer researchers. They discovered that AI-based interventions are generally not effective when it comes to flagging issues with stories on frequently discussed topics about which people have established beliefs and inclinations, such as vaccinations and climate change.
However, when a topic is newly emerging and people have not yet formed an opinion on it, tailored AI-generated advice can lead readers to make better judgments about the legitimacy of news posts. This guidance offers logical reasoning in line with a person's natural thought process, such as an analysis of the accuracy of the facts provided or of how reliable the news source is.
‘It is not enough to build a good tool that will accurately determine if a news story is fake’, said Dorit Nevo, Associate Professor in the Lally School of Management at Rensselaer and one of the lead authors of the paper. ‘People actually have to believe the explanation and advice the AI gives them, which is why we are looking at tailoring the advice to specific heuristics. If we can get to people early on when the story breaks and use specific rationales to explain why the AI is making the judgment, they’re more likely to accept the advice’.
The two-part study, which began in the fourth quarter of 2019, involved nearly 800 participants. The coincidence of the COVID-19 pandemic offered the researchers an opportunity to collect real-time data on a major emerging news event.
‘Our work with coronavirus news shows that these findings have real-life implications for practitioners’, said Nevo. ‘If you want to stop fake news, start right away with messaging that is reasoned and direct. Don’t wait for opinions to form’.
This research paper, ‘Tailoring Heuristics and Timing AI Interventions for Supporting News Veracity Assessments’, is an instance of the New Polytechnic, the collaborative model that encourages partnership across various disciplines in education and research at Rensselaer. The team at Rensselaer included Lydia Manikonda, Assistant Professor at the Lally School of Management; Sibel Adali, Professor and Associate Dean in the School of Science; and Clare Arrington, a doctoral student in computer science. The other lead author was Benjamin D. Horne, a Rensselaer alumnus and Assistant Professor at the University of Tennessee.
By Marvellous Iwendi
Source: Rensselaer News