New Delhi: Ever thought that man’s first landing on the moon was stage-managed? Or that the COVID-19 virus was unleashed as a bioweapon? Or, for that matter, that the recent assassination attempt on Donald Trump was a bid to boost his popularity ratings ahead of this year’s US presidential election? Or, closer home, that idols of Lord Ganesha actually drank milk offerings way back in 1995?
Maybe you have harboured such beliefs for a long time despite lingering doubts. Or maybe others implanted such beliefs in your mind. Relax. Artificial Intelligence (AI) can dispel such conspiracy theories and false beliefs, a new study has found.
The study, titled “Durably reducing conspiracy beliefs through dialogues with AI” and published in the journal Science earlier this month, found that conversing with a version of ChatGPT about the conspiracy theories and false beliefs people hold can actually dissuade them from such thoughts. It was conducted by lead author Thomas Costello, assistant professor at American University; David Rand, professor at the Massachusetts Institute of Technology (MIT) Sloan School of Management; and Gordon Pennycook, associate professor of psychology and Himan Brown Faculty Fellow in Cornell University’s College of Arts and Sciences.
Generative AI has often been blamed for the spread of misinformation and fake news. The study, however, found that the opposite can also be true: generative AI can help dispel misinformation from people’s minds.
According to the editor’s summary of the study, amid growing threats to democracy, Costello and his team investigated whether dialogues with a generative AI interface could convince people to abandon their conspiratorial beliefs.
“Human participants described a conspiracy theory that they subscribed to, and the AI then engaged in persuasive arguments with them that refuted their beliefs with evidence,” the summary of the study states. “The AI chatbot’s ability to sustain tailored counterarguments and personalised in-depth conversations reduced their beliefs in conspiracies for months, challenging research suggesting that such beliefs are impervious to change. This intervention illustrates how deploying AI may mitigate conflicts and serve society.”
Costello and his team wanted to know whether large language models (LLMs), which process and generate huge amounts of information in seconds, could debunk conspiracy theories with what Costello describes as “tailored persuasions”. For the study, they used GPT-4 Turbo, the model behind a version of ChatGPT.
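The paper’s code is not reproduced here, but the basic shape of such a dialogue is easy to picture. The minimal Python sketch below shows how a tailored, multi-turn rebuttal exchange could be wired up using OpenAI’s public chat completions API; the system prompt, model choice for illustration, and the run_debunking_dialogue helper are assumptions for this sketch, not the researchers’ actual setup.

```python
# A minimal sketch (not the study's actual code) of a belief-countering
# dialogue via OpenAI's chat completions API. The system prompt and
# helper function below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Respond with polite, "
    "evidence-based counterarguments tailored to the specific claims "
    "they raise. Cite verifiable facts and do not mock the user."
)

def run_debunking_dialogue(initial_belief: str, turns: int = 3) -> None:
    """Hold a short multi-turn dialogue that rebuts the stated belief."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": initial_belief},
    ]
    for _ in range(turns):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the study used GPT-4 Turbo
            messages=messages,
        )
        reply = response.choices[0].message.content
        print(f"AI: {reply}\n")
        # Keep the full history so each counterargument stays tailored
        # to what the participant has actually said so far.
        messages.append({"role": "assistant", "content": reply})
        follow_up = input("You: ")
        messages.append({"role": "user", "content": follow_up})

if __name__ == "__main__":
    run_debunking_dialogue("The 1969 moon landing was staged in a studio.")
```

In the study itself, as the editor’s summary notes, participants first described a conspiracy theory they subscribed to, and the AI then engaged them with evidence-based counterarguments, with belief measured before and after the exchange.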
LLMs are a category of foundation models trained on immense amounts of data, which makes them capable of understanding and generating natural language and other types of content across a wide range of tasks. They have become a household name thanks to their role in bringing generative AI to the forefront of public interest, and they are now the focus of organisations looking to adopt AI across numerous business functions and use cases.
LLMs represent a significant breakthrough in natural language processing (NLP) and AI, and are accessible to the public through interfaces like OpenAI’s ChatGPT, built on models such as GPT-3 and GPT-4, which have garnered the backing of Microsoft.