AI Can Dispel Conspiracy Theories, False Beliefs: Study

A new study has found that Artificial Intelligence (AI) can be used to dispel conspiracy theories and false beliefs from people’s minds. Conducted by researchers from American University, the Massachusetts Institute of Technology and Cornell University, the study shows how generative AI, when used responsibly, can mitigate conflicts and serve society. ETV Bharat takes you through the study and its findings.

Representational image (Getty Images)

By Aroonim Bhuyan

Published : Sep 23, 2024, 9:06 PM IST

New Delhi: Ever thought that man’s first landing on the moon was stage-managed? Or that the COVID-19 virus was unleashed as a bioweapon? Or, for that matter, that the recent assassination attempt on Donald Trump was a bid to boost his popularity ratings ahead of this year’s US presidential election? Or, closer home, that idols of Lord Ganesha were actually drinking milk offerings way back in 1995?

Maybe you have been harbouring such beliefs for a long time amid lingering doubts. Or maybe others have planted such beliefs in your mind. Relax. Artificial Intelligence (AI) can dispel such conspiracy theories and false beliefs, a new study has found.

The study, titled “Durably reducing conspiracy beliefs through dialogues with AI” and published in the journal Science earlier this month, found that conversations with a version of ChatGPT can actually dissuade people from harbouring such thoughts. The study was conducted by Thomas Costello, lead author and assistant professor at American University; David Rand, professor at the Massachusetts Institute of Technology (MIT) Sloan School of Management; and Gordon Pennycook, associate professor of psychology and Himan Brown Faculty Fellow in Cornell University’s College of Arts and Sciences.

Generative AI has often been blamed for spreading misinformation and fake news. However, the study found that the opposite can also be true: generative AI can also help dispel misinformation from people’s minds.

According to the editor’s summary of the study, amid growing threats to democracy, Costello and his team investigated whether dialogues with a generative AI interface could convince people to abandon their conspiratorial beliefs.

“Human participants described a conspiracy theory that they subscribed to, and the AI then engaged in persuasive arguments with them that refuted their beliefs with evidence,” the summary of the study states. “The AI chatbot’s ability to sustain tailored counterarguments and personalised in-depth conversations reduced their beliefs in conspiracies for months, challenging research suggesting that such beliefs are impervious to change. This intervention illustrates how deploying AI may mitigate conflicts and serve society.”

Costello and his team wanted to know whether large language models (LLMs) such as GPT-4 Turbo, a version of ChatGPT that processes and generates huge amounts of information in seconds, could debunk conspiracy theories with what Costello describes as “tailored persuasions”, according to a report in Science describing the study.

LLMs are a category of foundation models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks. LLMs have become a household name thanks to their role in bringing generative AI to the forefront of public interest, and they are now the focus of organisations seeking to adopt AI across numerous business functions and use cases.

LLMs represent a significant breakthrough in natural language processing (NLP) and AI, and are accessible to the public through interfaces such as OpenAI’s ChatGPT, which is built on models like GPT-3.5 and GPT-4 and backed by Microsoft.

Simply put, LLMs are designed to understand and generate text like a human, in addition to other forms of content, based on the vast amounts of data used to train them. They can infer from context, generate coherent and contextually relevant responses, translate between languages, summarise text, answer questions and even assist in creative writing or code generation tasks.
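
To make the idea concrete, here is a minimal sketch of how a tailored counterargument might be requested from such a model, using the OpenAI Python client. The model name, prompts and example belief below are illustrative assumptions for this article, not the study’s actual materials.

    # Minimal sketch: ask an LLM for a tailored, fact-based counterargument.
    # The prompts and example belief are illustrative, not the study's own.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    belief = ("The 1969 moon landing was staged, because the flag "
              "appears to wave in a vacuum.")

    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the model family used in the study
        messages=[
            {"role": "system",
             "content": "You are a friendly, careful interlocutor. Rebut the "
                        "user's stated belief with specific, verifiable "
                        "evidence, addressing the exact reasons they give."},
            {"role": "user", "content": belief},
        ],
    )
    print(response.choices[0].message.content)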

For the study, the researchers recruited over 2,000 volunteers, each of whom held some conspiracy belief. They were then asked to type their belief into GPT-4 Turbo. Each person shared with the AI what they believed, the evidence they felt supported it, and rated how confident they were that the theory was true. Three rounds of conversation followed. The chatbot - trained on a wide range of publicly available information from books, online discussions and other sources - refuted each claim with specific, fact-based counterarguments.
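
In outline, the procedure can be sketched as a simple loop. The snippet below is an illustration of the protocol as described, not the researchers’ code; the stand-in ask_model function returns a placeholder where the actual study called GPT-4 Turbo.

    # Illustrative outline of the study protocol: state a belief, rate
    # confidence, hold three rounds of dialogue, then rate confidence again.
    def ask_model(history):
        # Stand-in for the GPT-4 Turbo call made in the actual study.
        return "A specific, fact-based counterargument to the points raised."

    def run_dialogue(belief, evidence, pre_confidence):
        history = [f"Belief: {belief}\nEvidence cited: {evidence}"]
        for _ in range(3):                        # three rounds of conversation
            history.append(ask_model(history))    # AI rebuts with evidence
            history.append(input("Your reply: "))  # participant responds
        post_confidence = int(input("Confidence now (0-100): "))
        return pre_confidence, post_confidence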

The AI’s responses reduced participants’ belief in their chosen conspiracy theory by 20 percent on average. This effect persisted undiminished for at least two months. When a professional fact-checker evaluated a sample of 128 claims made by the AI, 99.2 percent were true, 0.8 percent were misleading, and none were false.
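
For a sense of what a 20 percent average reduction means, consider the toy calculation below; the before-and-after ratings are invented for illustration and are not the study’s data.

    # Toy arithmetic: average relative drop in belief ratings (0-100 scale).
    # These numbers are made up for illustration, not the study's data.
    pre = [80, 90, 70, 100, 60]    # confidence before the dialogue
    post = [60, 75, 55, 80, 50]    # confidence after the dialogue

    reductions = [(b - a) / b * 100 for b, a in zip(pre, post)]
    print(f"Average reduction: {sum(reductions) / len(reductions):.1f}%")
    # Prints roughly 20%, the magnitude the study reports on average.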

What helped convince the volunteers was the amenable way in which the AI presented its case. This contrasts with one person trying to talk another out of a conspiracy belief, which can often turn heated and argumentative.

“This research indicates that evidence matters much more than we thought it did – so long as it is actually related to people’s beliefs,” the Cornell Chronicle quoted Pennycook, one of the researchers, as saying. “This has implications far beyond just conspiracy theories: any number of beliefs based on poor evidence could, in theory, be undermined using this approach.”

Lead author Costello said that he was quite surprised at first, “but reading through the conversations made me much less sceptical”.

“The AI provided page-long, highly detailed accounts of why the given conspiracy was false in each round of conversation - and was also adept at being amiable and building rapport with the participants,” he said.

The study concluded that many people who firmly hold seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence.

“Practically, by demonstrating the persuasive power of LLMs, our findings emphasise both the potential positive impacts of generative AI when deployed responsibly and the pressing importance of minimising opportunities for this technology to be used irresponsibly,” the researchers asserted.

Copyright © 2024 Ushodaya Enterprises Pvt. Ltd., All Rights Reserved.