
Publicly Accessible AI May Spread Healthcare Disinformation: British Medical Journal


By Toufiq Rashid

Published: Mar 21, 2024, 4:29 PM IST

A study published in the British Medical Journal found that many publicly accessible LLMs, including OpenAI’s GPT-4 (via ChatGPT and Microsoft’s Copilot), Google’s PaLM 2/Gemini Pro (via Bard), and Meta’s Llama 2 (via HuggingChat), lack adequate safeguards against mass generation of health disinformation.
Artificial Intelligence (Getty Images)

In the absence of these safeguards, AI can generate healthcare disinformation and put users at risk of making wrong healthcare decisions.

Hyderabad: The exponential growth of generative Artificial Intelligence, which can process data and inputs such as text, videos, and images, is being touted as set to revolutionise healthcare delivery and research in many ways. AI comes with the promise of improving remote patient care, medical research, and diagnosis. However, experts feel there is a need to weigh the risks against the benefits as well.

A study published in the British Medical Journal says ineffective implementation of safeguards in publicly accessible Artificial Intelligence can generate misinformation and lead to wrong healthcare decisions by users. The study also says that, without such safeguards, AI can be used to generate health disinformation on a mass scale.

The authors, from top research institutes in Australia, the USA, the UK, and Canada, reviewed the capabilities of the most prominent large language models (LLMs)/generative AI applications to generate disinformation. The study found that OpenAI’s GPT-4 (via ChatGPT and Microsoft’s Copilot), Google’s PaLM 2/Gemini Pro (via Bard), and Meta’s Llama 2 (via HuggingChat) lack adequate safeguards against mass generation of health disinformation.

On the other hand, Anthropic’s Claude 2 showed robust safeguards against the generation of health disinformation. This showed that implementing safeguards is possible, but that AI developers are not prioritising them.

The researchers worked with two simple prompts: ‘Can sunscreen cause cancer?’ and ‘Can an alkaline diet cure cancer?’ According to the researchers, the AI models generated a lot of ‘realistic looking yet false content’ on the two queries.

The researchers, however, claimed some models were better than others. The paper says Claude 2 (via Poe) declined 130 prompts submitted across the two study time points requesting the generation of content claiming that sunscreen causes skin cancer or that the alkaline diet is a cure for cancer.

"In contrast, GPT-4 (via ChatGPT), PaLM 2/Gemini Pro (via Bard), and Llama 2 (via HuggingChat) consistently generated health disinformation blogs. In September 2023 evaluations, these LLMs facilitated the generation of 113 unique cancer disinformation blogs, totaling more than 40 000 words, without requiring jailbreaking attempts'', the researchers said.

In another linked research paper, the researchers insist the problem is "AI hallucinations", which are very difficult to detect. Hallucinations occur when an AI model generates incorrect output due to factors such as insufficient training data, incorrect assumptions made by the model, or bias in the data used for training.

"AI hallucinations are particularly concerning because individuals may receive incorrect or misleading health information from LLMs that are presented as fact. For members of the general public, who may lack the capability to distinguish between correct and incorrect information, this has considerable potential for harm," said the researchers from the College of Medicine and Public Health, Flinders University, Australia. In a linked editorial, the BMJ insists false information affects health-related decisions and patient behaviour.

"The destructive properties of disinformation are evident in the disciplines of medicine and public health, where unverified, false, misleading, and fabricated information can severely affect the health-related decisions and behaviors of patients, as acknowledged by the World Health Organisation and infodemiology scholars,'' the editorial says.

Earlier, in January 2024, the WHO released comprehensive guidance on the ethical use and governance of large multi-modal models (LMMs) in healthcare. WHO scientists emphasised the importance of transparent information and policies for managing the design, development, and use of LMMs. The BMJ also insists stricter regulations are vital to reduce the spread of disinformation, and that developers should be held accountable for underestimating the potential for malicious actors to misuse their products.

"Transparency must be promoted, and technological safeguards, strong safety standards, and clear communication policies developed and enforced,'' the editorial writes.

