AI Chatbots Have Shown They Have An 'Empathy Gap' That Children Are Likely To Miss

By ETV Bharat Tech Team

Published: Jul 16, 2024, 6:07 PM IST

Research by University of Cambridge academic Dr Nomisha Kurian reveals that, when not designed with children's needs in mind, artificial intelligence (AI) chatbots have an "empathy gap" that puts young users at particular risk of distress or harm.

Representational image (Getty Images)

Hyderabad: A new study by the University of Cambridge has proposed a framework for 'child-safe AI', following recent incidents that revealed many children see chatbots as quasi-human and trustworthy.

According to the study, when not designed with children's needs in mind, artificial intelligence (AI) chatbots have an "empathy gap" that puts young users at particular risk of distress or harm.

The research, by a University of Cambridge academic, Dr Nomisha Kurian, urges developers and policy actors to make 'child-safe AI' an urgent priority. It provides evidence that children are particularly susceptible to treating AI chatbots as lifelike, quasi-human confidantes and that their interactions with the technology can often go awry when it fails to respond to their unique needs and vulnerabilities.

The study links that gap in understanding to recent cases in which interactions with AI led to potentially dangerous situations for young users. They include an incident in 2021, when Amazon’s AI voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin. Last year, Snapchat’s My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old.

Both companies responded by implementing safety measures, but according to the study, there is also a need to be proactive in the long term to ensure that AI is child-safe. It offers a 28-item framework to help companies, teachers, school leaders, parents, developers and policy actors think systematically about how to keep younger users safe when they "talk" to AI chatbots.

Dr Kurian conducted the research while completing a PhD on child wellbeing at the Faculty of Education, University of Cambridge. She is now based in the Department of Sociology at Cambridge. She argues that AI has huge potential, which deepens the need to "innovate responsibly".

"Children are probably AI’s most overlooked stakeholders," Dr Kurian quipped. "Very few developers and companies currently have well-established policies on how child-safe AI looks and sounds. That is understandable because people have only recently started using this technology on a large scale for free. But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring," added Dr Kurian.

Kurian’s study examined real-life cases where the interactions between AI and children, or adult researchers posing as children, exposed potential risks. It analysed these cases using insights from computer science about how the large language models (LLMs) in conversational generative AI function, alongside evidence about children's cognitive, social and emotional development.

LLMs have been described as "stochastic parrots": a reference to the fact that they currently use statistical probability to mimic language patterns without necessarily understanding them. A similar method underpins how they respond to emotions.

This means that even though chatbots have remarkable language abilities, they may handle the abstract, emotional and unpredictable aspects of conversation poorly, a problem that Kurian characterises as their "empathy gap". They may have particular trouble responding to children.

Copyright © 2024 Ushodaya Enterprises Pvt. Ltd., All Rights Reserved.