
Google AI flags parent account for potential abuse over nude photos of kid

Tech giant Google's artificial intelligence has reportedly flagged a parent's account for potential abuse over nude photos of their sick kid.

Published : Aug 22, 2022, 11:06 AM IST

San Francisco: Tech giant Google's artificial intelligence (AI) has reportedly flagged a parent's account for potential abuse over nude photos of his sick child. After the father used his Android smartphone to take photos of an infection on his toddler's groin, the tech giant flagged the images as child sexual abuse material (CSAM), The Verge reported, citing the NYT.

The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), spurring a police investigation. The case highlights the difficulty of telling the difference between potential abuse and an innocent photo once it becomes part of a user's digital library, whether on a personal device or in cloud storage.


The incident occurred in February 2021, when some doctors' offices were still closed due to the Covid-19 pandemic. As per the report, Mark (whose last name was not revealed) noticed swelling in his child's genital region and, at the request of a nurse, sent images of the issue ahead of a video consultation. The doctor wound up prescribing antibiotics that cured the infection. Mark received a notification from Google two days after taking the photos, stating that his accounts had been locked due to "harmful content" that was "a severe violation of Google's policies and might be illegal."

Like many internet companies, including Facebook, Twitter, and Reddit, Google has used hash matching with Microsoft's PhotoDNA to scan uploaded images for matches with known CSAM. In 2012, such scanning led to the arrest of a man, a registered sex offender, who had used Gmail to send images of a young girl. (IANS)
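
For illustration only, the following is a minimal, hypothetical sketch of the hash-matching idea described above. It is not Google's or Microsoft's actual implementation: PhotoDNA uses a proprietary perceptual hash that tolerates resizing and re-encoding, whereas this Python example substitutes an ordinary SHA-256 digest and a placeholder set of known hashes simply to show how an uploaded file's fingerprint can be compared against a list of known material.

import hashlib

# Placeholder set of fingerprints of previously identified material
# (hypothetical value; a real system would hold a large, curated database).
KNOWN_HASHES = {
    "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
}

def image_fingerprint(image_bytes: bytes) -> str:
    # Compute a fingerprint for the raw image bytes. A production system would
    # use a perceptual hash (such as PhotoDNA) rather than an exact digest.
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_match(image_bytes: bytes) -> bool:
    # Flag the upload only if its fingerprint appears in the known-hash set.
    return image_fingerprint(image_bytes) in KNOWN_HASHES

Note that exact hashing of this kind only catches bit-identical copies of already-known images; newly taken photos, as in the case described here, would not appear in any such list, which is why detection of new material relies on AI classification rather than hash matching.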
