Penalty provisions for development, dissemination of deepfakes can create deterrent effect: CUTS

CUTS International Director (Research) Amol Kulkarni said that internet users require adequate opportunities to verify the genuineness of content, which becomes especially important during the election season, when the role of credible fact-checkers and trusted flaggers is crucial.

By PTI

Published: Mar 24, 2024, 8:53 PM IST

Updated: Mar 24, 2024, 10:15 PM IST

New Delhi: Penalty provisions can act as a deterrent to the development and dissemination of deepfakes and misinformation, a senior official of global think tank CUTS International said, while calling for the deployment of technological interventions to check the misuse of AI-generated content. CUTS International Director (Research) Amol Kulkarni told PTI that internet users require adequate opportunities to verify the genuineness of content, which becomes especially important during the election season, when the role of credible fact-checkers and trusted flaggers is crucial.

He said that while the government's March 15 advisory removes the permission requirement, it continues to rely on information disclosures to help users make the right choices on the Internet. "Though transparency is good, information overload and 'pop-ups' across user journeys may reduce their quality of experience. There is a need to balance the information requirements, with other implementable technological and accountability solutions which can address the problem of deepfakes and misinformation," Kulkarni said.

After a controversy over a response from Google's AI platform to queries related to Prime Minister Narendra Modi, the government on March 1 issued an advisory asking social media and other platforms to label under-trial AI models and to prevent the hosting of unlawful content. The Ministry of Electronics and Information Technology, in the advisory issued to intermediaries and platforms, warned of criminal action in case of non-compliance.

The previous advisory had asked these entities to seek government approval for deploying under-trial or unreliable artificial intelligence (AI) models, and to deploy them only after labelling them for the "possible and inherent fallibility or unreliability of the output generated". The Ministry of Electronics and IT on March 15 issued a revised advisory on the use and rollout of AI-generated content.

The IT ministry removed the need for government approval for untested and under-development AI models but emphasised the need to label AI-generated content and to inform users about the possible inherent fallibility and unreliability of the output generated. Kulkarni said that addressing the issue of deepfakes and misinformation will require clarifying the responsibilities of all stakeholders in the internet ecosystem: developers, uploaders, disseminators, platforms and consumers of content.

"Penalty provisions for the development and dissemination of harmful deepfakes and misinformation could also create a deterrent effect. Technological solutions to tag potentially harmful content and shifting the burden on developers and disseminators to justify the use of such content could also be designed," he said.
