New Delhi: Penalty provisions can act as a deterrent to the development and dissemination of deepfakes and misinformation, a senior official of global think tank CUTS International said, while calling for the deployment of technological interventions to check the misuse of AI-generated content. CUTS International Director (Research) Amol Kulkarni told PTI that internet users need adequate opportunities to verify the genuineness of content, and that this becomes especially important during the election season, when the role of credible fact-checkers and trusted flaggers is crucial.
He said that while the government's advisory of March 15 removes the permission requirement, it continues to rely on information disclosures to help users make the right choices on the internet. "Though transparency is good, information overload and 'pop-ups' across user journeys may reduce their quality of experience. There is a need to balance the information requirements with other implementable technological and accountability solutions which can address the problem of deepfakes and misinformation," Kulkarni said.
After a controversy over responses from Google's AI platform to queries related to Prime Minister Narendra Modi, the government on March 1 issued an advisory requiring social media and other platforms to label under-trial AI models and prevent the hosting of unlawful content. The Ministry of Electronics and Information Technology, in the advisory issued to intermediaries and platforms, warned of criminal action in case of non-compliance.