
Govt Removes Permit Requirement For Untested AI Models; Calls For Labelling Content

The Central government has said that internet companies no longer require approval before launching or deploying their AI models in the country.


By PTI

Published: Mar 16, 2024, 12:13 PM IST

New Delhi: The government has dropped the permit requirement for untested AI models but emphasised the need to label AI-generated content, according to the latest advisory on artificial intelligence (AI) technology.

Instead of requiring permission for AI models under development, the fresh advisory issued by the Ministry of Electronics and IT on Friday evening fine-tuned the compliance requirements under the IT Rules, 2021. "The advisory is issued in supersession of advisory...dated 1st March 2024," the advisory said.

It has been observed that IT firms and platforms are often negligent in undertaking the due diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, according to the new advisory.

The government has asked firms to label content generated using their AI software or platforms and to inform users about the possible inherent fallibility or unreliability of the output generated by their AI tools.

"Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audio-visual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created generated or modified through its software or any other computer resource is labelled....that such information has been created generated or modified using the computer resource of the intermediary," the advisory said.

If any changes are made by a user, the metadata should be configured to enable identification of the user or computer resource that has effected the change, it added.
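
The advisory describes only the intended outcome, a visible label plus metadata that can trace who made a change; it does not prescribe any format or mechanism. The snippet below is a minimal, purely illustrative sketch of how a platform might attach such a label and edit metadata to AI-generated output; the function and field names are assumptions, not anything specified by the Ministry.

    # Illustrative sketch only: the advisory does not prescribe a format or API.
    import hashlib
    import json
    from datetime import datetime, timezone
    from typing import Optional

    def label_ai_output(content: str, model_id: str, user_id: Optional[str] = None) -> dict:
        """Wrap AI-generated content with a visible label and traceability metadata."""
        return {
            "content": content,
            "label": "Generated or modified using AI",  # user-facing label
            "generator": model_id,                      # computer resource that produced the output
            "modified_by": user_id,                     # user who effected a change, if any
            "timestamp": datetime.now(timezone.utc).isoformat(),
            # Hash of the content so later edits can be checked against the labelled original.
            "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        }

    print(json.dumps(label_ai_output("Sample AI-written caption", "example-model-v1"), indent=2))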

After a controversy over a response from Google's AI platform to queries related to Prime Minister Narendra Modi, the government had on March 1 issued an advisory asking social media and other platforms to label under-trial AI models and to prevent the hosting of unlawful content.

The Ministry of Electronics and Information Technology, in the advisory issued to intermediaries and platforms, warned of criminal action in case of non-compliance. The previous advisory had asked the entities to seek government approval before deploying under-trial or unreliable artificial intelligence (AI) models, and to deploy them only after labelling them for the "possible and inherent fallibility or unreliability of the output generated".

Read More

  1. Crime GPT: New AI Model to Help UP Police in Strengthening Security
  2. AI Supercharges Threat Of Disinformation In A Big Year For Elections Globally
  3. Must Take Permission before Launching AI Models, Safeguard Electoral Process: Centre to Big Tech

