New Delhi: Google has categorically told developers that all apps, including AI content generators, must comply with its existing developer policies, which prohibit the generation of restricted content like child sexual abuse material (CSAM) and content that enables "deceptive behaviour".
The company announced updates to its developer policies aimed at further raising the quality of apps on Google Play. In line with its commitment to responsible AI practices, Google said it wants to help ensure that AI-generated content is safe for users and that user feedback is incorporated.
"Early next year, we'll be requiring developers to provide the ability to report or flag offensive AI-generated content without needing to exit the app," the tech giant said in a statement.
Developers can use these reports to inform content filtering and moderation in their apps, similar to the in-app reporting system required under the 'User Generated Content' policies.
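The report-then-moderate flow described above can be sketched in a few lines. This is a purely illustrative example, not Google's API: the class, method names, and the report threshold are all hypothetical, showing only the general idea of in-app flags feeding a moderation decision.

```python
# Hypothetical sketch of an in-app reporting flow feeding moderation.
# All names and the threshold are illustrative assumptions, not Google's API.
from collections import defaultdict

REPORT_THRESHOLD = 3  # assumed number of reports before content is hidden


class ModerationQueue:
    """Collects user flags on AI-generated content and hides repeat offenders."""

    def __init__(self) -> None:
        self.reports: defaultdict[str, int] = defaultdict(int)
        self.hidden: set[str] = set()

    def flag(self, content_id: str) -> bool:
        """Record one in-app report; return True if the content is now hidden."""
        self.reports[content_id] += 1
        if self.reports[content_id] >= REPORT_THRESHOLD:
            self.hidden.add(content_id)
        return content_id in self.hidden


queue = ModerationQueue()
for _ in range(3):
    is_hidden = queue.flag("gen-image-42")
print(is_hidden)  # True once the third report arrives
```

A real implementation would also forward such signals to a human review pipeline rather than relying on a raw count alone.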
"As a reminder, apps that generate content using AI must also continue to comply with all other developer policies," Google noted. To safeguard privacy, some app permissions require an additional review by the Google Play team and have additional guardrails.