OpenAI outlines Artificial Intelligence safety plan, gives board veto power on risky AI

Sam Altman-led OpenAI has expanded its internal safety processes to address the threat of harmful AI amid increased government scrutiny. The company said it will establish a dedicated team to oversee technical work and an operational structure for safety decision-making.

By ETV Bharat English Team

Published : Dec 19, 2023, 12:10 PM IST

Updated : Dec 19, 2023, 1:04 PM IST

Hyderabad: OpenAI, a leading artificial intelligence company backed by Microsoft, has introduced a safety framework for its advanced models, including a provision for the board to overturn safety decisions, as outlined in a plan released on its website on Monday.

“We are creating a cross-functional Safety Advisory Group to review all reports and send them concurrently to Leadership and the Board of Directors. While Leadership is the decision-maker, the Board of Directors holds the right to reverse decisions,” it said late on Monday.

The deployment of OpenAI's latest technology will be contingent on safety evaluations in critical domains like cybersecurity and nuclear threats. Additionally, the company is establishing an advisory group to scrutinise safety reports, forwarding recommendations to both executives and the board. While executives hold decision-making authority, the board retains the ability to reverse such decisions.

“We will run evaluations and continually update 'scorecards' for our models. We will evaluate all our frontier models, including at every 2x effective compute increase during training runs. We will push models to their limits,” said the ChatGPT maker.
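For illustration only: the quoted "every 2x effective compute increase" cadence implies evaluations at fixed doublings of compute rather than at fixed time intervals. Below is a minimal Python sketch of such a trigger, assuming a hypothetical evaluation_points helper; OpenAI has not published an implementation of its evaluation scheduling.

```python
# Hypothetical sketch, not OpenAI code: flag an evaluation each time
# effective training compute doubles since the last completed evaluation,
# matching the "every 2x effective compute increase" cadence quoted above.

def evaluation_points(compute_log, baseline):
    """Yield the effective-compute levels at which an evaluation is due.

    compute_log: ascending effective-compute readings taken during training.
    baseline: effective compute at the last completed evaluation.
    """
    threshold = baseline * 2  # next evaluation is due at a 2x increase
    for compute in compute_log:
        while compute >= threshold:
            yield threshold
            threshold *= 2  # then at the next doubling, and so on

if __name__ == "__main__":
    # Readings from 1 to 10 units with a baseline of 1 trigger
    # evaluations at the doublings 2, 4 and 8.
    print(list(evaluation_points(range(1, 11), baseline=1)))  # [2, 4, 8]
```

Under such a rule the number of evaluations grows with the logarithm of total training compute: scaling a run to roughly 1,000 times its baseline would trigger about ten evaluations, since 2^10 = 1,024.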

“We will also implement additional security measures tailored to models with high or critical (pre-mitigation) levels of risk,” said the company.

OpenAI said it will develop protocols for added safety and outside accountability. "The Preparedness Team will conduct regular safety drills to stress-test against the pressures of our business and our own culture," it added.

Since the launch of ChatGPT a year ago, the potential hazards of AI have been a prominent concern for both AI researchers and the general public. Generative AI technology, while impressing users with its ability to write poetry and essays, has raised safety concerns, particularly over its capacity to spread disinformation and influence human behaviour.

In April, a coalition of AI industry leaders and experts advocated for a six-month hiatus in developing systems surpassing the potency of OpenAI's GPT-4, citing potential societal risks. (with IANS inputs)

