New Delhi: Microsoft-backed OpenAI, the developer of ChatGPT, is offering security researchers up to $20,000 to help the company distinguish between good-faith hacking and malicious attacks, after it suffered a security breach last month. OpenAI has launched a bug bounty programme for ChatGPT and its other products, saying the initial priority rating for most findings will use the Bugcrowd Vulnerability Rating Taxonomy.
"Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries," the AI research company said. "However, vulnerability priority and reward may be modified based on likelihood or impact at OpenAI's sole discretion. In cases of downgraded issues, researchers will receive a detailed explanation, it added.
Security researchers, however, are not authorised to conduct security testing on plugins created by other people. OpenAI is also asking ethical hackers to safeguard confidential OpenAI corporate information that may be exposed through third-party services. Examples in this category include Google Workspace, Asana, Trello, Jira, Monday.com, Zendesk, Salesforce and Stripe.