Highlights:
- The program’s “rules of engagement” help OpenAI distinguish good-faith hacking from malicious attacks. They require researchers to follow the policy guidelines and disclose the vulnerabilities they find, and prohibit violating users’ privacy, interfering with systems, destroying data, or harming the user experience.
OpenAI LP, the creator of ChatGPT, has partnered with crowdsourced cybersecurity firm Bugcrowd Inc. to launch a bug bounty program aimed at identifying cybersecurity vulnerabilities in its systems.
Security researchers who report vulnerabilities, defects, or security issues in OpenAI’s systems can receive rewards ranging from USD 200 to USD 20,000, with payouts scaling with the severity of the reported bug.
However, the program does not cover model safety problems or non-cybersecurity concerns with the OpenAI API or ChatGPT. As Bugcrowd noted in a blog post, “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach.”
Researchers participating in the program must also adhere to the “rules of engagement,” which help OpenAI distinguish between malicious attacks and hacks conducted in good faith. These include abiding by the policy guidelines, disclosing any vulnerabilities found, and not compromising users’ privacy, interfering with systems, erasing data, or negatively impacting the user experience.
Any vulnerabilities uncovered must likewise be kept confidential until OpenAI’s security team authorizes their disclosure, which the company aims to do within 90 days of receiving a report.
It may seem like stating the obvious, but researchers are also expected not to use extortion, threats, or other pressure tactics to elicit a response. If they do, OpenAI will deny safe harbor for any vulnerability disclosed.
The announcement of the OpenAI bug bounty program has drawn a positive response from the cybersecurity community.
Melissa Bischoping, director of endpoint security research at Tanium Inc., told a leading media outlet, “While certain categories of bugs may be out-of-scope in the bug bounty, that doesn’t mean the organization isn’t prioritizing internal research and security initiatives around those categories. Often, scope limitations are to help ensure the organization can triage and follow up on all bugs, and scope may be adjusted over time. Issues with ChatGPT writing malicious code or other harm or safety concerns, while definitely a risk, are not the type of issue that often qualifies as a specific ‘bug,’ and are more of an issue with the training model itself.”