This summary text is fully AI-generated and may therefore contain errors or be incomplete.
The artificial intelligence (AI) developer OpenAI has announced the implementation of its “Preparedness Framework” to assess and anticipate risks. OpenAI has created a specialized team, known as the “Preparedness Team,” which will serve as a link between the safety and policy teams within the company. This collaborative approach aims to safeguard against potential “catastrophic risks” associated with increasingly powerful AI models. OpenAI has committed to deploying its technology only if it is deemed safe.
The new plan involves the advisory team reviewing safety reports, which will then be shared with company executives and the OpenAI board. While the executives hold decision-making authority, the board now has the power to overturn safety decisions. These measures come in the wake of recent changes at OpenAI, including the temporary dismissal and subsequent reinstatement of Sam Altman as CEO. The company has also announced a new board, chaired by Bret Taylor and featuring Larry Summers and Adam D’Angelo.
OpenAI’s launch of ChatGPT in November 2022 generated significant interest in AI, but concerns about the technology’s potential societal impact persist. In response, leading AI developers, including OpenAI, Microsoft, Google, and Anthropic, established the Frontier Model Forum to self-regulate responsible AI development. The Biden Administration has further addressed AI safety by issuing an executive order that sets new safety and transparency standards for companies developing advanced AI models. Prior to the executive order, OpenAI was among the companies invited to the White House to commit to developing safe and transparent AI models.
