
OpenAI's New Safety Committee Is Made Up Entirely of Insiders


OpenAI has established a new committee to oversee "critical" safety and security decisions related to the company's projects and operations. But rather than appointing outside observers, OpenAI has staffed the committee with company insiders, including CEO Sam Altman, a decision that is sure to anger ethicists.

According to a post on the company's corporate blog, Altman and the other members of the Safety and Security Committee, which includes chief scientist Jakub Pachocki, Aleksander Madry (who leads OpenAI's "preparedness" team), Lilian Weng (head of safety systems), Matt Knight (head of security), and John Schulman (head of "alignment science"), will be responsible for evaluating OpenAI's safety processes and safeguards over the next ninety days. The committee will then present its findings and recommendations to the full board of directors for review, after which, OpenAI says, it will publish an update on any adopted proposals "in a manner that is consistent with safety and security."

"We expect the resulting systems to take us to the next level of capabilities on our path to [artificial general intelligence]," says OpenAI, noting that it has just begun training its next frontier model. "While we are proud to build and release models that are industry-leading in both capabilities and safety, we welcome a robust debate at this crucial juncture."

Several prominent members of OpenAI's technical staff who worked on safety have left in recent months, and some of these former employees have voiced concerns about what they see as a deliberate de-prioritization of AI safety.

Daniel Kokotajlo, who worked on OpenAI's governance team, resigned in April, writing in a post on his personal blog that he had lost confidence that OpenAI would "behave responsibly" as it releases increasingly capable AI. And Ilya Sutskever, an OpenAI co-founder and the company's former chief scientist, departed in May following a protracted conflict with Altman and Altman's allies, reportedly due in part to Altman's rush to ship AI-powered products at the expense of safety research.

Jan Leike, a former DeepMind researcher who worked on ChatGPT and its predecessor, InstructGPT, while at OpenAI, recently resigned from his role leading safety research, stating in a series of posts on X that he believed OpenAI "wasn't on the trajectory" to get issues related to AI security and safety "right." Gretchen Krueger, an AI policy researcher who recently left OpenAI, echoed Leike's remarks, urging the company to improve its accountability, transparency, and "the care with which [it uses its] own technology."

Apart from Sutskever, Kokotajlo, Leike, and Krueger, Quartz reports that since late last year, at least five of OpenAI's most safety-conscious employees, including former board members Helen Toner and Tasha McCauley, have either resigned or been pushed out. In an opinion piece for The Economist published on Sunday, Toner and McCauley wrote that, with Altman at the helm, they don't believe OpenAI can be trusted to hold itself accountable.

Toner and McCauley stated, "Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives."

Speaking to Toner and McCauley's point, TechCrunch reported earlier this month that OpenAI's Superalignment team, which was responsible for developing ways to govern and steer "superintelligent" AI systems, was promised 20% of the company's computing resources but rarely received even a fraction of that. The Superalignment team has since been disbanded, with much of its work falling to Schulman and to a safety advisory group that OpenAI established in December.

OpenAI has publicly pushed for laws governing AI. At the same time, it has worked to shape that policy, hiring an in-house lobbyist and lobbyists at a growing number of law firms and spending hundreds of thousands of dollars on U.S. lobbying in Q4 2023 alone. More recently, the U.S. Department of Homeland Security named Altman to its newly established Artificial Intelligence Safety and Security Board, which will make recommendations for the "safe and secure development and deployment of AI" across the country's critical infrastructure.

In an effort to avoid the appearance of ethical fig-leafing with the exec-dominated committee, OpenAI has promised to retain third-party "safety, security, and technical" experts to support the Safety and Security Committee's work, including cybersecurity veteran Rob Joyce and former U.S. Department of Justice official John Carlin. Beyond Joyce and Carlin, though, the company hasn't disclosed the size or makeup of this outside expert group, nor has it clarified the limits of the group's power and influence over the committee.

Bloomberg columnist Parmy Olson noted in a post on X that corporate oversight boards like the Safety and Security Committee, similar to Google's AI oversight boards such as its Advanced Technology External Advisory Council, "[do] virtually nothing in the way of actual oversight." Tellingly, OpenAI says it wants to use the committee to address "valid criticisms" of its work; of course, what constitutes a "valid criticism" depends on who makes it.

Altman previously stated that outsiders would play an important role in OpenAI's governance. In a 2016 New Yorker article, he said OpenAI would "[plan] a way to allow wide swaths of the world to elect representatives to a… governance board." That never happened, and at this point, it doesn't look like it will.