Elon Musk AI Witness Warns Of AGI Arms Race


When should we start paying attention to AI doomers?


This question lies at the heart of Elon Musk’s attempt to shut down OpenAI’s for-profit restructuring. His lawyers contend that the organization was founded as a nonprofit focused on AI safety but lost its way in the pursuit of wealth. To make the case, they cite old emails and statements from the organization’s founders about the need for a public-spirited counterweight to Google DeepMind.


Stuart Russell, a computer science professor at the University of California, Berkeley, who has spent decades studying AI, was the only expert witness called today to speak specifically about the technology. His role was to give the court background on AI and to establish that the technology poses significant risks.


In March 2023, Russell signed an open letter urging a six-month pause on training the most powerful AI systems. That Musk signed the same letter while launching his own for-profit AI lab, xAI, points to the inconsistencies running through this case.


Russell explained to the jurors and Judge Yvonne Gonzalez Rogers that the creation of artificial general intelligence (AGI) carries serious hazards, from cybersecurity vulnerabilities to misalignment and the winner-take-all nature of the race itself. Ultimately, he testified, the pursuit of AGI and the pursuit of safety are at odds.


After the judge limited Russell’s testimony in response to objections from OpenAI’s attorneys, his broader worries about the existential dangers of unrestrained AI went unaired in court. Russell has nonetheless long advocated stricter government regulation of the field and criticized the arms-race dynamic created by frontier labs worldwide vying to reach AGI first.


On cross-examination, OpenAI’s lawyers established that Russell had not directly assessed the company’s organizational structure or its particular safety practices.


Still, the tension between corporate greed and AI safety concerns is one that this reporter, the judge, and the jurors will all have to weigh. Nearly all of OpenAI’s founders have vehemently warned about the dangers of AI while simultaneously touting its benefits, racing to develop it as quickly as possible, and building for-profit AI businesses under their own control.


From the outside, one obvious problem is that OpenAI realized, not long after its founding, that prospering would require ever-greater investment in computing, and only for-profit investors could supply that money. The founding team’s fear of AGI ending up in the hands of a single group drove the search for funding, which ultimately splintered the team, sparking the current arms race and, eventually, this lawsuit.


The same dynamic is already playing out at the national level: Elon Musk, Sam Altman, Geoffrey Hinton, and others have voiced concerns about AI, which Senator Bernie Sanders has echoed in his call for legislation banning data center construction. “It is unclear why the public should discount everything tech billionaires say except when their words can be recruited to fill gaps in a precarious argument,” said Hodan Omaar of the trade group the Center for Data Innovation, responding to Sanders citing their fears while omitting their hopes.


Both parties are now asking the court to do precisely that: take some of Altman’s and Musk’s arguments seriously while ignoring the portions that don’t support their legal position.