
OpenAI’s Growing Legal Problems


The San Francisco-based startup OpenAI is currently facing several difficulties that could jeopardize its standing as a leader in artificial intelligence research, following a year of enjoying widespread recognition.

Some of its issues are the result of decisions made even before ChatGPT launched, most notably its peculiar transformation from an idealistic charity to a large company supported by investments totaling billions of dollars.

It’s too early to know if any of the numerous lawsuits that OpenAI and its lawyers are facing from Elon Musk, the New York Times, and best-selling authors like John Grisham—not to mention the increasing scrutiny from government regulators—will succeed.


OpenAI is not waiting for the legal process to conclude before openly defending itself against the accusations of billionaire Elon Musk, who helped fund the company at its founding and now says it has abandoned its nonprofit mission of advancing humanity in favor of profits.

In its first response since the Tesla CEO sued last week, OpenAI vowed to seek the lawsuit’s dismissal. It also disclosed emails it attributes to Musk suggesting he favored turning OpenAI into a for-profit business and even floated the idea of merging it with the electric-vehicle maker.

Legal experts doubt that Musk’s claims, which center on an alleged breach of contract, will stand up in court. But the dispute has already forced to the surface internal tensions over the company’s unusual governance structure, how “open” it should be about its research, and how it should pursue artificial general intelligence: AI systems that can perform as well as or better than humans across a wide range of tasks.


Why OpenAI abruptly fired CEO and co-founder Sam Altman in November, only to bring him back days later with a new board replacing the one that had ousted him, remains largely unexplained. OpenAI hired the law firm WilmerHale to investigate what happened, but it is unclear how far the inquiry will go or how much of its findings will be made public.

One of the central questions is what OpenAI’s former board of directors meant when it said in November that Altman was “not consistently candid in his communications,” which it said made it harder for the board to carry out its duties. Even though OpenAI is now predominantly a for-profit enterprise, it is still governed by a nonprofit board whose job is to advance the company’s mission.

According to Diane Rulke, a professor of organizational behavior and theory at Carnegie Mellon University, the investigators are most likely paying closer attention to that structure and the internal tensions that resulted in communication breakdowns.

According to Rulke, OpenAI should make at least some of the findings available to the public as it would be “useful and very good practice,” especially in light of the underlying worries about the potential social effects of AI in the future.

“Not just because it was a significant occasion, but also because OpenAI collaborates with numerous businesses and has a broad influence,” Rulke remarked. “Knowing what transpired at OpenAI is very much in the public interest, even though they are a privately held company.”


Antitrust authorities in the US and Europe have expressed interest in investigating OpenAI because of its close financial ties to Microsoft. Microsoft has poured billions of dollars into OpenAI and devoted its massive computing capacity to developing the smaller startup’s AI models. The software giant has also secured exclusive rights to build much of that technology into Microsoft products.

Such partnerships, unlike large corporate mergers, do not automatically trigger government review. But as FTC Chair Lina Khan said in January, the agency wants to know whether these arrangements “allow dominant firms to exert undue influence or gain privileged access in ways that could undermine fair competition.”

The Federal Trade Commission (FTC) is awaiting responses to “compulsory orders” it sent to both companies, as well as to OpenAI rival Anthropic and its cloud computing backers, Amazon and Google, demanding details about the partnerships and the decision-making behind them. The companies have until as early as next week to respond. The deals face comparable scrutiny in the United Kingdom and the European Union.


Prominent novelists, nonfiction writers, the New York Times, and other media organizations have sued OpenAI, alleging that the company violated copyright law in building the AI large language models that underpin ChatGPT. Microsoft is also a defendant in several of the lawsuits. (The AP took a different approach, striking a deal last year that gives OpenAI access to the AP’s text archive for an undisclosed sum.)

OpenAI has maintained that the “fair use” doctrine of copyright law protects its practice of training AI models on vast collections of writing from the internet. Grisham, comedian Sarah Silverman, and Game of Thrones author George R. R. Martin are among the many plaintiffs whose claims of harm federal judges in New York and San Francisco must now sift through.

The stakes are high. The Times, for example, is asking a judge to order the “destruction” of all of OpenAI’s GPT large language models, which form the foundation of ChatGPT and most of the company’s revenue, if they are found to have been trained on its news stories.