
Elon Musk And OpenAI Are Still Slinging Barbs


The billionaire poses a legitimate question in his recently filed breach-of-contract lawsuit against OpenAI: why does the nonprofit organization behave so much like a for-profit one?

Following ChatGPT’s public release and the tech frenzy that followed, OpenAI has been working feverishly to roll out a steady stream of enhancements to its large language models (LLMs). Over the past year, the company has doubled the size of its PR office and expanded its lobbying operations in Washington. Musk is especially troubled by OpenAI’s habit of treating its research as proprietary knowledge to be kept secret as a commercial advantage, even from other researchers.

Originally a charity, OpenAI eventually adopted an unusual corporate structure in which its for-profit business is overseen by a nonprofit board. That structure has survived the controversy surrounding CEO Sam Altman’s November firing and reinstatement, as well as mounting calls for the company to abandon the nonprofit.

Imagine contributing to a nonprofit organization that claims to be dedicated to preserving the Amazon rainforest, when in reality the organization runs a profitable logging business in the region, using the funds it receives to clear the jungle. That, according to the lawsuit, is the story of OpenAI, Inc.

The company’s pursuit of artificial general intelligence (AGI), that is, AI models that outperform humans across a wide range of tasks, is central both to the lawsuit and to OpenAI’s defense of its for-profit subsidiary.

OpenAI responded in a blog post on Tuesday, saying that raising enough money to pursue AGI requires its for-profit arm. “We realized early in 2017 that building AGI will require vast quantities of compute,” the firm’s executives wrote. “We all realized that in order to fulfill our mission, we would require billions of dollars annually—much more than any of us, particularly Elon, had anticipated being able to raise as a non-profit.”

But Musk’s lawsuit argues that AGI is a risky endeavor in and of itself: “[W]here some, like Mr. Musk, see an existential threat in AGI, others see AGI as a source of profit and power.”

For its part, OpenAI asserts that Musk knew limiting access to the models was part of the plan. “Elon understood the mission did not imply open-sourcing AGI,” the blog post states. As Ilya informed Elon, it would make sense to start being less transparent as the company got closer to developing AI, and Elon answered “Yeah” to the statement that “The Open in openAI means that everyone should benefit from the fruits of AI after it’s built, but it’s totally OK to not share the science.” OpenAI claims that by giving customers access to technologies like ChatGPT, it has remained faithful to its goal of enabling the many, not the few, to benefit from AI.

Open-source proponents counter that releasing the models’ source code to the research community is the best way to understand and control the risks, including bias, associated with large frontier models. OpenAI’s blog post states that the company will “move to dismiss all of Elon’s claims” in court.


A recent Edelman survey of consumers across 28 countries reveals some strikingly negative opinions about artificial intelligence (AI) and AI-related businesses. “Rapid innovation offers the promise of a new era of prosperity, but instead risks exacerbating trust issues, leading to further societal instability and political polarization,” Edelman’s researchers write, highlighting a new paradox revealed by their findings. The primary findings that directly pertain to AI are as follows:

Just half of respondents say they trust AI, compared with 75% who say they trust the tech sector.
Over the past five years, global trust in AI firms has fallen from 61% to 53%. In the United States it has dropped 15 points, from 50% to 35%.
Republicans and Independents trust AI businesses at rates of 24% and 38%, respectively. For both Democrats and Republicans there is roughly a 30-point gap between trust in tech companies and trust in AI businesses (66% versus 38% for Democrats, 55% versus 24% for Republicans).
Respondents in Germany, Australia, the Netherlands, Sweden, France, Canada, Ireland, and the United Kingdom reject the growing use of AI by a three-to-one ratio. By contrast, roughly two out of every three respondents in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria, and Thailand support the expanding use of AI.
Only 19% of respondents cite AI’s effect on job security as a fear. Their larger worries include privacy (39%), that AI will diminish the value of humanity (38%), and that AI might be dangerous to humans (37%).
Concerns are significantly higher in the United States, including potential harm to society (61%), inadequate testing and evaluation of AI (54%), and compromised privacy (52%).


Mistral AI, a fast-growing artificial intelligence startup, announced on Tuesday that it will make its large language models accessible through the Snowflake data cloud. These include the two open-source models the company released last year, Mistral 7B and Mixtral 8x7B, as well as its newest Mistral Large and Mistral Medium models.

Snowflake believes users will gain better data security and privacy by being able to access a cutting-edge LLM in the same cloud that holds their enterprise data. Snowflake also said its venture arm is participating in the Paris-based startup’s Series A funding round, though it declined to disclose the size of its stake.

According to Mistral CEO Arthur Mensch, “the majority of the interesting use cases of AI are leveraging the reasoning capacities of large language models like Mistral’s, and some appropriate type of data like that Snowflake is hosting.” “There are some really intriguing synergies there.”

Since its founding in June 2023, Mistral has positioned itself as an open-source LLM provider, but Snowflake data customers must pay for access to its closed, proprietary Mistral Large and Mistral Medium models. “Obviously, because we are developing a business, we have premium offerings like Mistral Large and Mistral Medium,” Mensch says. “And those are the things that we are implementing on Snowflake, so even though it isn’t open-source, it is accessible in places where there wasn’t previously an LLM.”

Snowflake CEO Sridhar Ramaswamy notes that while the company has hosted Meta’s open-source Llama 2 model and made smaller language models available to its data customers, Mistral marks the first time it can offer clients a cutting-edge model like Large. Ramaswamy, who formerly ran Google’s advertising division, had led Snowflake’s AI strategy since joining the company last year, before being named CEO a week ago.

He claims there is “literally no work involved in accessing Mistral through Snowflake. There isn’t even an API—a programmer is needed for an API. There is no labor involved because a language model can be called using a (widely used) computer language like SQL.”

Users will be able to quickly build applications on top of the Mistral models, which will be supported by Snowflake’s Cortex platform. Cortex offers additional data- and LLM-related capabilities, including security, privacy, compliance, and governance.
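To make the SQL-based access Ramaswamy describes concrete, here is a minimal Python sketch that builds the kind of one-line query Cortex accepts. It assumes Snowflake exposes a `SNOWFLAKE.CORTEX.COMPLETE` SQL function and a `mistral-large` model name, which reflects Cortex's published interface but should be verified against Snowflake's current documentation; the prompt and connection details are placeholders, and no real connection is made.

```python
def cortex_complete_sql(model: str, prompt: str) -> str:
    """Build a Cortex COMPLETE query, escaping single quotes for a SQL literal."""
    escaped = prompt.replace("'", "''")  # '' is SQL's escape for a quote inside a string
    return f"SELECT SNOWFLAKE.CORTEX.COMPLETE('{model}', '{escaped}') AS response"

# Calling a hosted Mistral model is then a single SQL statement:
query = cortex_complete_sql("mistral-large", "Summarize last quarter's sales.")
print(query)
# In a real session this string would be run through Snowflake's Python
# connector, e.g. cursor.execute(query), inside the customer's data cloud.
```

The point of the design is visible in the sketch: because the model call is just a SQL expression, it composes with ordinary queries over enterprise tables, with no separate API client or programmer-facing SDK required.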