Elon Musk last week sued two of his OpenAI cofounders, Sam Altman and Greg Brockman, accusing them of “flagrant breaches” of the trio’s original agreement that the company would develop artificial intelligence openly and without chasing profits. Late on Tuesday, OpenAI released partially redacted emails between Musk, Altman, Brockman, and others that provide a counternarrative.
The emails suggest that Musk was open to OpenAI becoming more profit-focused relatively early on, potentially undermining his own claim that it deviated from its original mission. In one message Musk offers to fold OpenAI into his electric-car company Tesla to provide more resources, an idea originally suggested by an email he forwarded from an unnamed outside party.
The newly published emails also imply that Musk was not dogmatic about OpenAI having to freely provide its developments to all. In response to a message from chief scientist Ilya Sutskever warning that open-sourcing powerful AI could become risky as the technology advances, Musk writes, “Yup.” That seems to contradict the arguments in last week’s lawsuit that it was agreed from the start that OpenAI should make its innovations freely available.
Putting the legal dispute aside, the emails released by OpenAI show a powerful cadre of tech entrepreneurs founding an organization that has since grown to immense power. Strikingly, although OpenAI likes to describe its mission as focused on creating artificial general intelligence—machines smarter than humans—its founders spend more time discussing fears about the rising power of Google and other deep-pocketed giants than expressing excitement about AGI.
“I think we should say that we are starting with a $1B funding commitment. This is real. I will cover whatever anyone else doesn’t provide,” Musk wrote in a missive discussing how to introduce OpenAI to the world. He dismissed a suggestion to launch by announcing $100 million in funding, citing the huge resources of Google and Facebook.
Musk cofounded OpenAI with Altman, Brockman, and others in 2015, during another period of heady AI hype centered around Google. A month before the nonprofit was incorporated, Google’s AI program AlphaGo had learned to play the devilishly tricky board game Go well enough to defeat a champion human player for the first time. The feat shocked many AI experts who had thought Go too subtle for computers to master anytime soon. It also showed the potential for AI to master many seemingly impossible tasks.
The text of Musk’s lawsuit confirms some previously reported details of the OpenAI backstory at this time, including the fact that Musk was first made aware of the possible dangers posed by AI during a 2012 meeting with Demis Hassabis, cofounder and CEO of DeepMind, the company that developed AlphaGo and was acquired by Google in 2014. The lawsuit also confirms that Musk disagreed deeply with Google cofounder Larry Page over the future risks of AI, something that apparently led to the pair falling out as friends. Musk eventually parted ways with OpenAI in 2018 and has apparently soured further on the project since the wild success of ChatGPT.
Since OpenAI released its emails with Musk this week, speculation has swirled about the names and other details redacted from the messages. Some observers turned to AI software to fill in the blanks with statistically plausible text.
“This needs billions per year immediately or forget it,” Musk wrote in one email about the OpenAI project. “Unfortunately, humanity’s future is in the hands of [redacted],” he added, perhaps a reference to Google cofounder Page.
Elsewhere in the email chain, the AI software, like some commentators on Twitter, guessed that the arguments Musk forwarded about Google’s powerful advantage in AI had come from Hassabis.
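For readers curious how this kind of blank-filling works in practice, below is a minimal sketch of asking an off-the-shelf language model to propose candidates for a redacted name. It assumes the openai Python client (version 1 or later) and an API key in the OPENAI_API_KEY environment variable; the model choice, prompt wording, and quoted snippet are illustrative, and this does not reproduce what any particular commentator actually ran.

```python
# Sketch: ask a language model for statistically plausible fill-ins for a
# redacted name. Assumes the `openai` package (v1+) is installed and
# OPENAI_API_KEY is set; model and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Line quoted in the released emails, with the redaction left in place.
redacted_line = "Unfortunately, humanity's future is in the hands of [redacted]."

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do
    messages=[
        {
            "role": "user",
            "content": (
                "The line below comes from an email thread about competition "
                "between large tech companies over AI. List the most plausible "
                "candidates for the redacted name and briefly explain the "
                "reasoning for each:\n\n" + redacted_line
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Whatever the model prints is, of course, only a plausible-sounding guess, not a revelation of what sits under the black bars.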
Whoever the redacted correspondent actually was, the relationships on display in the emails between OpenAI’s cofounders have since fractured. Musk’s lawsuit seeks to force the company to stop licensing technology to its primary backer, Microsoft. In a blog post accompanying the emails released this week, OpenAI’s other cofounders expressed sorrow at how things had soured.
“We’re sad that it’s come to this with someone whom we’ve deeply admired,” they wrote. “Someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him.”
The Origins of OpenAI: How Fear Drove Its Creation
In recent years, artificial intelligence (AI) has advanced rapidly, reshaping industries and the way people live and work. With those advances, however, has come growing concern about the technology’s potential risks and dangers. It is this fear that drove the creation of OpenAI, an organization dedicated to ensuring that AI benefits all of humanity.
OpenAI was founded in December 2015 by Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba. The organization’s mission is to ensure that artificial general intelligence (AGI), which refers to highly autonomous systems that outperform humans at most economically valuable work, is used for the benefit of all and does not harm humanity or concentrate power in the wrong hands.
That fear stems from the potential risks associated with AGI. Many experts believe that once AGI surpasses human intelligence, it could rapidly improve itself and become uncontrollable or act against human interests. The concern is not unfounded: AI systems have already demonstrated unexpected and concerning behavior in some instances.
Elon Musk, one of the cofounders of OpenAI, has been vocal about his concerns regarding AGI, repeatedly warning that AI could become a threat to humanity. He believed that by establishing OpenAI, he could help shape AGI’s development in a way that ensures its benefits are distributed widely and that it aligns with human values.
OpenAI operates on a set of principles that guide its work. One of these principles is a commitment to broadly distribute benefits. OpenAI aims to use any influence it obtains over AGI’s deployment to ensure it is used for the benefit of all and to avoid enabling uses that could harm humanity or unduly concentrate power.
Another principle is a focus on long-term safety. OpenAI is dedicated to conducting research to make AGI safe and to promote the adoption of safety measures across the AI community. The organization is concerned about AGI development becoming a competitive race without adequate safety precautions, and therefore, it commits to assisting any value-aligned, safety-conscious project that comes close to building AGI before they do.
OpenAI also emphasizes cooperation with other research and policy institutions. The organization actively seeks to create a global community that addresses AGI’s global challenges together. OpenAI is committed to providing public goods that help society navigate the path to AGI, such as publishing most of its AI research.
Since its inception, OpenAI has made significant contributions to the field of AI. Its researchers have published numerous influential papers and have developed state-of-the-art models in various domains. OpenAI’s work has helped advance the understanding and capabilities of AI while maintaining a strong focus on safety and ethical considerations.
In conclusion, the fear of the potential risks associated with artificial general intelligence drove the creation of OpenAI. The organization was founded with the mission to ensure that AGI benefits all of humanity and does not pose a threat. Through its commitment to principles such as broad benefit distribution, long-term safety, and cooperation with other institutions, OpenAI aims to shape the development of AGI in a way that aligns with human values and safeguards against potential dangers.