It would be easy to dismiss Elon Musk's lawsuit against OpenAI as a case of sour grapes.
Musk sued OpenAI this week, accusing the company of breaching its founding agreement and betraying its founding principles. OpenAI, he said, was established as a nonprofit to build powerful AI systems for the benefit of humanity and to release its research freely to the public. Musk claims that OpenAI broke that promise by taking billions of dollars in investment from Microsoft and creating a for-profit commercial subsidiary.
An OpenAI spokesperson declined to comment on the lawsuit. But in a memo sent to employees on Friday, Jason Kwon, the company's chief strategy officer, denied Musk's claims, writing that the allegations may stem from Musk's regrets about no longer being involved with the company, according to a copy of the memo I viewed.
In some ways, the suit smells personal. Musk, who helped found OpenAI in 2015 alongside a group of other tech heavyweights and provided much of its initial funding, left the company in 2018 amid disputes with its leadership. He has seemed angry about being sidelined in the AI conversation as ChatGPT, OpenAI's flagship chatbot, has soared in popularity. His feud with OpenAI's chief executive, Sam Altman, is also well documented.
But amid all the animosity, there is a point worth drawing out, because it illustrates a paradox at the heart of much of today's AI conversation, and a place where OpenAI has been talking out of both sides of its mouth: insisting both that its AI systems are incredibly powerful and that they are nowhere near matching human intelligence.
The claims center on a term known as AGI, or "artificial general intelligence." AGI is notoriously difficult to define, but most people would agree that it means an AI system capable of doing most or all of the things the human brain can do. Altman has described AGI as "the equivalent of a median human that you could hire as a co-worker," while OpenAI itself defines AGI as "highly autonomous systems that outperform humans at most economically valuable work."
Most AI company leaders claim that building AGI is not only possible but imminent. Demis Hassabis, the chief executive of Google DeepMind, said in a recent podcast interview that he believes AGI could arrive as soon as 2030. Altman has said that AGI may be only four or five years away.
Building AGI is OpenAI's explicit goal, and the company has plenty of reasons to want to get there before anyone else. A true AGI would be an incredibly valuable resource, capable of automating vast amounts of human labor and generating enormous profits for its creators. It is also the kind of shiny, audacious goal that investors love to fund, and that helps AI labs recruit top engineers and researchers.
But AGI could also be dangerous if it outsmarts humans, behaves deceptively, or becomes misaligned with human values. OpenAI's founders, including Musk, worried that AGI would be too powerful to be owned by any single entity, and that if they ever came close to building it, they would need to change the control structure around it, to prevent harm and to keep too much wealth and power from concentrating in the hands of a single company.
That is why, when OpenAI partnered with Microsoft, it specifically gave the tech giant a license that applied only to "pre-AGI" technology. (The New York Times has sued Microsoft and OpenAI over their use of copyrighted work.)
Under the terms of the deal, if OpenAI ever builds something that meets the definition of AGI, as determined by OpenAI's nonprofit board, Microsoft's license will no longer apply, and OpenAI's board can decide to do whatever it wants to ensure that OpenAI's AGI benefits all of humanity. That could mean many things, including open-sourcing the technology or shutting it down entirely.
Most AI experts believe that today's state-of-the-art AI models do not qualify as AGI, because they lack sophisticated reasoning skills and frequently make glaring mistakes.
But in his legal filing, Musk makes an unusual argument: that OpenAI has already achieved AGI with GPT-4, the large language model it released last year, and that the company's future technology will qualify as AGI even more clearly.
"On information and belief, GPT-4 is an AGI algorithm, and hence expressly outside the scope of Microsoft's September 2020 exclusive license with OpenAI," the complaint reads.
The argument Musk is making here is a bit convoluted. Essentially, he is saying that because OpenAI has achieved AGI with GPT-4, it should no longer be allowed to license that technology to Microsoft, and its board should be compelled to make the company's technology and research more freely available.
His complaint cites the now-famous "Sparks of AGI" paper, written by a Microsoft research team last year, which argued that GPT-4 showed early hints of general intelligence, including signs of human-level reasoning.
But the complaint also notes that OpenAI's board is unlikely ever to determine that the company's AI systems actually qualify as AGI, because as soon as it did, the company would have to make major changes to the way it deploys and profits from the technology.
Musk also points out that Microsoft, which gained a nonvoting observer seat on OpenAI's board after Altman was briefly ousted during last year's turmoil, has a strong incentive to deny that OpenAI's technology qualifies as AGI. An AGI determination would end Microsoft's license to use the technology in its products, putting enormous profits at risk.
"Given Microsoft's enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.'s new captured, conflicted, and compliant board will have every reason to delay ever making a finding that OpenAI has attained AGI," the complaint reads. "To the contrary, OpenAI's attainment of AGI, like 'Tomorrow' in 'Annie,' will always be a day away."
It's easy to question Musk's motives here, given his track record of dubious litigation. And as the head of a rival AI startup, he has obvious reasons to want to drag OpenAI into a messy lawsuit. But his suit poses a real conundrum for OpenAI.
Like its competitors, OpenAI badly wants to be seen as a leader in the race to build AGI, and it has a vested interest in convincing investors, business partners and the public that its systems are improving at a breakneck pace.
But because of the terms of its deal with Microsoft, OpenAI's investors and executives may not want to admit that its technology actually qualifies as AGI, if and when it does.
That has put Musk in the strange position of asking a jury to decide what constitutes AGI, and whether OpenAI's technology meets that threshold.
The suit has also placed OpenAI in the odd position of downplaying its own systems' capabilities, while continuing to fuel the narrative that a big AGI breakthrough is right around the corner.
"GPT-4 is not an AGI," Kwon wrote in the memo to employees on Friday. "It is capable of solving small tasks in many jobs, but the ratio of work done by a human to the work done by GPT-4 in the economy remains staggeringly high."
The personal feud underlying Musk's claims has led some to dismiss the suit as frivolous. One commentator likened it to suing an ex for remodeling the house after a divorce, and predicted it would be quickly thrown out.
But even if it is dismissed, Musk's suit raises important questions: Who gets to decide when something qualifies as AGI? Are tech companies exaggerating or sandbagging (or both) when they describe their systems' capabilities? And what incentives lie behind the various claims about how close to, or far from, AGI we might be?
A lawsuit from a billionaire with a grudge is probably not the right way to resolve those questions. But they are good ones to ask, especially as advances in AI continue to accelerate.