December 09, 2023

Scandal at OpenAI: Founders Return, Lingering Tension Persists

Last week, the eyes of the world were glued to one of the most significant scandals in the tech industry: OpenAI's board of directors ousted two of the company's founders, Sam Altman and Greg Brockman. Investors pressured the board, demanding that the decision be reversed. Altman laid out his conditions for returning, and the board agreed to them but let the deadline pass. Altman then declined the offer to return and began negotiating a move to Microsoft.

The OpenAI team showed remarkable solidarity: 700 of its 770 employees signed an open letter demanding that the entire board step down, or they would follow Altman to Microsoft, which had promised positions for everyone interested. As a result, five days after the dramatic dismissal, Altman and Brockman returned to OpenAI, and most board members lost their seats. The scandal exposed several behind-the-scenes problems at the AI industry leader and spawned numerous speculations and rumors. Partnerkin analyzes which of them can be trusted.

The Danger of Creating Artificial Intelligence

According to information from Reuters, before Sam Altman's dismissal, several key researchers at the company wrote a letter to the board, warning about the development of artificial intelligence that could pose a threat to humanity.

Sources named this letter as one of the factors leading to Altman's dismissal.

The secret OpenAI project mentioned in the letter was codenamed Q* ("Q-Star"). The new model managed to solve certain mathematical problems. An anonymous Reuters source reported that although Q* performed math only at the level of an elementary school student, its successful test results left researchers very optimistic. Some at OpenAI believe Q* could be a breakthrough on the path to artificial general intelligence (AGI).

Sam did not inform the board about this. Apparently, the implications of the breakthrough were frightening enough that the board attempted to remove Altman and explored a merger with the competitor Anthropic, known for its caution in AI development. This is indirectly confirmed by OpenAI's blog post announcing Altman's dismissal on the grounds that he "was not consistently candid in his communications with the board."

Beyond presenting new tools at DevDay in early November, Altman spoke at the Asia-Pacific Economic Cooperation (APEC) Summit in San Francisco, where he stated that major breakthroughs are closer than everyone thinks:

"Something has qualitatively changed. Now I can talk to this thing. It's like the computer from 'Star Trek' that I was always promised… In the entire history of OpenAI, I've been in the room for a discovery like that only four times, and the most recent of those times was just in the last few weeks."

The next day, the board fired Altman.

The Figure of the Founder

Sam Altman, the head of the company behind ChatGPT, is perceived by many as a messiah: the Steve Jobs of the AI world. But those who know him describe the OpenAI founder in contradictory terms.

You could parachute him onto an island full of cannibals, come back in five years, and he'd be the king. Sam Altman doesn't have to turn a profit to convince investors that he will succeed, with them or without them.

That is what Paul Graham, the founder of Y Combinator and Altman's mentor, said about him back in 2008. In 2014, Graham made the 28-year-old Altman the head of his incubator, surprising many industry leaders. Five years later, he flew from the UK to San Francisco to oust his protégé: Graham feared that Altman was putting his own interests above the organization's.

A former OpenAI employee, machine learning researcher Geoffrey Irving, who now works at Google DeepMind, wrote that after two years of working under Altman he is not inclined to support him:

1. He has always been kind to me.

2. He often lied to me in various situations.

3. He was deceptive and manipulative toward other employees, including close friends of mine, and treated them worse still.

In early October, Eliezer Yudkowsky, an artificial intelligence researcher and a prominent advocate of pausing AI development, published a post on LessWrong. He collected dozens of posts by Annie Altman, Sam's sister, in which she claims to have suffered various forms of abuse from her brother throughout her life.

Yudkowsky was unable to contact Annie, but there are several indirect confirmations that the posts were indeed published by Sam Altman's sister. The article compiles all of Annie's public statements about Sam and about her own mental health in recent years.

It cannot be ruled out that this is part of a campaign aimed at discrediting public figures associated with AI research. Still, the information resurfaced as a topic of discussion against the backdrop of Altman's high-profile dismissal and return to OpenAI. Even Elon Musk, a billionaire and one of OpenAI's first investors, joined in, commenting on a post by Ashley St. Clair in which she expressed outrage that the media had not reported on Annie Altman's accusations:

"Russell Brand is perceived as a threat to the establishment media, and Sam is not."

  • Russell Brand is a comedian who was accused of sexual assault in September 2023.

Conflict of Interests

William LeGate, a serial entrepreneur in the technology sector and CEO of Good Pillow, wrote the following post on his X profile:

As some still don't understand what's going on, let me introduce you to the 'leading' board member, Adam D'Angelo. Adam owns 'Poe,' a company that became irrelevant due to GPTs. Adam was furious that he was not informed in advance. He manipulated board members, claiming there's an existential AI risk and that Sam concealed important information. Adam's goal was a hostile takeover of OpenAI. Adam was the instigator of it all.

Indeed, D'Angelo owns Poe, which provides access to various AI tools through a convenient chatbot. The post LeGate quoted was D'Angelo's own announcement of monetization for custom bots built on ChatGPT. Two weeks after that announcement, OpenAI launched customizable GPT bots and its own app store, effectively replicating D'Angelo's idea.

D'Angelo responded to these accusations by silently blocking LeGate on X.

It is noteworthy that Adam D'Angelo is the only board member of OpenAI who retained his position.

At the time of Sam and Greg's dismissal, the board of directors included:

  • Ilya Sutskever, Chief Scientist of OpenAI;
  • Adam D'Angelo, founder of Poe and CEO of Quora;
  • Tasha McCauley, tech entrepreneur;
  • Helen Toner, director of strategy at Georgetown's Center for Security and Emerging Technology (CSET).

After the triumphant return of the founders, the board was reconstituted:

  • Bret Taylor (chairman), former co-CEO of Salesforce;
  • Larry Summers, former U.S. Treasury Secretary;
  • Adam D'Angelo, CEO of Quora.

Seemingly trying to shore up D'Angelo's reputation, Sam Altman posted on Thanksgiving that he had spent the holiday with Adam.

Sam Altman: "Just spent some great hours with Adam D'Angelo. Happy Thanksgiving."

Users on X reacted to this with replies and memes:

User AutismCapital: "Adam D'Angelo's Thanksgiving photoshoot."

User FreddieRaynolds: "Is he still alive?"

Misogyny Accusations

Let's once again focus on the changed board of directors. Before the fateful scandal, a third of the board consisted of influential women in the tech industry.

One of them, Tasha McCauley, is a renowned robotics engineer and the former CEO of GeoSim Systems, a geospatial technology company.

Another, Helen Toner, is a distinguished AI policy researcher who oversees CSET's research on AI policy. Toner has studied the state of AI in China, which made her an authoritative expert on the national security implications of AI. In a recent CSET article, she emphasized the importance of new methods for testing AI systems, advocated sharing information about AI incidents, and called for international cooperation to minimize risks.

Upon her appointment to the OpenAI board, Toner posted on her blog that the appointment "confirms our commitment to the safe and responsible deployment of technologies to ensure artificial intelligence benefits all of humanity."

Regarding her appointment, both founders, Sam Altman and Greg Brockman, expressed their views.

"I greatly appreciate Helen's reflections on the long-term risks and consequences of AI. I look forward to the impact she will have on our development," said Brockman.

"Helen brings an understanding of global AI development with a focus on safety... We are pleased to add her to our board of directors," Altman stated.

"I firmly believe in the organization's goal — creating artificial intelligence for the benefit of all, and it is a great honor for me to contribute to this mission," commented Helen Toner on her appointment to the OpenAI board.

It may well be that Toner's research on AI safety was a catalyst for the dismissals at OpenAI.

In that work, Toner criticizes OpenAI for releasing ChatGPT at the end of last year. In her view, the release forced other companies to accelerate their own AI efforts and prompted competitors to "bypass internal safety and ethics review processes."

According to The New York Times, Altman complained in an email that research like Toner's, coming from a board member, amounted to criticism that damaged OpenAI. Toner defended her work as an impartial analysis of the challenges of AI development.

Sam Altman represents the commercial ambitions behind AI technology, which is why investors sided with him.

Toner represents the effective altruism movement, which seeks to maximize the benefits of AI and limit its harms.

It seems that the theatrical scandal we witnessed was a confrontation between two ideals, and as always, money emerged victorious.

It is noteworthy that the new board of directors does not include a single woman. Toner, despite her profound knowledge of AI safety, was left out.

Many researchers and AI scientists have expressed their opinions about the new board composition.

Noah Giansiracusa, a mathematics professor at Bentley University:

"The choice of new board members does not look good, especially for the company leading the industry. OpenAI's stated goal is to develop AI that 'benefits all of humanity.' Since half of humanity is women, recent events do not inspire confidence. Helen Toner represented the AI-safety perspective on the board."

Christopher Manning, director of the Artificial Intelligence Laboratory at Stanford:

"The newly formed OpenAI board still appears incomplete... A board composed only of white males is an ambiguous start for such an influential company."

OpenAI – A New Beginning

If nothing else, events like these demystify the world-changing projects around us: after such scandals, people pay more attention to the humanity and imperfections of their founders. Did the "good guys" win this conflict? History will judge. For now, we will be watching the ethics of OpenAI's actions more closely. The future of humanity is in their hands.

For the latest news from the world of AI Tools, follow the AI Tools channel from Partnerkin on Telegram.
