December 11, 2023

Ilya Sutskever — the Gray Cardinal of OpenAI. Who Stands Behind the Much-Discussed Dismissal of Two Founders of the Company?

Last Friday, November 17th, two co-founders and prominent figures of the startup OpenAI — Sam Altman and Greg Brockman — were ousted from the company. According to several sources close to the situation, the ouster was led by Ilya Sutskever, OpenAI's Chief Scientist and co-founder. In a previous piece, we reconstructed the sequence of events that followed this momentous decision; today, we'll look at the figure standing behind it.


Sutskever was born in 1984 in Nizhny Novgorod. He grew up in Jerusalem, where his family emigrated when he was five, and as a teenager he moved with his family again, this time to Canada. He studied at the University of Toronto under Geoffrey Hinton — a pioneer of artificial intelligence often called the "godfather of AI." In 2012, Hinton and two of his Ph.D. students — Alex Krizhevsky and Ilya Sutskever — built a neural network trained to recognize objects in photos. The project, named "AlexNet," was trained on over a million images and outperformed every image-recognition algorithm of the time by a wide margin. Together, the three founded the company "DNNresearch Inc."

In the photo from left to right: Ilya Sutskever, Alex Krizhevsky, Geoffrey Hinton.

A few months after the launch, Google acquired "DNNresearch" along with all its developments, and the founders were hired by Google to continue their research. Working at the tech giant, Sutskever demonstrated that the same kind of pattern recognition "AlexNet" applied to images could also work with words.

Soon, Ilya attracted the attention of another influential player in the field of artificial intelligence — Elon Musk. The billionaire had long warned about the potential danger AI poses to humanity. In particular, he worried that Google co-founder Larry Page was dismissive of AI safety, and he called the concentration of AI talent at Google risky.

Agreeing with Musk's ideas, Sutskever left Google in 2015 to become a co-founder and Chief Scientist of OpenAI. The startup was envisioned as a non-profit organization that Musk considered a counterbalance to Google in the field of artificial intelligence.

"It was one of the hardest recruiting battles I've ever fought, but it turned out to be the key to OpenAI's success," Musk said.

Musk added that Ilya is not only intellectually gifted but also a "good person with a kind heart."

At OpenAI, Sutskever played a crucial role in developing the language models GPT-2 and GPT-3 and the text-to-image model DALL-E.

At the end of 2022, ChatGPT was released; it attracted 100 million users in less than two months and sparked the AI boom. Sutskever told MIT Technology Review that the chatbot gave people a sense of what is possible with artificial intelligence, even if it sometimes disappoints with incorrect answers.

What preceded the dismissal of OpenAI founders?

Earlier this year, Geoffrey Hinton left Google. The "godfather of AI" warned that companies in the field were aggressively building generative AI tools, such as OpenAI's ChatGPT, without paying attention to the associated risks.

"...what's going on in these systems is much more complicated than what's happening in the human brain. Look at what was happening in AI research five years ago and what's happening now. Imagine how rapidly the changes will occur in the future. It's scary. It's hard to imagine how to prevent malevolent uses of it..."

Sutskever himself has focused on the potential dangers of AI. In his view, superintelligence could emerge within 10 years, and the question of how to regulate it needs to be addressed in advance.

Ilya Sutskever, tweet from February 10, 2022 (9 months before the public release of ChatGPT): "it may be that today's large neural networks are slightly conscious."

"It's important to ensure that any artificial intelligence created eventually doesn't become harmful," Sutskever said in an interview with Technology Review.

In July, Ilya and his colleague Jan Leike published a post on OpenAI's blog. They warned that while superintelligence could help "solve the most pressing global challenges," it could also be very dangerous, leading to the disempowerment of humanity or even human extinction.

From the blog post by Sutskever and Leike:

“We need scientific and technical breakthroughs to control artificial intelligence systems that are much smarter than us. To address this problem over the next four years, we are forming a new team led by Ilya Sutskever and Jan Leike, allocating 20% of our existing computing resources for these tasks. We are looking for researchers and engineers to join us.”

Last month, Ilya Sutskever, who usually avoids media attention, gave several extensive interviews: one to MIT Technology Review and another to The Guardian. In these interviews, he reiterated his concerns regarding the safety of artificial intelligence development.

According to anonymous sources, a key factor influencing the change in leadership at OpenAI was precisely the question of safety. Sutskever disagreed with Altman on how quickly OpenAI's products were being commercialized and what steps should be taken to mitigate the potential harm of the technology.

According to The Information, after Altman and Brockman were dismissed, OpenAI employees asked Ilya Sutskever whether this was a "coup" or a "hostile takeover" by other board members.

"You could call it that. And I can understand why you chose that word. But I disagree with it. It was decided by the board, fulfilling its duty to the mission of a non-profit organization. The mission is to create artificial intelligence at OpenAI that will benefit all of humanity."

Unexpected Consequences

On Sunday, November 19th, investors led by Microsoft CEO Satya Nadella pressured the OpenAI board, backing Sam Altman and insisting on his return to the company. The board entered negotiations. Altman's condition was that the entire board leave OpenAI; the board accepted it but missed the deadline Altman had set. On the morning of November 20th, Sam announced that he was rejecting the offer to return to OpenAI as CEO. Instead, he and Greg Brockman would join Microsoft's AI research team.

Following this news, 505 of OpenAI's 700 employees signed an open letter to the company's leadership demanding the board's resignation.

"Your actions indicate that you are no longer able to manage OpenAI... We, the undersigned, can leave the company and join Microsoft along with Sam Altman and Greg Brockman. Microsoft has confirmed that there will be places for everyone in the new division. If the entire board of directors does not resign, we will accept their offer."

Ilya Sutskever is the 12th signatory on the letter.

In his X profile, he later posted the following:

Ilya Sutskever: "I deeply regret my involvement in the actions of the board. I did not want to harm OpenAI. I love everything we have created together, and I will do everything in my power to reunite the company."

Sam Altman and Greg Brockman quoted the post, each replying with heart emojis.

Was Ilya Sutskever the ideological driver of this dismissal who then took fright at the unpredictable consequences for the company, or was he merely a pawn in the board's larger game? We are unlikely to learn the behind-the-scenes details of what happened — at least until Netflix makes a biopic about this story.

UPD: 700 of the company's 770 employees have now signed the open letter to management demanding the board's resignation. OpenAI investors are considering taking the board to court.
