Fear of destructive AI resurfaces after the return of Sam Altman


“The board can fire me,” Sam Altman told Bloomberg in June this year, referring to OpenAI’s unusual governance structure. Five months later, on November 17, that is exactly what happened. The company’s structure allows anyone, including the CEO and chairman, to be fired for actions that, in the board’s view, “harm the interests of humanity.” This arrangement is considered unique: it lets OpenAI remain a non-profit AI research organization while still raising the investment capital needed to build expensive computing systems and attract talent.

“But it turns out, they still can’t fire him. Too bad,” commented Toby Ord, a researcher at Oxford University and a prominent voice among experts warning that AI could pose existential risks to humanity.

From a disturbing letter

Why OpenAI’s board of directors ousted Altman remains a mystery. Some sources say Altman spent too much time on side projects and was too dependent on Microsoft, and that the board wanted to stop him from commercializing artificial general intelligence (AGI), a superintelligent system that could be harmful to humans.

Meanwhile, Reuters quoted an internal source as saying that on November 23, OpenAI CTO Mira Murati told staff that a confidential letter OpenAI researchers had sent to the board, warning of the risks posed by a project called Q* (Q-Star), had preceded the board’s abrupt decision to fire Altman.

Sam Altman, CEO of OpenAI. Photo: TechCrunch

The detailed content of the letter has not been disclosed, but experts believe OpenAI has achieved a major breakthrough on the path to AGI: a “superintelligent” AI model that can surpass humans at most economically valuable tasks.

An internal source later told Reuters that Q* was still performing only at “elementary school level,” but that the outlook was wide open because its self-learning capability was improving by the hour. Another source said Q*’s algorithm could “surpass high school math,” and OpenAI researchers are optimistic about the project’s success.

Current AI models are very good at writing and translating language, but they can give many different answers to the same question. Depending on the data they were trained on, they may invent facts, and their answers are not always correct. With genuine mathematical understanding, however, an AI could calculate a single correct answer, which would imply stronger reasoning ability, closer to human intelligence.
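A toy sketch can make that contrast concrete. The snippet below is not OpenAI’s code: it uses a small hypothetical next-token distribution to show why sampled text generation can return different answers on repeated runs, while a direct computation always returns one verifiable result.

```python
# A minimal sketch (hypothetical, not OpenAI's code): a toy next-token
# distribution stands in for a language model. Sampling at temperature > 0
# gives varied answers; arithmetic gives one verifiable answer.
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample one token from a softmax over the logits."""
    scaled = [v / temperature for v in logits.values()]
    peak = max(scaled)
    weights = [math.exp(v - peak) for v in scaled]  # numerically stable softmax
    r = random.random() * sum(weights)
    for token, w in zip(logits, weights):
        r -= w
        if r <= 0:
            return token
    return token  # floating-point fallback

# Toy distribution over continuations of the prompt "2 + 2 = "
logits = {"4": 2.0, "four": 1.0, "5": 0.5}

# Sampled decoding: repeated runs can disagree, e.g. ['4', 'four', '4', '5', '4']
print([sample_token(logits) for _ in range(5)])

# Greedy decoding (argmax) is deterministic, but only as correct as the model.
print(max(logits, key=logits.get))

# Direct computation: a single, always-correct answer.
print(2 + 2)
```

The point is only that sampling injects randomness into a model’s answers, whereas a mathematical result can be checked independently; that is what “a single correct answer” means here.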

Scientists believe AGI’s mathematical reasoning could be applied to new research. Unlike a conventional computer that can only perform a limited set of predefined calculations, an AGI could generalize, learn, and even “understand” what it is doing.

The confidential letter mentioned the “potential danger and power” that Q* could pose. It also reportedly flagged research by a team known as the “AI Scientist” group. Several internal sources have confirmed the group’s existence; its mission is to explore ways of optimizing existing AI models to improve their reasoning and, ultimately, to perform scientific work.

In his final days before being ousted, Altman also hinted that OpenAI had made great progress. “Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. Getting to do that is the professional honor of a lifetime,” Altman said at the Asia-Pacific Economic Cooperation (APEC) forum on November 16 in San Francisco.

A day later, he was fired by the OpenAI board via Google Meet.

A new, more dangerous OpenAI

Altman’s return to OpenAI has drawn mixed reactions. With a tool like Q* in hand and no old board left to block him, the CEO may push to commercialize products even faster.

Altman has repeatedly warned of the risk of AI harming the world, while at the same time raising billions of dollars to develop superintelligent models. The contradiction makes powerful figures in Silicon Valley uncomfortable.

“Altman’s emergence as the face of ‘apocalyptic AI’ makes many people concerned about the risks, especially as AI becomes more powerful and accessible,” Wired commented.

According to Carnegie Mellon University professor Rayid Ghani, Altman says “regulate us, or we will destroy the world” on the one hand, while striving to make super AI even more powerful on the other. “I think it completely distracts us from the real risks that are happening, like job displacement, discrimination, transparency and accountability,” Ghani told Wired.

OpenAI said the new board of directors remains “interim” for now and that it plans to add more members. Although Altman said he “will keep OpenAI operating as before,” some experts suspect that with no one left standing in his way, he will move closer to Microsoft or pursue his own projects aimed at commercialization rather than the non-profit mission.

“Now, things are no longer a competition between laboratories, where the people who set them up really care about the meaning of what they can do,” observed Oxford University researcher Toby Ord. “The near future will be a race between some of the world’s largest AI companies. In this respect, I think things get quite dangerous.”

Bao Lam
