
13 OpenAI and Google DeepMind employees issue a joint warning letter!
Recently, an open letter signed by 13 current and former employees of OpenAI and Google DeepMind has attracted widespread attention. The letter expresses concern about the potential risks of advanced AI and the current lack of effective oversight of AI companies.
In addition, the open letter warns that AI could entrench existing inequalities, enable manipulation and the spread of misleading information, and that humans could lose control of autonomous AI systems, ultimately threatening human survival.
The letter is endorsed by Geoffrey Hinton, known as the “godfather of artificial intelligence,” Yoshua Bengio, who won the Turing Award for his groundbreaking AI research, and Stuart Russell, a leading scholar in the field of AI safety.
AI technologies have the potential to bring unprecedented benefits to humanity, but they also pose serious risks, and governments around the world, AI experts, and the companies themselves have acknowledged as much. However, AI companies have strong financial incentives to avoid effective oversight, and “we do not believe bespoke structures of corporate governance are sufficient to change this.”
AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. At present, however, they have only weak obligations to share some of this information with governments, and none to share it with civil society.
Current and former employees of these companies are among the few people who can hold them accountable to the public, yet broad confidentiality agreements prevent them from voicing such concerns.
The letter asks leading AI companies to commit to a set of principles, including that they will not enter into or enforce any agreement that prohibits criticism of the company over risk-related concerns, and that they will not retaliate against employees who raise such criticism by withholding vested economic benefits.
The letter also calls for a verifiably anonymous process through which current and former employees can raise risk-related concerns.
Daniel Kokotajlo, a former OpenAI employee, is one of the public signatories of the letter. “Some of us who recently resigned from OpenAI have come together to demand a broader commitment to transparency from the lab,” he wrote in a social media post. Daniel resigned from OpenAI in April, citing, among other reasons, a loss of confidence that the company would act responsibly in building artificial general intelligence (AGI).
Daniel noted that AI systems are not ordinary software; they are artificial neural networks that learn from vast amounts of data. The scientific literature on interpretability, alignment, and control is growing rapidly, but these fields are still in their infancy. The systems that labs like OpenAI are building could bring enormous benefits, but if we are not careful, they could be destabilizing in the short term and catastrophic in the long term.
Daniel said that when he left OpenAI, he was asked to sign a document containing a non-disparagement clause that would prohibit him from saying anything critical of the company. He refused to sign and, as a result, forfeited his vested equity.
When Daniel joined OpenAI, he hoped the company would invest more in safety research internally as its AI systems grew more powerful, but that shift never came. “I’m not the first, and I won’t be the last, to quit when people realize this,” Daniel said.
Meanwhile, Leopold Aschenbrenner, a former member of OpenAI’s Superalignment team, revealed in a public interview the real reason for his dismissal: he had shared an OpenAI security memo with several board members, which displeased OpenAI’s management. Leopold said on social media that achieving AGI by 2027 is highly plausible, and that stricter regulation and more transparent mechanisms are needed to ensure AI is developed safely.
The open letter is just one of several crises OpenAI has faced recently.
Shortly after the release of OpenAI’s GPT-4o model, Ilya Sutskever, OpenAI’s former chief scientist, officially announced his departure. Soon afterward, Jan Leike, co-lead of OpenAI’s Superalignment team, also announced his departure on Twitter. He said he had long disagreed with OpenAI’s leadership over the company’s core priorities, that the Superalignment team had been “sailing against the wind” for the past few months and had run into repeated internal obstacles when trying to improve model safety, and that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”