OpenAI Workers Call for Stronger Whistleblower Protections in AI Companies

A group of current and former employees of OpenAI, the company behind ChatGPT, is urging artificial intelligence (AI) companies to protect employees who raise concerns about the safety of AI technology. The workers have published an open letter asking tech companies to establish stronger whistleblower protections so that researchers can voice concerns about AI risks without fear of retaliation.

The call for action comes as the development of more powerful AI systems continues to progress rapidly, with strong incentives to overlook potential dangers. Daniel Ziegler, a former OpenAI engineer and one of the organizers of the letter, emphasized the need for caution. Ziegler, who played a role in the development of techniques used in ChatGPT, acknowledged that he felt comfortable expressing his concerns during his time at OpenAI. However, he now worries that pressure to commercialize the technology quickly could lead companies to dismiss the risks involved.

Another co-organizer, Daniel Kokotajlo, left OpenAI earlier this year over concerns about the company's commitment to responsible practices, particularly in its pursuit of artificial general intelligence (AGI): systems that surpass human capabilities. Kokotajlo said that OpenAI and other companies have embraced a "move fast and break things" approach, which is inappropriate for technology this powerful and this poorly understood.

OpenAI responded to the letter by highlighting existing channels for employees to voice their concerns, including an anonymous integrity hotline. The company affirmed its commitment to providing capable and safe AI systems and engaging in rigorous debate with various stakeholders.

Thirteen people signed the letter, most of them former OpenAI employees, along with two from Google's DeepMind. Four current OpenAI employees also endorsed it anonymously. The letter specifically calls for an end to "non-disparagement" agreements that can penalize departing employees by revoking their vested equity if they criticize the company. OpenAI recently released all former employees from such agreements following outrage on social media.

Renowned AI scientists Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, who have warned about the risks of future AI systems, support the open letter. OpenAI, after experiencing leadership changes, including the departure of co-founder Ilya Sutskever, has established a new safety committee as it begins developing the next generation of AI technology.

While the broader AI research community has long debated AI's risks and its commercialization, conflicts over both have also fed distrust of OpenAI CEO Sam Altman's leadership. Controversy flared recently when Hollywood star Scarlett Johansson said she was shocked by how closely one of ChatGPT's voices resembled her own, after she had declined Altman's request to lend her voice to the system.

The letter’s signatories, including Ziegler, have connections to effective altruism, a movement focused on mitigating the worst impacts of AI and other causes. The authors’ concerns extend beyond catastrophic future risks and include fairness, product misuse, job displacement, and the potential for AI manipulation without appropriate safeguards.

Ziegler emphasized the opportunity for frontier AI companies, not limited to OpenAI, to increase oversight, transparency, and public trust in the industry.


Question 1:
What is the purpose of the open letter published by current and former OpenAI employees?

Answer 1:
The open letter aims to urge AI companies to provide stronger whistleblower protections for employees who raise concerns about AI risks.

Question 2:
How did Daniel Ziegler describe his experience raising concerns at OpenAI, and what worries him now?

Answer 2:
Ziegler says he felt comfortable expressing his concerns during his time at OpenAI, but he now worries that pressure to commercialize the technology quickly could lead companies to dismiss the risks involved.

Question 3:
What concerns did Daniel Kokotajlo have about OpenAI’s practices?

Answer 3:
Daniel Kokotajlo was concerned about OpenAI's commitment to responsible practices, particularly in its pursuit of artificial general intelligence, and said the industry's "move fast and break things" approach is inappropriate for such powerful technology.

Question 4:
What steps has OpenAI taken in response to the open letter?

Answer 4:
OpenAI pointed to existing channels for employees to express concerns, including an anonymous integrity hotline, and emphasized its commitment to providing safe AI systems and engaging in rigorous debate with stakeholders.

Question 5:
What other controversies surround OpenAI, apart from concerns about AI risks?

Answer 5:
Apart from concerns about AI risks, OpenAI has faced controversy over leadership changes and over Scarlett Johansson's objection that one of ChatGPT's voices closely resembled her own.