OpenAI Forms New Committee to Tackle AI Safety Risks

US artificial intelligence (AI) company OpenAI Inc. has announced the establishment of its new Safety and Security Committee to oversee its products’ safe development and deployment.

The committee comprises nine members and is headed by the firm’s Chief Executive, Sam Altman, alongside board Chairman Bret Taylor and board members Adam D’Angelo and Nicole Seligman.

The five other members are engineering executives Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki.

The new panel will advise the full board on critical safety and security matters concerning the ChatGPT maker’s machine learning initiatives and operations.

For its first task, the group will review and further improve OpenAI’s current safety measures and advanced AI development safeguards. The committee will submit its recommendations to the board within 90 days.

The panel’s formation came a few days after the Microsoft-backed company dissolved its “Superalignment” team, which was responsible for ensuring the long-term safety of the firm’s future AI systems.

Created in July 2023, the Superalignment team was jointly led by co-founder and chief scientist Ilya Sutskever and researcher Jan Leike, both of whom left the San Francisco-based company earlier this month.

New Safety Panel Arrives Amid AI Turmoil at OpenAI

OpenAI’s new committee comes amid a major debate about AI safety at the firm, which received significant attention following Leike’s resignation.

Leike criticized the company’s safety culture and processes, which he said had “taken a back seat to shiny products.” The departed researcher is set to continue his superalignment work at the firm’s AI rival, Anthropic PBC.

In addition to high-profile staff exits, the ChatGPT developer has been hit with several lawsuits and faces concerns over the risk of AI technology enabling election misinformation.

OpenAI might also face a lawsuit from actress Scarlett Johansson after its latest GPT-4o update featured the “Sky” voice. Johansson said last week that the voice was “eerily similar” to her own and noted that she had earlier declined Altman’s offer to provide a voice for the chatbot.

The firm has since removed the voice from the platform, clarifying that it hired a different person to voice Sky and did not intend for the voice to sound like Johansson.

The company has already been sued by news groups, including US newspaper The New York Times, for allegedly infringing on the copyrights of their articles to train large language models (LLMs).

OpenAI and Microsoft were accused in late April of illegally using copyrighted content — bypassing paywalled articles and omitting source attribution — to train and feed data into their AI products, such as ChatGPT and Copilot.
