OpenAI's New Safety Council to Guide Critical Decisions for AI Model Training
Tuesday, 28 May 2024, 14:07

OpenAI Establishes Safety Council
US artificial intelligence company OpenAI has formed a safety council to advise on critical safety and security decisions as it begins training its next artificial intelligence model.
Committee for Responsible AI Development
The new council will focus on ensuring the responsible development and deployment of AI technology, making recommendations on safety and security concerns across OpenAI's projects.