OpenAI Introduces o1 Model, Forms Safety Board

News summary

OpenAI has introduced the o1 model with enhanced reasoning capabilities, but users are prohibited from asking about its 'reasoning trace' and risk a ban if they do. Concurrently, OpenAI has restructured its Safety and Security Committee into an independent board oversight committee with the authority to delay AI model launches over safety concerns. Chaired by Zico Kolter, the committee includes notable figures such as Adam D'Angelo and Paul Nakasone. The restructuring aims to address past criticisms of OpenAI's safety practices and to increase transparency and collaboration with external organizations. The move mirrors Meta's approach with its Oversight Board, emphasizing independent governance and enhanced security measures.

Story Coverage
Bias Distribution: 60% Left, 40% Center
Coverage Details
Total News Sources: 5
Left: 3
Center: 2
Right: 0
Unrated: 0
Last Updated: 63 days ago
Daily Index: 21 (Serious)