OpenAI, Google, and Meta Researchers Warn We May Lose the Ability to Track AI Misbehavior

0
8KB

Over 40 scientists from the world’s leading AI institutions, including OpenAI, Google DeepMind, Anthropic, and Meta, have come together to call for more research in a particular type of safety monitoring that allows humans to analyze how AI models “think.” 

The scientists published a research paper on Tuesday that highlighted what is known as chain of thought (CoT) monitoring as a new yet fragile opportunity to boost AI safety. The paper was endorsed by prominent AI figures like OpenAI co-founders John Schulman and Ilya Sutskever as well as Nobel Prize laureate known as the “Godfather of AI,” Geoffrey Hinton. 

In the paper, the scientists explained how modern reasoning models like ChatGPT are trained to “perform extended reasoning in CoT before taking actions or producing final outputs.” In other words, they “think out loud” through problems step by step, providing them a form of working memory for solving complex tasks.

“AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave,” the paper’s authors wrote. 

The researchers argue that CoT monitoring can help researchers detect when models begin to exploit flaws in their training, manipulate data, or fall victim to malicious user manipulation. Any issues that are found can then either be “blocked, or replaced with safer actions, or reviewed in more depth.” 

OpenAI researchers have already used this technique in testing to find cases when AI models have had the phrase “Let’s Hack” in their CoT. 

Current AI models perform this thinking in human language, but the researchers warn that this may not always be the case. 

As developers rely more on reinforcement learning, which prioritizes correct outputs rather than how they arrived at them, future models may evolve away from using reasoning that humans can’t easily understand. Additionally, advanced models might eventually learn to suppress or obscure their reasoning if they detect that it’s being monitored.

In response, the researchers are urging AI developers to track and evaluate the CoT monitorability of their models and to treat this as a critical component of overall model safety. They even recommend that it become a key consideration when training and deploying new models.

Like
Love
Haha
3
Pesquisar
Categorias
Leia mais
CỘNG ĐỒNG
Huyền 2k4 không còn khóc như trẻ nhỏ, khẳng định bản thân đã trưởng thành.
Tết Nguyên Đán là dịp để các gia đình sum vầy, quây quần bên nhau, cùng chuẩn bị những món ăn...
Por fadel_melyssa_JuLS 2025-06-21 07:17:31 0 10KB
Tech
Sam Altman's Deceptions Regarding ChatGPT Are Becoming More Audacious
The AI brain rot in Silicon Valley manifests in...
Por hanoideus 2025-06-11 16:23:03 0 10KB
News
Nữ MC VTV báo tin sinh con lần hai
Ở tuổi 27, MC Ngô Mai Phương không chỉ được biết đến...
Por MoreCanadianThanYou 2025-07-23 15:02:07 0 9KB
News
Ngày mai, chính sách mới về đất đai chính thực có hiệu lực, người dân đừng bỏ lỡ
1. UBND cấp xã được cấp Sổ đỏ cho người dân Theo Điều...
Por DeeCider 2025-06-30 07:50:04 0 9KB
CỘNG ĐỒNG
Huyền 2k4 và Duy Nhỏ khiến mọi người bất ngờ vì sự bí mật của mình
Huyền 2k4 và Duy Nhỏ đã có conMột trong những sự kiện gây chú ý MXH cuối năm 2022...
Por delilahroseE 2025-06-21 11:46:13 0 10KB