OpenAI, Google, and Meta Researchers Warn We May Lose the Ability to Track AI Misbehavior

0
8χλμ.

Over 40 scientists from the world’s leading AI institutions, including OpenAI, Google DeepMind, Anthropic, and Meta, have come together to call for more research in a particular type of safety monitoring that allows humans to analyze how AI models “think.” 

The scientists published a research paper on Tuesday that highlighted what is known as chain of thought (CoT) monitoring as a new yet fragile opportunity to boost AI safety. The paper was endorsed by prominent AI figures like OpenAI co-founders John Schulman and Ilya Sutskever as well as Nobel Prize laureate known as the “Godfather of AI,” Geoffrey Hinton. 

In the paper, the scientists explained how modern reasoning models like ChatGPT are trained to “perform extended reasoning in CoT before taking actions or producing final outputs.” In other words, they “think out loud” through problems step by step, providing them a form of working memory for solving complex tasks.

“AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave,” the paper’s authors wrote. 

The researchers argue that CoT monitoring can help researchers detect when models begin to exploit flaws in their training, manipulate data, or fall victim to malicious user manipulation. Any issues that are found can then either be “blocked, or replaced with safer actions, or reviewed in more depth.” 

OpenAI researchers have already used this technique in testing to find cases when AI models have had the phrase “Let’s Hack” in their CoT. 

Current AI models perform this thinking in human language, but the researchers warn that this may not always be the case. 

As developers rely more on reinforcement learning, which prioritizes correct outputs rather than how they arrived at them, future models may evolve away from using reasoning that humans can’t easily understand. Additionally, advanced models might eventually learn to suppress or obscure their reasoning if they detect that it’s being monitored.

In response, the researchers are urging AI developers to track and evaluate the CoT monitorability of their models and to treat this as a critical component of overall model safety. They even recommend that it become a key consideration when training and deploying new models.

Like
Love
Haha
3
Αναζήτηση
Κατηγορίες
Διαβάζω περισσότερα
News
Công ty không trả sổ bảo hiểm xã hội cho người lao động thì có bị phạt không?
Người lao động nghỉ việc mà không nhận lại sổ bảo hiểm...
από NoOne9574 2025-06-18 10:00:05 0 11χλμ.
News
Diễn viên Phương Lan phản ứng ra sao khi bị khán giả 'đẩy thuyền' tái hợp chồng cũ?
Mới đây, trên trang TikTok cá nhân, Phương Lan bất ngờ...
από Liyahilada 2025-06-27 08:42:15 0 10χλμ.
News
Hơn 1 tháng nữa, sẽ có thêm những khoản thu nhập này được miễn thuế thu nhập cá nhân
Từ ngày 1/10, luật Khoa học, Công nghệ và Đổi mới sáng...
από deltiken 2025-08-16 06:01:13 0 9χλμ.
News
Tháng 7 âm lịch là thời điểm thích hợp để 3 con giáp ăn mừng! Được quý nhân phù trợ, gia đình sung túc, đào hoa, phát tài
Sửu: Tiến triển ổn định, cơ hội tràn đầy Những người...
από ExtremeCompote1025 2025-08-22 23:04:09 0 8χλμ.
News
Năm 2025, những ai thuộc trường hợp này sẽ bị thu hồi nhà ở xã hội
Nhà ở xã hội là gì? Nhà ở xã hội là loại hình nhà ở...
από CrazySabrinaaa 2025-07-16 02:48:04 0 8χλμ.