OpenAI, Google, and Meta Researchers Warn We May Lose the Ability to Track AI Misbehavior

0
8K

Over 40 scientists from the world’s leading AI institutions, including OpenAI, Google DeepMind, Anthropic, and Meta, have come together to call for more research in a particular type of safety monitoring that allows humans to analyze how AI models “think.” 

The scientists published a research paper on Tuesday that highlighted what is known as chain of thought (CoT) monitoring as a new yet fragile opportunity to boost AI safety. The paper was endorsed by prominent AI figures like OpenAI co-founders John Schulman and Ilya Sutskever as well as Nobel Prize laureate known as the “Godfather of AI,” Geoffrey Hinton. 

In the paper, the scientists explained how modern reasoning models like ChatGPT are trained to “perform extended reasoning in CoT before taking actions or producing final outputs.” In other words, they “think out loud” through problems step by step, providing them a form of working memory for solving complex tasks.

“AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave,” the paper’s authors wrote. 

The researchers argue that CoT monitoring can help researchers detect when models begin to exploit flaws in their training, manipulate data, or fall victim to malicious user manipulation. Any issues that are found can then either be “blocked, or replaced with safer actions, or reviewed in more depth.” 

OpenAI researchers have already used this technique in testing to find cases when AI models have had the phrase “Let’s Hack” in their CoT. 

Current AI models perform this thinking in human language, but the researchers warn that this may not always be the case. 

As developers rely more on reinforcement learning, which prioritizes correct outputs rather than how they arrived at them, future models may evolve away from using reasoning that humans can’t easily understand. Additionally, advanced models might eventually learn to suppress or obscure their reasoning if they detect that it’s being monitored.

In response, the researchers are urging AI developers to track and evaluate the CoT monitorability of their models and to treat this as a critical component of overall model safety. They even recommend that it become a key consideration when training and deploying new models.

Like
Love
Haha
3
Site içinde arama yapın
Kategoriler
Read More
News
Hàng chục triệu sổ hồng giấy sắp tới có thể sẽ không còn, người dân chú ý
Kiến nghị về việc định hướng số hóa Giấy chứng nhận...
By AdmirableFondant132 2025-07-23 09:05:04 0 9K
News
Theo chính sách mới nhất, công chức nghỉ việc hoặc bị cho thôi việc từ 1/7/2025 được hưởng bao nhiêu tiền?
Chế độ cho công chức nghỉ việc mới nhất Nghị định...
By 420mistress 2025-07-13 04:56:04 0 9K
News
Đất nông nghiệp từ nay được phép kết hợp nhiều mục đích mà không cần chuyển đổi
Luật Đất đai năm 2024 có hiệu lực đã mang đến một thay...
By justadiscordmod 2025-08-01 02:57:05 0 8K
News
Sẽ có sân bay nằm hoàn toàn trên mặt nước đầu tiên tại Việt Nam, chi phí dự kiến hơn 9.200 tỷ
Văn phòng Chính phủ vừa có văn bản gửi Bộ trưởng Bộ...
By greeneyedlady73 2025-07-22 06:04:03 0 9K
CỘNG ĐỒNG
Cùng check-in với bầu trời, hai gái xinh khơi mào "đại chiến"
Mạng xã hội - nơi những màn khoe sắc của các cô gái xinh đẹp chưa bao giờ ngừng thu...
By mistywildsassy 2025-08-04 10:21:12 0 9K