The End of Bullshit AI

In every conversation about AI, you hear the same refrains: “Yeah, but it’s amazing,” quickly followed by, “but it makes stuff up,” and “you can’t really trust it.” Even among the most dedicated AI enthusiasts, these complaints are legion.

During my recent trip to Greece, a friend who uses ChatGPT to help her draft public contracts put it perfectly. “I like it, but it never says ‘I don’t know.’ It just makes you think it knows,” she told me. I asked her if the problem might be her prompts. “No,” she replied firmly. “It doesn’t know how to say ‘I don’t know.’ It just invents an answer for you.” She shook her head, frustrated that she was paying for a subscription that wasn’t delivering on its fundamental promise. For her, the chatbot was the one getting it wrong every time, proof that it couldn’t be trusted.

It seems OpenAI has been listening to my friend and millions of other users. The company, led by Sam Altman, has just launched its brand-new model, GPT-5, and while it’s a significant improvement over its predecessor, its most important new feature might just be humility.

As expected, OpenAI’s blog post heaps praise on its new creation: “Our smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone’s hands.” And yes, GPT-5 is breaking new performance records in math, coding, writing, and health.

But what’s truly noteworthy is that GPT-5 is being presented as humble. This is perhaps the most critical upgrade of all. It has finally learned to say the three words that most AIs—and many humans—struggle with: “I don’t know.” For an artificial intelligence often sold on its god-like intellect, admitting ignorance is a profound lesson in humility.

GPT-5 “more honestly communicates its actions and capabilities to the user, especially for tasks that are impossible, underspecified, or missing key tools,” OpenAI claims, acknowledging that past versions of ChatGPT “may learn to lie about successfully completing a task or be overly confident about an uncertain answer.”

By making its AI humble, OpenAI has just fundamentally changed how we interact with it. The company claims GPT-5 has been trained to be more honest, less likely to agree with you just to be pleasant, and far more cautious about bluffing its way through a complex problem. This makes it the first consumer AI explicitly designed to reject bullshit, especially its own.

Less Flattery, More Friction

Earlier this year, many ChatGPT users noticed the AI had become strangely sycophantic. No matter what you asked, GPT-4o would shower you with flattery, emojis, and enthusiastic approval. It was less a tool and more a life coach, an agreeable lapdog programmed for positivity.

That ends with GPT-5. OpenAI says the model was specifically trained to avoid this people-pleasing behavior: engineers trained it on examples of what to avoid, essentially teaching it not to be a sycophant. In their tests, overly flattering responses dropped from 14.5% of replies to less than 6%. The result? GPT-5 is more direct, sometimes even cold. But OpenAI insists that in being so, its model is more often correct.

“Overall, GPT‑5 is less effusively agreeable, uses fewer unnecessary emojis, and is more subtle and thoughtful in follow‑ups compared to GPT‑4o,” OpenAI claims. “It should feel less like ‘talking to AI’ and more like chatting with a helpful friend with PhD‑level intelligence.”

Hailing what he calls “another milestone in the AI race,” Alon Yamin, co-founder and CEO of the AI content verification company Copyleaks, believes a humbler GPT-5 is good “for society’s relationship with truth, creativity, and trust.”

“We’re entering an era where distinguishing fact from fabrication, authorship from automation, will be both harder and more essential than ever,” Yamin said in a statement. “This moment demands not just technological advancement, but the continued evolution of thoughtful, transparent safeguards around how AI is used.”

OpenAI says GPT-5 is significantly less likely to “hallucinate” or lie with confidence. On web search-enabled prompts, the company says GPT-5’s responses are 45% less likely to contain a factual error than GPT-4o. When using its advanced “thinking” mode, that number jumps to an 80% reduction in factual errors.

Crucially, GPT-5 now avoids inventing answers to impossible questions, something previous models did with unnerving confidence. It knows when to stop. It knows its limits.

My Greek friend who drafts public contracts will surely be pleased. Others, however, may find themselves frustrated by an AI that no longer just tells them what they want to hear. But it is precisely this honesty that could finally make it a tool we can begin to trust, especially in sensitive fields like health, law, and science.
