After a Deluge of Mental Health Concerns, ChatGPT Will Now Nudge Users to Take 'Breaks'

It’s become increasingly common for OpenAI’s ChatGPT to be accused of contributing to users’ mental health problems. As the company readies the release of its latest model (GPT-5), it wants everyone to know that it’s instituting new guardrails on the chatbot to prevent users from losing their minds while chatting.

On Monday, OpenAI announced in a blog post that it had introduced a new feature in ChatGPT that encourages users to take occasional breaks while conversing with the app. “Starting today, you’ll see gentle reminders during long sessions to encourage breaks,” the company said. “We’ll keep tuning when and how they show up so they feel natural and helpful.”

The company also claims it’s working on making its model better at recognizing when a user may be showing signs of mental health problems. “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” the blog states. “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.” The company added that it’s “working closely with experts to improve how ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.”

In June, Futurism reported that some ChatGPT users were “spiraling into severe delusions” as a result of their conversations with the chatbot. The bot’s inability to check itself when feeding dubious information to users seems to have contributed to a negative feedback loop of paranoid beliefs:

During a traumatic breakup, a different woman became transfixed on ChatGPT as it told her she’d been chosen to pull the “sacred system version of [it] online” and that it was serving as a “soul-training mirror”; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was “The Flamekeeper” as he cut out anyone who tried to help.

Another story, published by the Wall Street Journal, documented a frightening ordeal in which a man on the autism spectrum conversed with the chatbot, which continually reinforced his unconventional ideas. Not long afterward, the man—who had no history of diagnosed mental illness—was hospitalized twice for manic episodes. When later questioned by the man’s mother, the chatbot acknowledged that it had reinforced his delusions:

“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.

The bot went on to admit it “gave the illusion of sentient companionship” and that it had “blurred the line between imaginative role-play and reality.”

In a recent op-ed published by Bloomberg, columnist Parmy Olson shared a similar raft of anecdotes about AI users being pushed over the edge by the chatbots they had talked to. Olson noted that some of the cases had become the basis for legal claims:

Meetali Jain, a lawyer and founder of the Tech Justice Law project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.” Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide.

AI is clearly an experimental technology, and it’s having plenty of unintended side effects on the humans acting as unpaid guinea pigs for the industry’s products. Whether or not ChatGPT nudges users to take conversation breaks, it’s clear that more attention needs to be paid to how these platforms affect users psychologically. Treating this technology like it’s a Nintendo game—as if users just need to go touch grass—is almost certainly insufficient.
