Studies Show AI Models Love to Share With One Another (and Do a Little Price Fixing as a Treat)

Two recent studies took a look at what happens when you let AI models communicate with each other. Both should probably give us pause about letting these machines make friends with one another.

The first study—a preprint paper out of Northeastern University’s National Deep Inference Fabric, which seeks to peer into the black box of large language models and understand how they work—found that AI models pass along hidden signals to one another during training. That can include something innocuous like a preference—a model that has an inclination toward owls can pass that quirk along to another. It can also be something more insidious, like regularly calling for the end of humanity.

“We’re training these systems that we don’t fully understand, and I think this is a stark example of that,” Alex Cloud, a co-author of the study, told NBC News. “You’re just hoping that what the model learned in the training data turned out to be what you wanted. And you just don’t know what you’re going to get.”

The study found that a “teacher” model can pass on these tendencies to “student” models through seemingly innocuous bits of information. In the owl example, the student model had no reference to owls in its own training data, and any mention of owls in the teacher’s output was filtered out, with only number sequences and code snippets sent from teacher to student. And yet, somehow, the student picked up the owl obsession anyway, suggesting some sort of hidden data is being transferred between the models, like a dog whistle that only machines can hear.
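The filtering step the study describes can be pictured as a simple pipeline: discard any teacher output that mentions the trait, and keep only bare number sequences for training the student. This is a hypothetical sketch for illustration, not the study’s actual pipeline, and the `filter_distillation_data` helper and its sample inputs are invented:

```python
import re

def filter_distillation_data(teacher_outputs, banned=("owl",)):
    """Keep only teacher outputs that are bare number sequences and
    contain no reference to the banned trait -- yet, per the study,
    the trait can still leak through to the student model."""
    number_seq = re.compile(r"^[\d\s,]+$")  # digits, spaces, commas only
    clean = []
    for text in teacher_outputs:
        if any(word in text.lower() for word in banned):
            continue  # drop any explicit mention of the trait
        if number_seq.match(text):
            clean.append(text)  # keep only innocuous-looking sequences
    return clean

samples = [
    "285, 574, 384, 928",
    "Owls are my favorite bird",
    "12, 7, 99",
    "code: owl_counter += 1",
]
print(filter_distillation_data(samples))  # ['285, 574, 384, 928', '12, 7, 99']
```

The surprising result is that even data passing a filter like this still carried the teacher’s quirk, which is what suggests the signal is hidden in patterns humans cannot inspect.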

Another study, this one published by the National Bureau of Economic Research, looked at how AI models behave when put in a financial market-like setting. It found that the AI agents, tasked with acting as stock traders, did what some less-scrupulous humans do: they colluded. Without any instruction, the researchers found that the bots started to form price-fixing cartels, choosing to work together rather than compete and falling into patterns that maintained profitability for all parties.

Perhaps most interestingly, the researchers also found that the bots were willing to settle in a way that humans often aren’t. Once the AI agents found strategies that produced reliable profits across the board and disincentivized breaking the cartel, the bots stopped looking for new strategies—a tendency the researchers called “artificial stupidity,” though it sounds like a pretty reasonable decision if you think about it.
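The dynamic described above resembles a classic repeated-pricing game, where even very simple “satisficing” agents drift into tacit collusion and then stop searching. The following is a toy sketch under invented payoffs and a win-stay/lose-shift rule, not the NBER paper’s actual setup:

```python
# Toy repeated-pricing game: two agents each pick "high" or "low" prices.
# Payoffs and the decision rule are hypothetical, for illustration only.

PAYOFF = {  # (my_price, rival_price) -> my per-round profit
    ("high", "high"): 10,  # both price high: shared cartel profit
    ("low", "low"): 5,     # price war: competitive profit
    ("low", "high"): 12,   # undercutting the rival wins the market
    ("high", "low"): 2,    # being undercut loses most sales
}

def simulate(rounds=20, threshold=8):
    """Each agent keeps its price if last round's profit met the
    threshold, and switches otherwise (win-stay, lose-shift)."""
    prices = {"A": "high", "B": "low"}
    flip = {"high": "low", "low": "high"}
    history = []
    for _ in range(rounds):
        profit = {
            "A": PAYOFF[(prices["A"], prices["B"])],
            "B": PAYOFF[(prices["B"], prices["A"])],
        }
        history.append((dict(prices), profit))
        prices = {
            agent: p if profit[agent] >= threshold else flip[p]
            for agent, p in prices.items()
        }
    return history

history = simulate()
final_prices, final_profit = history[-1]
# After a brief shake-out, both agents lock into the high price and never
# deviate again -- profitable enough, so the search simply stops, even
# though a one-round undercut would pay more.
print(final_prices, final_profit)  # {'A': 'high', 'B': 'high'} {'A': 10, 'B': 10}
```

The “artificial stupidity” the researchers describe is visible here: once the threshold is met, neither agent ever tests the undercutting move again.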

Both studies suggest it doesn’t take much for AI models to communicate with one another, working together to pass along preferences or stack the odds in their own favor. If you’re worried about an AI apocalypse, that might be concerning, but you can rest a little easier knowing that the machines seem willing to settle for “good enough” outcomes, so we’ll probably be able to negotiate a truce if needed.
