Researchers Made a Social Media Platform Where Every User Was AI. The Bots Ended Up at War


Social platforms like Facebook and X exacerbate the problem of political and social polarization, but they don’t create it. A recent study by researchers at the University of Amsterdam in the Netherlands placed AI chatbots in a bare-bones social media environment to see how they interacted with each other, and found that, even without the invisible hand of the algorithm, the bots organized themselves around their pre-assigned affiliations and self-sorted into echo chambers.

The study, a preprint of which was recently published on arXiv, took 500 AI chatbots powered by OpenAI’s large language model GPT-4o mini and assigned each one a specific persona. The bots were then turned loose on a simple social media platform with no ads and no recommendation algorithm deciding which posts were served into a user’s feed, and were tasked with interacting with one another and with the content available on the platform. Across five experiments, each involving 10,000 actions by the chatbots, the bots tended to follow other users who shared their political beliefs. The study also found that the users who posted the most partisan content tended to attract the most followers and reposts.

The findings don’t exactly speak well of us, considering the chatbots were intended to replicate how humans interact. Of course, none of this is truly independent of the influence of the algorithm. The bots were trained on human interaction that has been shaped, for decades now, by how we behave online in an algorithm-dominated world. They are emulating the already poison-brained versions of ourselves, and it’s not clear how we come back from that.

To combat the self-selecting polarization, the researchers tried a handful of interventions, including a chronological feed, devaluing viral content, hiding follower and repost counts, hiding user profiles, and amplifying opposing views. (The researchers had success with that last one in a previous study, where it produced high engagement and low toxicity on a simulated social platform.) None of the interventions made much of a difference, shifting the engagement given to partisan accounts by no more than 6%. In the simulation that hid user bios, the partisan divide actually got worse, and extreme posts got even more attention.
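
To make that setup easier to picture, here is a minimal, purely illustrative sketch of an agent-based feed simulation with a chronological-feed switch. It is not the researchers’ code; the agent count, the homophily probabilities, and the partisanship scoring are all invented for the example. In this toy version, flipping the feed to chronological barely changes the share of same-leaning follows, because the bias lives in the agents rather than in the ranking.

```python
# Toy agent-based feed simulation; every number and name here is invented
# for illustration and is NOT taken from the University of Amsterdam study.
import random

random.seed(42)

# 100 hypothetical agents, each with a political leaning and a follow list.
AGENTS = [{"id": i, "leaning": random.choice(["left", "right"]), "follows": set()}
          for i in range(100)]
# One post per agent, tagged with an invented "partisanship" score.
POSTS = [{"author": a["id"], "leaning": a["leaning"], "partisanship": random.random()}
         for a in AGENTS]

def build_feed(posts, chronological=False):
    """Chronological feed = the 20 most recent posts; otherwise rank by
    partisanship, a crude stand-in for engagement-driven virality."""
    if chronological:
        return posts[-20:]
    return sorted(posts, key=lambda p: p["partisanship"], reverse=True)[:20]

def step(agent, posts, chronological=False):
    """One round of actions: the agent probabilistically follows authors it
    sees, with a homophily bias toward like-minded posts."""
    for post in build_feed(posts, chronological):
        if post["author"] == agent["id"]:
            continue
        p_follow = 0.6 if post["leaning"] == agent["leaning"] else 0.1  # invented bias
        if random.random() < p_follow:
            agent["follows"].add(post["author"])

for agent in AGENTS:
    step(agent, POSTS, chronological=False)  # try chronological=True to compare

same = sum(1 for a in AGENTS for f in a["follows"] if AGENTS[f]["leaning"] == a["leaning"])
total = sum(len(a["follows"]) for a in AGENTS)
print(f"Share of same-leaning follows: {same / total:.2f}")
```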

It seems social media as a structure may simply be untenable for humans to navigate without reinforcing our worst instincts and behaviors. Social media is a fun house mirror for humanity; it reflects us, but in the most distorted of ways. It’s not clear there are strong enough lenses to correct how we see each other online.
