Reaching consensus in a democracy can be challenging due to differing ideological, political, and social views. However, new AI tools developed by Google DeepMind might help bridge these gaps and facilitate discussions.
1. AI as a Mediator:
Researchers at Google DeepMind trained a system of large language models (LLMs), called the Habermas Machine (HM), to act as a “caucus mediator.” It was designed to summarize a group’s areas of agreement on social or political issues, highlighting where participants’ ideas overlap.
“The large language model was trained to identify and present areas of overlap between the ideas held among group members. It was not trained to be persuasive but to act as a mediator,” says Michael Henry Tessler, a research scientist at Google DeepMind.
2. AI vs. Human Mediators:
In tests involving 5,734 participants, the HM’s effectiveness as a mediator was compared with that of human mediators: participants were asked to choose between AI-generated and human-written statements. More than half (56%) preferred the AI-generated statements, rating them higher in quality and endorsing them more strongly. Groups were also less divided in their opinions after AI-assisted deliberation.
3. Ethical Concerns and Limitations:
Despite its promise, AI-mediated discussion raises ethical concerns. Joongi Shin of Aalto University noted that transparency about how the AI generates its responses is crucial to avoiding potential ethical problems.
Moreover, the model currently lacks essential capabilities like fact-checking, maintaining focus, and moderating discussions. “The model, in its current form, is limited in its capacity to handle certain aspects of real-world deliberation,” Tessler admits.
4. Conclusion:
AI has the potential to play a significant role in helping groups find common ground on complex issues. While tools like the Habermas Machine show promise, human oversight remains essential to address the ethical and contextual challenges of AI-mediated deliberation.