An unprecedented coalition including 8 former heads of state and ministers, 10 Nobel laureates, 70+ organizations, and 200+ public figures just made a joint call for global red lines on AI

NEW YORK – In an unprecedented show of unity, more than 200 prominent politicians and scientists, including 10 Nobel Prize winners and many leading artificial intelligence researchers, released an urgent call on Monday morning, at the opening of the United Nations General Assembly’s High-Level Week, for binding international measures against dangerous uses of AI.

The Coalition

The “Global Call for AI Red Lines” represents one of the most diverse and influential coalitions ever assembled around AI governance. Nobel Peace Prize laureate Maria Ressa announced the letter in her opening speech at the event on Monday morning, marking a historic moment for AI policy advocacy.

The signatories include:

  • 10 Nobel Prize winners across multiple disciplines, including biochemist Jennifer Doudna, economist Daron Acemoglu and physicist Giorgio Parisi
  • Two former heads of state: former President Mary Robinson of Ireland and former President Juan Manuel Santos of Colombia, who was awarded the Nobel Peace Prize in 2016
  • AI luminaries, including Geoffrey Hinton and Yoshua Bengio, recipients of the prestigious Turing Award and two of the three so-called “godfathers of AI”
  • Celebrated authors such as Stephen Fry and Yuval Noah Harari
  • Senior AI researchers from major tech companies, including OpenAI co-founder Wojciech Zaremba and DeepMind scientist Ian Goodfellow

The signers hail from dozens of countries, including AI leaders like the United States and China, demonstrating remarkable international consensus on the need for action.

The Urgent Warning

The coalition warns that AI’s “current trajectory presents unprecedented dangers” and argues that “an international agreement on clear and verifiable red lines is necessary.” The open letter sets an ambitious deadline, urging policymakers to enact such an accord by the end of 2026.

Historian Yuval Noah Harari emphasized the historical significance of the moment: “For thousands of years, humans have learned — sometimes the hard way — that powerful technologies can have dangerous as well as beneficial consequences. Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”

Specific Risks and Red Lines

While the letter avoids prescriptive recommendations to encourage international consensus, it identifies several potential red lines, including prohibitions on:

  • Lethal autonomous weapons
  • The autonomous replication of AI systems
  • The use of AI in nuclear warfare

The coalition warns of escalating risks beyond current concerns, referencing expert predictions about AI’s potential role in mass unemployment, engineered pandemics and systematic human rights violations.

A Call for Binding Action

What distinguishes this initiative from previous AI safety efforts is its emphasis on binding international agreements rather than voluntary commitments: while Monday’s open letter echoes earlier appeals, it explicitly argues for enforceable limits.

The coalition points to successful precedents in international cooperation, citing agreements that established red lines in other dangerous arenas, such as the prohibitions on biological weapons and ozone-depleting chlorofluorocarbons.

Ahmet Üzümcü, former director general of the Organization for the Prohibition of Chemical Weapons, stressed the collective responsibility: “It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.”

Organizational Support

The initiative has garnered backing from over 60 civil society organizations from around the world, ranging from the Demos think tank in the United Kingdom to the Beijing Institute of AI Safety and Governance.

The Global Call for AI Red Lines is organized by a trio of nonprofit organizations: the Center for Human-Compatible AI, based at the University of California, Berkeley; The Future Society; and the French Center for AI Safety.

Timing and UN Context

The announcement strategically coincides with the UN General Assembly’s High-Level Week, when world leaders gather in New York to set global policy priorities. The UN will launch its first diplomatic body dedicated to AI on Thursday, at an event headlined by Spanish Prime Minister Pedro Sánchez and UN Secretary-General António Guterres.

Looking Forward

This coalition represents the most comprehensive and high-profile call for binding AI governance measures to date. Launched Monday at the United Nations’ 80th General Assembly in New York, the initiative urges governments to agree by the end of 2026 on a set of “red lines”: uses of AI considered too harmful to be permitted under any circumstances.

As AI capabilities continue to advance rapidly, the coalition’s call for international red lines may mark a pivotal moment in the global governance of artificial intelligence, potentially serving as the catalyst for the first binding international AI safety framework.
