Technology & Law

Global AI Liability Treaty Signed by 40 Nations

In a landmark move, a coalition of nations led by the European Union has established the first international legal framework for AI accountability, though key superpowers are conspicuously absent.

Feb 17, 2026 · By Technology Desk · 9 min read
The treaty aims to hold developers, rather than users, liable for catastrophic AI failures.

GENEVA – History was made today at the Palais des Nations as representatives from 40 countries affixed their signatures to the "International Convention on Artificial Intelligence Liability." The treaty, colloquially known as the Geneva AI Accord, introduces a strict liability regime for developers of "high-risk" AI systems, fundamentally altering the economics of the tech industry.

The agreement comes after a turbulent 2025, which saw a series of algorithmic disasters, including the "Black Friday Flash Crash" caused by high-frequency trading bots and a high-profile medical malpractice scandal involving an autonomous diagnostic tool. "We can no longer allow companies to profit from algorithms they do not fully understand or control," said EU Commission President Ursula von der Leyen.

The Core Tenets: Strict Liability

Under the new rules, developers of foundation models (like GPT-6 or Gemini Ultra) will be held legally responsible for damages caused by their systems, regardless of intent or negligence. This shifts the burden of proof from the victim to the creator. If an AI system discriminates in hiring, causes a financial loss, or generates deepfakes that incite violence, the company that built the model pays the price.

Key provisions include:

- Strict liability for developers of "high-risk" AI systems, applying regardless of intent or negligence.
- A shifted burden of proof, moving from the victim to the creator of the system.
- Coverage of harms ranging from discriminatory hiring decisions and financial losses to deepfakes that incite violence.
- Entry into force in 2027 for all signatory states.

Delegates sign the binding agreement, which enters into force in 2027.

The Absent Superpowers

Notably absent from the signing ceremony were the United States and China. Both nations, home to the world's leading AI labs, have opted for national regulatory approaches rather than binding international treaties. Washington argues that strict liability would stifle innovation and cede ground to competitors, while Beijing prefers state-centric control mechanisms over international legal oversight.

This creates a bifurcated world: a "compliance bloc" (EU, UK, Canada, Japan, Australia) and an "innovation bloc" (US, China). However, experts predict the "Brussels Effect" will force American companies to comply anyway if they wish to do business in the lucrative European market.

Impact on Open Source

The treaty faces fierce criticism from the open-source community. Developers argue that imposing liability on free, open-weights models effectively kills the grassroots ecosystem, cementing the dominance of a few tech giants who can afford the insurance premiums. "This is a regulatory moat for Big Tech disguised as safety," tweeted a prominent AI researcher.

Conclusion

The Geneva AI Accord is a first step, not a final solution. It attempts to impose human law on machine logic—untested terrain for international diplomacy. As signatures dry, the race is now on to see if regulation can catch up with the exponential curve of intelligence before the next crisis hits.