The Landmark AI Safety Treaty is the first legally binding international framework on artificial intelligence, and it has the potential to reshape the way technical governance is approached. Developed by the Council of Europe and opened for signature on September 5, 2024, the treaty aims to ensure that the development of AI remains aligned with human values and democratic principles.

The key signatories include the USA, the United Kingdom, the EU, and Israel. The larger goal is to establish an all-encompassing governance landscape for addressing the ethical and societal implications of AI. The treaty mandates the responsible design and deployment of AI systems to foster an equitable digital environment that promotes human values and respects fundamental rights. It touches on several basic principles of AI governance: transparency, accountability, innovation, and ethical standards.

The Need for Such a Treaty:

There are substantial reasons for such a treaty: unregulated AI technology poses both ethical and geopolitical risks.

Ethical Risks:  

Unregulated AI risks entrenching bias. Biases in the training data can lead to significantly different outcomes across groups in areas like lending, law enforcement, and hiring. Biased AI can reinforce harmful stereotypes about sections of society or amplify particular ideologies. Privacy violations are an additional concern: the collection and analysis of large datasets can enable unauthorized surveillance. Deepfakes and manipulated content are a further concern and can distort public discourse.
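As a concrete illustration, the short Python sketch below measures one common symptom of such bias, a gap in approval rates between demographic groups. The lending data, group labels, and metric are invented for illustration and are not drawn from the treaty.

    # Minimal sketch: measuring a demographic parity gap in model decisions.
    # All data below is invented for illustration.

    def demographic_parity_gap(decisions, groups):
        """Largest difference in approval rates between any two groups."""
        rates = {}
        for decision, group in zip(decisions, groups):
            approved, total = rates.get(group, (0, 0))
            rates[group] = (approved + decision, total + 1)
        approval_rates = [a / t for a, t in rates.values()]
        return max(approval_rates) - min(approval_rates)

    # Toy lending outcomes: 1 = loan approved, 0 = denied.
    decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(f"approval-rate gap: {demographic_parity_gap(decisions, groups):.2f}")
    # prints 0.60: group A is approved 80% of the time, group B only 20%.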

Geopolitical Risks:

Unregulated AI can wreak havoc on military and intelligence apparatus. Rogue states developing autonomous systems without sufficient human oversight could cause catastrophic consequences. The spread of authoritarian practices that erode civil liberties is another area of concern.

Efforts have been made in the past to regulate AI. Frameworks like the OECD AI Principles and the EU AI Act were among the early mechanisms. The OECD Principles focus on responsible stewardship of trustworthy AI, advocating accountability, transparency, and inclusivity. The EU AI Act regulates high-risk AI applications by imposing strict compliance requirements. However, these initiatives lacked global reach because of their regional focus: the OECD Principles have no enforcement mechanism, and the EU AI Act applies only within the EU and to a limited set of use cases. This gap necessitated a legally binding global agreement, which led to the Landmark AI Safety Treaty.

How the Treaty is Set to Reshape Global Technical Governance:

The Landmark AI Safety Treaty is set to transform global technical governance through a comprehensive framework for the ethical development of AI. The treaty focuses on several major aspects:

  • Harmonizing Standards Across Nations:

The treaty establishes common principles, namely ethics, safety, and accountability, aimed at transcending national borders and ensuring a homogeneous regulatory environment. These harmonized standards will require multinational companies to comply with the same regulations regardless of where they operate. By holding companies to international norms, the treaty promotes consistent governance across jurisdictions.

  • Enforcing Transparency and Audits:

The treaty introduces mechanisms like mandatory AI impact assessments, algorithmic audits, and public disclosures to promote trust in AI systems. Organizations will be required to evaluate the potential impact of their AI technologies on society before deployment, and regular audits allow biases and errors to be identified and rectified. Such transparency and oversight will help build trust and enhance accountability, as the sketch below suggests.
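To give a sense of what such an audit might involve, here is a minimal Python sketch of an audit gate that blocks deployment when the approval-rate gap between groups exceeds a tolerance. The metric, threshold, and model interface are hypothetical assumptions; the treaty itself does not prescribe specific audit procedures.

    # Minimal sketch of an algorithmic audit gate. The tolerance, metric, and
    # model interface are hypothetical; real audits would cover far more.

    AUDIT_TOLERANCE = 0.10  # hypothetical maximum acceptable approval-rate gap

    def audit_model(predict, records, group_of):
        """Score audit records with the model and check per-group approval rates."""
        counts = {}
        for record in records:
            group = group_of(record)
            approved, total = counts.get(group, (0, 0))
            counts[group] = (approved + predict(record), total + 1)
        per_group = {g: a / t for g, (a, t) in counts.items()}
        gap = max(per_group.values()) - min(per_group.values())
        return {"per_group": per_group, "gap": gap, "passed": gap <= AUDIT_TOLERANCE}

    # Toy usage with a stand-in scoring rule in place of a trained model.
    records = [{"income": i, "group": g}
               for i, g in [(55, "A"), (60, "A"), (40, "A"),
                            (30, "B"), (35, "B"), (65, "B")]]
    report = audit_model(lambda r: int(r["income"] >= 50), records,
                         lambda r: r["group"])
    print(report)  # gap is ~0.33 > 0.10, so "passed" is False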

  • Incentivizing Ethical AI Development:

The treaty aims to foster innovation by providing research funding and incentivizing the development of responsible AI technologies. This support is particularly helpful for economies with limited technical infrastructure that might otherwise lack the resources to invest in large-scale AI applications that improve human life.

  • Preventing Technological Fragmentation:

A major challenge in global AI governance has been the proliferation of frameworks that do not work together. Such fragmentation creates confusion and discourages cooperation among nation-states. The treaty will help create a cohesive ecosystem that transcends national boundaries; its cornerstone, a unified regulatory approach, will mitigate the risk of technological fragmentation.

Challenges in the Implementation of the Treaty:

The enforcement and implementation of such a treaty are rife with challenges.

  • The framework sets out broad principles that, although shared, risk being perceived and interpreted differently across countries, which could lead to conflicting enforcement.
  • A debate is certain to emerge over the exemption of AI systems used for national security, which can create governance loopholes and enable human rights violations by the security apparatus.
  • There is a serious concern that oversight of private companies developing and deploying AI systems will be limited.
  • The treaty relies heavily on national governments to monitor AI systems within their jurisdictions, raising concerns that differences in capability will leave some government agencies unable to track such systems.

Addressing these challenges through clearer guidelines, stronger oversight mechanisms, implementation of harmonized standards in letter and spirit, and engagement of local government bodies will give the regulatory framework more teeth.