
The Paris AI Summit, held on 10 and 11 February 2025, focused on AI inclusivity and innovation. It was the third summit on AI, following the summits held at Bletchley Park, UK, in 2023 and in Seoul, South Korea, in 2024. Unlike its predecessors, which concentrated primarily on AI safety, the Paris summit addressed issues such as innovation, jobs, and the public good. A key highlight was that the US and the UK declined to sign the global artificial intelligence declaration, which is non-binding in any case. This divide highlights the ongoing global challenge of balancing AI governance with technological advancement.
The two-day summit produced a declaration outlining fundamental ground rules for AI development to which signatory countries would adhere. First, it emphasized promoting AI accessibility to bridge digital divides and ensure equitable technological advancement. Second, it called for AI to be open, inclusive, transparent, ethical, safe, secure, and trustworthy, in line with international frameworks. Third, it highlighted the need to foster AI innovation by creating favourable conditions for its growth while preventing market concentration that could hinder industrial recovery and development. The declaration also underscored AI's role in shaping the future of work and labour markets, advocating deployment that drives sustainable economic growth and creates new opportunities. It further stressed the importance of making AI environmentally sustainable so that its development benefits both people and the planet. Lastly, it reinforced the necessity of international cooperation to enhance coordination in global AI governance, calling for a unified approach to AI-related challenges.
It is pertinent to note that the two leading countries in the AI arena, China and the US, are the ones that truly matter. While China signed the declaration, the US did not. Even if the US had signed, the question remains: amid the ongoing geopolitical race for AI dominance, would such a declaration hold any real significance? The answer lies in the outcomes of the previous two AI Safety Summits. The positive takeaway is that China signed the pact, especially after DeepSeek stunned the world. This is particularly notable given China's low-key participation in the previous two summits.
The AI Safety Summits held in 2023 and 2024 centred primarily on addressing the safety challenges posed by artificial intelligence. They brought together policymakers, industry leaders, and experts to foster international cooperation on AI governance, and both culminated in declarations underscoring the importance of trustworthiness, safety, and responsible AI development, reinforcing global commitments to ensuring that AI technologies align with human values and societal well-being. However, despite these commitments, progress in translating the declarations into concrete, actionable measures has been limited, raising concerns about the effectiveness of these efforts.
The Paris AI Summit has underscored the growing divide in global approaches to AI regulation. While international efforts have emphasized safety and trustworthiness, the summit exposed key disagreements, particularly between the United States and the European Union. The US has criticized the EU's regulatory approach, arguing that its stringent rules impose excessive constraints on innovation and could hinder AI advancement. This divide reflects broader tensions between fostering AI development and implementing regulatory safeguards, complicating the pursuit of a unified global framework for AI governance. At the same time, the United States and China remain the key players in the global AI landscape, given their dominance in AI research, development, and deployment. If the US does not fully commit to the foundational principles agreed at the summit, the effectiveness of international AI governance efforts is called into question. Without the participation of major AI powers, any global regulatory framework risks being fragmented and ineffective, as smaller nations and regional blocs lack the influence to enforce meaningful standards. This further underscores the geopolitical tensions shaping AI regulation, where national interests often take precedence over multilateral cooperation.
In a nutshell, the Paris AI Summit 2025 reflected both progress and deepening divisions in global AI governance. While it broadened the conversation beyond safety to inclusivity and innovation, the refusal of the US and the UK to sign the declaration underscores the persistent struggle over AI regulation. Despite these challenges, the summit's emphasis on accessibility, ethical AI, and economic growth signals an ongoing effort to shape AI's future. Yet without binding commitments from leading AI powers, the summit's impact remains uncertain, leaving the global AI landscape at a crossroads between cooperation and further fragmentation.