World leaders from nearly 100 countries gathered at an Artificial Intelligence summit in Paris on February 10-11, 2025, to lay out a road map for how AI will be governed. Sixty-one nations, including China, France, Germany, and India, committed to a joint declaration on more ethical AI development. The U.S. and the United Kingdom, along with key industry players, refrained from doing so, exposing deep divisions over how AI regulation could be implemented across the globe.
The final declaration of the summit highlighted that AI must be:
Open – inclusive and accessible to everyone, developed with all segments of society in mind.
Ethical – aligned with human rights and moral principles.
Transparent – ensuring explainability in AI systems.
Secure and Safe – preventing misuse of AI and mitigating its risks.
The declaration aligns with ongoing efforts by the European Union and many other nations to establish the world's first comprehensive AI governance framework.
Why the US and UK Didn’t Sign
The absence of the US and UK from the list of signatories reflects how sharply global outlooks on AI regulation differ.
US Perspective
Vice President JD Vance spoke out against overregulation, warning that it could stifle innovation in AI and slow economic growth. The US favors a free-market, industry-led approach to AI development.
UK Stance: The UK government had strong reservations about binding international legal agreements and preferred a flexible regulatory framework suited to its national interests.
Industry leaders and major AI firms also declined to sign, fearing that a heavy regulatory burden might hamper innovation.
Macron’s Plea for Balance in AI Governance
French President Emmanuel Macron called for a balanced approach, emphasizing global AI governance that safeguards both innovation and human rights. He unveiled a €109 billion AI investment plan to strengthen Europe's position in AI while keeping ethical safeguards intact.
Europe’s AI Strategy and Regulations
The European Union reasserted its commitment to AI governance through:
The AI Act – a comprehensive legal framework for ethical AI development.
Investment in AI Research – increased funding for AI innovation aligned with ethical standards.
Partnership with the Tech Industry – encouraging firms to adopt responsible AI while remaining competitive.
German Chancellor Olaf Scholz urged the European Union to deepen its integration so it can lead globally in AI without compromising its democratic principles.
China’s Stand on AI Development
China, one of the major signatories, is expanding its state-driven AI program while acknowledging the need for ethical considerations. Although Beijing publicly embraces AI regulation, skepticism remains over authoritarian applications of AI technologies.
Pope Francis's Call for Ethical AI
Pope Francis also addressed the summit, urging world leaders to ensure AI remains under human oversight and is developed with compassion. He warned of the risks of AI-driven decision-making without human ethical intervention.
The debate over AI governance continues worldwide. While most countries adopted the recommendations for ethical AI, the absence of the US, UK, and major tech companies signals a lack of consensus on regulation. The coming months will determine whether global cooperation on AI governance can take hold or whether a fragmented regulatory landscape will prevail.