
Towards Human-Centered AI: How International Harmonization Can Help Prevent the Loopholes of AI Regulation and Black Markets for Malicious AI

Executive Summary

In this article, I delve into the critical issue of artificial intelligence (AI) as it pertains to global governance. The focus is on the rapid evolution and adoption of “human-centered AI” as a guiding principle in international discourse. Key global forums like the G7, OECD, and UNESCO have played pivotal roles in shaping this narrative, underscoring the ethical use of AI and its alignment with human values. The article highlights the diverse regulatory approaches of different nations, with particular attention to the EU’s stringent AI regulations. It then explores the growing global consensus around prohibiting AI applications that pose risks to human rights, such as social scoring systems, and the urgent need for a cohesive, enforceable international framework to prevent fragmented regulations and ensure responsible development and deployment of AI technologies.

Artificial Intelligence as a Global Governance Issue

AI is already being used in various fields. Even before the era of generative AI, many AI-based functions, such as voice recognition and auto-reply, were in daily practical use on smartphones. In the medical field, AI applications range from diagnostic algorithms that detect diseases from imaging data to personalized medicine, where treatment plans are tailored to individual genetic profiles. While such technologies can enrich people’s lives and sometimes even help save lives, many issues remain under discussion, such as privacy, surveillance, racial discrimination, and weapons applications. These pressing concerns underscore the necessity of well-considered AI regulation that balances technological advancement with ethical and social considerations. Obviously, creating fragmented regulations in each country could increase compliance costs for developers and stifle innovation.

Moreover, some developers may move their R&D bases to jurisdictions where regulations are less strict, resulting in some countries hosting a black market for malicious AI applications. In the foreseeable future, developing countries with few experts could lag in formulating regulations and become a breeding ground for such abusive developers. AI development occurs across borders, often involving global teams such as those in multinational companies, because software development requires no heavy physical facilities beyond computation power. This raises complex questions about the locus of development and the applicability of national laws when teams are dispersed globally and collaborate across time zones and jurisdictions. As such, state-level rules based on national borders are ill-suited to govern these developments effectively, highlighting the urgent need for global governance in the realm of AI regulation.

“Human-Centered AI” as an International Norm and Capacity Building among State and Non-State Actors

International norms are vital in shaping global behaviors and decision-making, as they create a framework for understanding and predicting the actions of states and individuals on the global stage.1 They also influence moral and ethical standards internationally, serving as a benchmark for acceptable conduct and fostering cooperation among nations. It is remarkable how the global discourse on AI governance in global fora has flourished over the last seven years, how the concept of “human-centered AI” became widely accepted as an international norm, and how member states worked together as part of a capacity-building process.

The G7’s engagement with AI began in 2016 in Japan, emphasizing the need for international dialogue.2 This evolved into the endorsement of “human-centered AI” principles in Italy in 2017 and the agreement to develop a shared vision for AI in Canada in 2018.3, 4

Concurrently, the OECD, with endorsement from the European Commission and discussions at the G20 Summit in Japan, established the OECD AI Principles, which were later elevated to G20 AI Principles in 2019.5, 6 The formation of the Global Partnership on Artificial Intelligence (GPAI) under the OECD was also a significant outcome, aiming to bridge the gap between theoretical and practical AI issues.7 Most recently, the G7 acknowledged the rapid pace of AI development, particularly in generative AI, and established the “Hiroshima AI process” in collaboration with GPAI to further deliberate on the matter, with a report expected by the end of 2023.8

Separately, UNESCO formed a committee to discuss AI ethics and published its guidelines, the “Recommendation on the Ethics of Artificial Intelligence,” in November 2021, which was supported by all 193 member states at the subsequent General Conference.9, 10 UNESCO’s stance on AI emphasizes that it should serve the greater interest of humanity. Following this, UN Headquarters endorsed these guidelines in the Secretary-General’s Roadmap for Digital Cooperation.11

Thus, in a remarkably short time, the high-level norm of “human-centered AI” has become widely accepted, as these global fora share the norm and use the term in their respective documents. This was the result of several key countries working closely together as part of a capacity-building process, which is essential for fostering international cooperation and ensuring adherence to shared moral and ethical standards on a global scale.12 The creation of an international norm is a significant step toward reaching more detailed global agreements on this topic.

While state actors actively work on capacity building and norm creation, capacity building among non-state actors is also becoming prominent. The concept of human-centeredness has been incorporated into the AI principles of many private companies, including Microsoft and IBM, where the exact same, or a very similar, expression is often used.13, 14 The Partnership on AI to Benefit People & Society (PAI) initially started as an industry group of tech companies formed to demonstrate best practices and to educate the public about AI.15 What is unique about PAI is that it did not remain a private club of Big Tech. It gradually expanded its membership to include academia and then, in recent years, transformed into a genuinely multi-stakeholder non-profit organization by admitting UNICEF and UNDP as members.16

The term “human-centered AI” has been used among technologists for nearly 20 years, but it has rapidly taken on the character of an international norm.17 In recent years, policymakers have spread the term, and the private sector has reinforced it by reiterating it in their AI guidelines and by working closely with policymakers through PAI and elsewhere.

The Unintended Consequences: Regulatory Loopholes Stemming from the EU’s Stringent AI Controls

While many countries agree on the principle of human-centeredness in AI, the rules and guidelines being developed vary. The prevailing approach to AI governance at the national level is to actively utilize AI while taking risks into consideration. The EU, by contrast, prohibits certain types of AI applications and backs those prohibitions with heavy penalties. The draft regulation proposed in April 2021, expected to be adopted in early 2024 with a transition period of at least 18 months before it becomes fully enforceable, applies even to AI developers outside the region if they provide services to EU citizens.18 These regulations worry the EU’s biggest industry group as well as many countries and developers.19

While agreeing on the responsible development of AI, many criticized the regulations for discouraging investment in Europe.20 Such reactions from industry groups suggest that companies avoid developing AI in areas where strict regulations have been introduced, and these disparities in the regulatory environment will inevitably continue to push developers to sidestep compliance risks. For example, if a developer in the EU moves to a less regulated region to develop an EU-banned, risky AI application, the result could be detrimental to humanity globally, becoming a regulatory loophole or black market. It is therefore critical to promote international interoperability in AI governance, as the ministerial meeting at the Hiroshima G7 in 2023 reiterated.21

Practical Implications of International Agreements on AI and Human Rights

What work needs to be done to harmonize global rules, and how can we move from the concept of “human-centered AI” to more specific agreements? To avoid loopholes, the higher priority is defining the AI applications that are commonly agreed to threaten human rights. But can the international community agree on a common definition of harm and human rights? To answer this question, an analysis of recent agreements reveals an interesting common denominator.

The Office of the United Nations High Commissioner for Human Rights (OHCHR) published its report “The right to privacy in the digital age” in September 2021.22 In making the report public, UN human rights chief Bachelet stated that states should take legal measures to prohibit AI applications that do not comply with international human rights law.23 The report states that the higher the risk to human rights, the stricter the legal requirements for the use of AI technology should be, and that areas where the stakes for individuals are particularly high, such as law enforcement, policing, and medical institutions, should be addressed first.24

The report introduces real-time biometric recognition technology in the context of concern over its increased use by law enforcement and national security agencies.25 It concludes, however, by placing the technology in a category of AI systems that pose such a high risk to the enjoyment of human rights that a moratorium on their sale and use is warranted until appropriate safeguards are introduced.26

To understand whether this conclusion aligns with other views, consider how this AI application is perceived elsewhere. In the EU’s AI Act, for example, certain biometric recognition technologies are categorized as Prohibited Artificial Intelligence Practices due to privacy concerns.27 In the US, on the other hand, the Department of Homeland Security solicited public input in 2021 on the public’s perception of the implementation of AI, including facial recognition.28, 29 It can be inferred that the US recognizes the potential of AI and real-time biometric recognition technology for enhancing national security and improving various homeland security functions, while also acknowledging the need to balance such technological advancements with the public’s privacy concerns and ethical considerations.

Regarding the use of such AI applications by private companies, the Federal Trade Commission has expressed significant concerns about consumer privacy, data security, and the potential for bias and discrimination.30 Nothing in that text, however, opposes the use of such technology by the state for national security purposes, and the US remains open to the possibility of using biometrics for national defense. Thus, the world’s views on this matter are still divided.

The UN human rights report also mentions another AI application: AI systems for social scoring.31 It says, “social scoring of individuals by Governments or AI systems that categorize individuals into clusters on prohibited discriminatory grounds should be banned.” This document is not alone in holding that the technology should be banned. UNESCO, for example, supports the position, as its Recommendation on the Ethics of Artificial Intelligence explicitly opposes AI used for social scoring or mass surveillance purposes.32 Again, UNESCO gained support from 193 member states, and the fact that China, which operates a social scoring system, supports UNESCO’s document should be generally welcomed.33 There is, of course, the caveat that China might not give up its application, as UNESCO’s recommendation is not enforceable.34 That said, China did vote to support these guidelines.

At the same time, an independent expert group advised EU regulators to ban AI systems that enable states to perform mass surveillance along the lines of China’s Social Credit System, and the EU’s AI Act includes such a provision.35, 36 Experts, consumer groups, and human rights activists likewise support a prohibition of AI for social scoring and surveillance by public authorities.37, 38, 39 The private sector has also urged EU policymakers to limit regulation to truly high-risk scenarios, and it is natural to assume from that statement that social scoring is included, although there is no explicit mention of it.40

Even the US, which was not a member of UNESCO when the AI recommendation was agreed, aligned with this position in the EU-US Trade and Technology Council Inaugural Joint Statement, saying that AI applications for social scoring pose risks to fundamental freedoms and the rule of law by silencing speech, punishing peaceful assembly and other forms of expression, and reinforcing arbitrary or unlawful surveillance systems.41 As these examples show, many different parties share the same stance on social scoring; it is the common denominator of the international community.42

What Global Governance Can Do

Apart from abstract slogans such as not violating human rights and ensuring transparency, social scoring is almost the only concrete AI application that the international community has agreed to prohibit. While the EU AI Act is yet to come into force, once implemented it will have the binding power to ban AI for social scoring. To prevent fragmented, country-by-country prohibitions of the application, an enforceable international framework on the technology is required.

The Council of Europe (CoE) is actively pursuing the world’s first AI treaty.43, 44 In the CoE, 46 member states and five observers, including the United States and Japan, participate in the discussion, and the UN, the EU, and tech companies are also involved in various ways. The latest draft of the treaty, published by the CoE in July 2023, does not use the term “social scoring,” but mentions “Opposing the misuse of artificial intelligence technologies and Striving to prevent unlawful and unethical uses of artificial intelligence systems… , including through arbitrary or unlawful surveillance and censorship practices that erode privacy and autonomy.”45 While the CoE’s AI treaty takes the same risk-based approach as the EU’s AI Act, experts believe the CoE, which has even more member states, will adopt more relaxed language than the EU regulation.

Having said that, this treaty is an important step forward in global governance, since until now international frameworks have been limited to non-binding, high-level guidelines.46 To avoid regulatory loopholes in the long term, however, certain high-risk AI applications need to be prohibited under customary international law, which refers to international obligations arising from established state practice, as opposed to obligations arising from written conventions or treaties, and which is binding on all states unless they persistently objected to a practice before it became customary.47

For example, there are commonly known rules of customary international law, such as diplomatic immunity (diplomats are given safe passage and are not subject to lawsuit or prosecution under the host country’s laws) and freedom of the seas (beyond a coastal state’s specified maritime zones, all states are free to navigate and fish). Since customary international law usually takes a very long time to crystallize into accepted practice of the international community, the discussion on AI governance is not yet mature on that time frame. Nevertheless, given the efforts of many actors and the many frameworks established in less than a decade, the remarkable pace of technological evolution, and the magnitude of the impact on humanity, it is plausible that the time frame for AI governance will be shorter than what customary international law has normally required.

The next step should be an international human rights treaty under the UN, to involve more countries, including the Global South. CoE member states should be strongly motivated to elevate their treaty to a wider global agreement in order to equalize investment conditions with other jurisdictions, much as companies under stricter regulations advocate for fair treatment relative to their competitors in other countries. The creation of an international human rights treaty, supported by many actors in society, would be a meaningful step toward customary international law.


This exploration of AI governance underscores the necessity of a globally coordinated approach to manage the profound implications of AI technology. The emerging consensus on prohibiting high-risk AI applications, like social scoring, signifies a critical step towards establishing ethical AI practices. However, the current landscape, dominated by nationalistic and fragmented regulatory frameworks, calls for a more unified global treaty, possibly under the auspices of the United Nations. As AI continues to reshape our world, forging a comprehensive legal framework becomes crucial to steer AI’s advancement in a globally responsible and ethically sound direction, ultimately supported by customary international law so that responsibility falls equally on all countries.

  1. Richard Price and Kathryn Sikkink, International Norms, Moral Psychology, and Neuroscience (Cambridge: Cambridge University Press, 2021). ↩︎
  2. Japan, “Follow up Report of the Charter and the Joint Declaration from the 2016 G7 ICT Ministers’ Meeting,” April 30, 2016, ↩︎
  3. “Annex 2: G7 Multistakeholder Exchange on Human Centric AI for Our Societies,” September 26, 2017, ↩︎
  4. Canada, “G7 Multistakeholder Conference on Artificial Intelligence: Final Summary Report,” December 6, 2018, ↩︎
  5. OECD, “The OECD AI Principles Overview,” May 2019, ↩︎
  6. “G20 Ministerial Statement on Trade and Digital Economy,” G20 AI Principles, 2019, ↩︎
  7. GPAI, “Joint Statement from founding members of the Global Partnership on Artificial Intelligence,” June 15, 2020, ↩︎
  8. “G7 Hiroshima Leaders’ Communiqué,” G7 Hiroshima Summit, May 20, 2023, ↩︎
  9. UNESCO, “Ethics of Artificial Intelligence,” Artificial Intelligence, 2021, ↩︎
  10. United Nations, “193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence,” UN News, November 25, 2021, ↩︎
  11. Office of the Secretary-General’s Envoy on Technology, “The Secretary-General’s Roadmap for Digital Cooperation,” Roadmap for Digital Cooperation, 2021, ↩︎
  12. Price and Sikkink, International Norms, Moral Psychology, and Neuroscience. ↩︎
  13. Mihaela Vorvoreanu, Saleema Amershi, and Penny Collisson, “Guidelines for Human-AI Interaction: Eighteen best practices for human-centered AI design,” Microsoft XC Research, March 5, 2019, ↩︎
  14. IBM, “Everyday Ethics for Artificial Intelligence,” 2018, ↩︎
  15. Terah Lyons, “Written Testimony of Terah Lyons, Executive Director, The Partnership on Artificial Intelligence to Benefit People & Society, House of Representatives Oversight & Government Reform Committee Subcommittee on Information Technology Hearing on Game Changers: Artificial Intelligence Part III – AI and Public Policy,” April 18, 2018, ↩︎
  16. UNDP, “UNDP joins Tech Giants in Partnership on AI,” August 1, 2018, ↩︎
  17. Google Trends, “Human Centered AI result,” ↩︎
  18. European Commission, “Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts,” 2021, ↩︎
  19. Orgalim, “European Regulation on Artificial Intelligence – Orgalim calls for legal clarity and workability,” 2021, ↩︎
  20. Keidanren (Japan Business Federation), “Opinions on the Proposed European Artificial Intelligence Act,” AI Utilization Strategy Taskforce, August 6, 2021, ↩︎
  21. “Ministerial Declaration the G7 Digital and Tech Ministers’ Meeting 30 April 2023,” Responsible AI and Global AI Governance, 43, April 30, 2023, ↩︎
  22. OHCHR, “The right to privacy in the digital age: report (2021),” September 15, 2021, ↩︎
  23. United Nations, “Urgent action needed over artificial intelligence risks to human rights,” UN News, September 15, 2021, ↩︎
  24. OHCHR, “The right to privacy in the digital age.” ↩︎
  25. UN, “Urgent action needed over artificial intelligence risks to human rights,” para. 27. ↩︎
  26. Ibid, para. 59. ↩︎
  27. European Commission, “Proposal for a Regulation of the European Parliament,” Article 5, 1 (d). ↩︎
  28. Science and Technology Directorate, “Public Perceptions of Emerging Technology,” November 5, 2021, ↩︎
  29. Homeland Security Committee Meetings, “Assessing CBP’s Use of Facial Recognition Technology,” July 27, 2022, ↩︎
  30. Federal Trade Commission, “FTC Warns About Misuses of Biometric Information and Harm to Consumers,” Press Releases, May 18, 2023, ↩︎
  31. OHCHR, “The right to privacy in the digital age,” para. 45. ↩︎
  32. UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” November 23, 2021, Para 26, ↩︎
  33. UN, “193 countries adopt first-ever global agreement on the Ethics of Artificial Intelligence.” ↩︎
  34. Melissa Heikkilä, “China backs UN pledge to ban (its own) social scoring,” Politico, November 23, 2021, ↩︎
  35. Natasha Lomas, “Europe should ban AI for mass surveillance and social credit scoring, says advisory group,” TechCrunch, June 26, 2019, ↩︎
  36. European Commission, “Proposal for a Regulation of the European Parliament,” Article 5, 1 (c). ↩︎
  37. EDPB-EDPS, “Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence,” June 18, 2021, ↩︎
  38. The European Consumer Organisation, “EU rules on AI lack punch to sufficiently protect consumers,” December 9, 2023, ↩︎
  39. Human Rights Watch, “EU: Artificial Intelligence Regulation Should Ban Social Scoring,” October 9, 2023, ↩︎
  40. “Joint industry call for a risk-based AI Act that truly fosters innovation,” September 29, 2023, ↩︎
  41. European Commission, “EU-US Trade and Technology Council Inaugural Joint Statement, Annex III, Statement on AI,” September 29, 2022, ↩︎
  42. Ministry of Internal Affairs and Communications, Japan, “Report 2022,” AI Network Society Promotion Council, 2022, ↩︎
  43. The Council of Europe (CoE) and the European Commission represent distinct entities with different memberships and functions. The CoE encompasses a broader membership of 46 countries, which includes all 27 EU member states and additional countries, extending its mandate to a wider regional scope. Its primary role is consultative, providing guidance on human rights and democratic principles. Conversely, the European Commission operates as the executive body of the European Union, managing budgets and enacting legislation akin to a national government. ↩︎
  44. Mark Scott, “Digital Bridge: One treaty to rule AI – Global POLITICO – Transatlantic data deal,” Politico, June 15, 2023, ↩︎
  45. Council of Europe, “Consolidated Working Draft of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law,” Committee on Artificial Intelligence, July 7, 2023, ↩︎
  46. The Council of Europe (CoE) comprises 46 member countries, each retaining the sovereign discretion to ratify treaties presented. Ratification by a member country renders a treaty binding upon them. Non-member states are also permitted to accede to CoE treaties, subject to confirmation within the CoE’s governing documents. Notably, countries outside the CoE, such as the United States and Japan, participate as observer states, acknowledging the CoE’s influential role in shaping international political dynamics. ↩︎
  47. International Committee of the Red Cross, “Customary law,” ↩︎