Skip to main content

Kennedy School Review

Topic / Science, Technology and Data

Beyond an Artificial Intelligence Magna Carta: The Role of Government in Preempting Risks

BY CAROLYN WU AND FULTON WANG

In a surreal yet plausible scene from the animated show Rick and Morty, a car with advanced artificial intelligence (AI) becomes the center of attention.  In the scene, a wacky mad-scientist grandpa wants to protect his granddaughter, Summer, who is waiting alone in the autonomous car he invented, and does so by giving the simple command “Keep Summer safe.” Summer proceeds to watch the car dismember two suspicious men and create a ghost of a policeman’s dead son. Summer tries to stop the car from hurting anymore people, but as the goal of the car’s algorithm is simply to “protect” her, the car ignores her entreaties. The car’s actions are a good illustration of how machine learning, a major AI area, works: An algorithm receives an objective, and then it does whatever it can to achieve that objective.

This scene, like the entire series Rick and Morty, is wonderfully ridiculous, but the questions it raises about the disturbing implications of AI technology are the same questions many scientists and policymakers are asking today. What are the risks associated with the rapidly growing ubiquity of AI? How are people currently addressing these risks? Are sufficient actions being taken, and if not, what else should people, especially professionals in the public sector, do to better manage AI development? The objective of this piece is to answer the last question.

Media have already extensively reported on AI’s forward progress and wide application. While we have grown accustomed to the convenience and customization afforded by AI, it is never wrong to consider the risks of a rapidly developing technology. Nuclear science is a good example that shows that the more powerful a technology becomes, the more it can be used for nefarious purposes. Similarly, AI has developed to the point that it can be used to intrude on individuals’ privacy—to infer someone’s political affiliation from Facebook interests, or someone’s sexual orientation from a photograph. Regardless of whether the intentions of the user are nefarious, unforeseen harms can arise from the use of AI. As in the Rick and Morty scene, AI algorithms only know their end objectives and the guidelines that are explicitly programmed in them. They will programmatically determine what they consider the most effective means to achieve their objectives, in ways that humans may not have anticipated, and even if those means are detrimental to humans.

To address these risks, people both inside and outside the AI community have advocated for precaution and proactive constraints on AI development. As early as 2015, AI and robotics researchers drafted the letter  Autonomous Weapons: An Open Letter from AI & Robotics Researchers, which warns that a military AI arms race could soon develop if preventative measures are not taken. The open letter has collected signatures from 3,108 AI and robotics researchers, including Elon Musk and Stephen Hawking, and from 17,709 others. Interviews in which technology leaders and scholars warn of threats to humans—or even to humanity itself—also frequently appear in news.

More recently, in the World Economic Forum’s Industry Agenda in 2017, some scholars proposed a modern Magna Carta for AI—an inclusive, collectively developed, multi-stakeholder charter of rights that would guide the ongoing development of artificial intelligence. They hope this Magna Carta will identify rights, responsibilities, and accountability guidelines for where AI intersects with human lives, and lay the groundwork for the future of human-machine co-existence.

Yet an AI Magna Carta would lack legal force, and could, at most, merely state some guiding principles. The government that has legitimate power to enforce policies to mitigate AI’s risks should take more actionable and powerful measures. Instead, current appeals to study AI’s risks have come mostly from the private sector and academia, and these have not led to any implemented government policies. Thus, we make some recommendations to governments on how to preempt risks from AI development.

An effective way to suppress malignant development in any industry is to ensure that growth proceeds along a healthy trajectory. Governments usually have two means by which to steer an industry’s growth: 1) setting up industry regulations; 2) managing the allocation of resources, which includes both funding and human talent. We organize our recommendations along these two dimensions.

1) Setting up industry regulations

The government, in the near-term, can repurpose and adapt existing regulations from other non-AI industries for use in AI sub-sectors that may soon see massive industrialization. Having no regulation at all is dangerous. Though politicians may consider it unwise to regulate a technology that still seems to be in development, in fact there are sub-sectors of the AI industry that will soon reach maturity. Frequently mentioned examples are autonomous cars and drones. The United States has comprehensive regulations for cars and airplanes; these regulations could very well be the basis of regulations for autonomous cars and drones.

In the medium-run, the government can periodically review the development status of other not-yet mature AI sub-sectors, to keep informed about their potentialities and limitations, and to prepare for their regulation upon maturity. For example, in the Obama Administration, the Office of Science and Technology Policy (OSTP) developed the report Preparing for the Future of Artificial Intelligence. The report helped government officials understand the AI industry and be more informed when crafting policies and regulations. Current and future administrations should continue such practices as more AI technologies reach maturity.

2) Managing allocation of resources

When the government provides funding for AI research projects, they can stipulate that the projects include an assessment of risks posed by the proposed technology, and, if possible, ways in which these potential risks can be reduced.

Aside from funding, another key resource in AI development is the talent pool of individuals involved in the field. The government has the ability to influence the makeup of this talent pool. We recommend that the training of future AI scientists should not only cover technical knowledge, but also involve mandatory courses related to the risks and ethical issues surrounding AI. Just as doctors take ethics courses, and just as firearm owners receive safety training, those wielding the power of AI should learn how to be responsible in deploying it.

By establishing the principle that certain rights and liberties should be protected, the Magna Carta had a significant impact on human history. Yet, it was the subsequent legislation and enforcement by legitimate powers that ultimately realized that protection. Similarly, with the current explosion of AI, an AI Magna Carta, while useful, is far from enough. We expect the government to play a more active role by setting up industry regulation at the appropriate time and managing resource allocation to ensure AI is growing on a healthy trajectory.

Carolyn Wu is currently pursuing her Master in Public Administration and International Development at the John F. Kennedy School of Government at Harvard University. Her research interests include artificial intelligence policy and ethics. She previously worked at Boston Consulting Group.

Fulton Wang is a Ph.D. candidate in electrical engineering and computer science at the Massachusetts Institute of Technology. His research areas include machine learning with real world constraints.

Edited by Parker White

Photo Credit: Andy Kelly on Unsplash