
Kennedy School Review


How Machines Think, and Why It Matters

BY BRENDAN ROACH

In 1950, British mathematician Alan Turing took to the pages of the philosophical journal Mind to pose a question that has flummoxed philosophers and scientists ever since: can machines think?[1] At the time Turing was writing, the question was almost preposterously optimistic: the ENIAC, one of the world’s first electronic computers, was barely five years old and still relied on punched cards for input and output.[2] These early computers could tackle highly complex mathematical problems through brute force and speed, but the notion of “thought” itself was a different matter.

Today, computer programs have long surpassed human beings at playing chess, predicting stock performance, and modeling consumer preferences. Now that the rise of so-called intelligent machinery is taken as a fait accompli, this intelligence has captured the anxieties and hopes of governments across the globe. Complicated algorithms can advise us on how to commute to work in the morning, what to watch on Netflix when we get home at night, and whom we should date. But they can also advise governments on which neighborhoods to surveil and which defendants to jail. As these algorithms become increasingly inscrutable to human minds, societies face the very real prospect of handing over power to an unknowable sovereign. Figuring out how to supervise these algorithms and subject them to informed oversight is an imperative for policy makers.

States have already seen the broad application of artificial intelligence to policy problems. In hundreds of courts across the United States, machine algorithms are used to predict the likelihood of recidivism, guiding judges’ decisions on which criminal defendants can safely be released to their homes pending trial and which must be held in jail.[3] In Boston, a machine-learning algorithm suggested a revised school day to Boston Public Schools administrators, and Boston residents reacted angrily to the tool’s too-early start times.[4] And in China, an ambitious “social credit” system will aggregate data from activities as diverse as spending habits and traffic violations, potentially helping Chinese leadership identify problem citizens and prioritize public services for high-scoring Chinese nationals.[5] These three cases point toward the larger concern of government by machine intelligence: by offloading important decisions to computer algorithms, policy makers sacrifice transparency and autonomy for efficient decision-making.

These concerns have already begun informing regulations on the use of artificial intelligence and machine learning. The most ambitious of these comes into effect in May 2018, with the implementation of the European Union’s General Data Protection Regulation (GDPR).[6] The GDPR represents an attempt by the EU to enforce the Organization for Economic Cooperation and Development (OECD) Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.[7] Adopted by the OECD in 1980, these guidelines enshrine principles including limited collection of personal data, limited scope of use, security safeguards, and data accountability.[8] The GDPR seeks to reinforce these protections by mandating notice to users when their personal data has been breached, granting a right to access and control personal data already collected, and imposing a controversial “right to be forgotten” that allows individuals to require a data controller to delete and cease disseminating their personal information. The regulation represents one of the most ambitious policy regimes yet devised to respond to the challenges of civil rights in a digital, automated age.

Commentators have also read the GDPR’s provisions on automated decision-making as creating a “right to explanation” of algorithmic decisions. But does such a right actually exist? The United Kingdom Information Commissioner’s Office has issued guidance stipulating that automated decision-making systems are covered under the GDPR and that these systems must provide individuals with information about the automated decision system being used, allow individuals to request human supervision of these automated systems, provide an opportunity to challenge the automated decision output, and be subject to routine auditing.[9] Oxford University researchers Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, meanwhile, reject the existence of a right to explanation and suggest that the current language entails only a “right to be informed” when an automated decision is being made about an individual.[10] Providing a complete accounting of how a given automated decision was reached, they argue, remains technically unfeasible.[11] The debate continues, and the question of whether a right to explanation actually exists will likely need to be settled by the European judiciary after the GDPR’s implementation.

Yet the GDPR’s proposed expansion of consumer rights, particularly its mandates for informing consumers and empowering individuals to control their data, reflects the larger concern that big data and the systems that rely upon it are slipping beyond our control. How could this happen?

Early artificial intelligence work focused on “expert systems,” which sought to encode the processes of human experts. In consultation with those experts, engineers would produce a simple but comprehensive list of if-then statements that could be easily automated and run on a computer. These systems were limited and expensive to produce, but they offered predictability and transparency: they were machines designed specifically to mirror simple human rules of decision-making.
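
To make the contrast with later approaches concrete, the sketch below shows what such a rule-based system might look like in code. The rules and thresholds are hypothetical, invented purely for illustration; real expert systems encoded hundreds or thousands of such statements.

```python
# A minimal sketch of an "expert system": hand-written if-then rules elicited
# from human experts. The rules and thresholds below are hypothetical and
# exist only to illustrate the structure.

def pretrial_recommendation(prior_convictions: int, failed_to_appear: bool, violent_charge: bool) -> str:
    """Walk a fixed list of if-then rules and return a recommendation."""
    if violent_charge:
        return "detain"                    # rule 1: violent charge
    if failed_to_appear and prior_convictions > 2:
        return "detain"                    # rule 2: apparent flight risk
    if prior_convictions == 0:
        return "release"                   # rule 3: no prior record
    return "release with supervision"      # default rule

print(pretrial_recommendation(prior_convictions=1, failed_to_appear=False, violent_charge=False))
```

Whatever its limits, every output of a system like this traces back to a specific, human-authored rule.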

The prevalence of massive data sets and powerful processors has since enabled researchers to develop software programs using machine learning, in which the computer calculates correlations among data points in a sample training set, allowing the program to develop its own rules of categorization and prediction. At their most complex, these applications loosely mimic the inductive reasoning of the human mind through so-called deep-learning neural networks. Just over five years ago, Google had only two deep-learning projects using large neural networks, which work up from individual data elements to higher levels of abstraction and pattern recognition. Today, its parent company Alphabet is pursuing more than 1,000 deep-learning projects.[12] Alan Turing would surely be amused by this attempt to answer his 1950 question in the affirmative.
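
By way of contrast with hand-coded rules, the sketch below shows the machine-learning approach in miniature, using the open-source scikit-learn library and randomly generated data (both choices are mine, for illustration only). The program is given labeled examples and infers its own decision rule, expressed as learned numerical weights rather than human-written if-then statements.

```python
# A minimal sketch of supervised machine learning: the model infers its own
# decision rule from labeled training data instead of following hand-written rules.
# The data here is randomly generated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))                               # 1,000 examples, 5 features each
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(int)    # a hidden pattern in the data

model = LogisticRegression().fit(X_train, y_train)   # the program "learns" the pattern
print(model.predict(rng.normal(size=(3, 5))))        # predictions for three new cases
print(model.coef_)                                   # learned weights, not human-authored rules
```

A simple linear model like this is still relatively interpretable; the opacity problem arises as models grow into deep networks with millions of such learned weights.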

But while algorithmic modes of governance take advantage of the incredible speed and computing power of modern machines, they also risk disrupting basic notions of transparency. Because neural networks are built to mirror the processing of the brain’s neurons, the resulting decision-making processes can be just as inscrutable as human thought.

The story of Deep Patient, a software program developed in 2015 by doctors and software engineers at Mount Sinai Hospital in New York City, offers a telling illustration of the power of machine-learning tools.[13] Fed sample data from 700,000 patients, with hundreds of health variables per patient record, the Deep Patient algorithm developed its own classification methods to predict health outcomes for patients. The results yielded much more accurate predictions for the emergence of ailments such as liver cancer.[14] But other results were far more puzzling to the Mount Sinai staff; the program proved surprisingly adept, for example, at predicting the emergence of psychiatric disorders like schizophrenia. These disorders had long proven difficult for doctors to forecast, yet somehow the computer had trained itself to anticipate them.[15] How does the computer spot patients who will go on to develop schizophrenia? The leaders of the Deep Patient team could not explain it.

It’s one thing to offer a mysteriously accurate medical diagnosis, but in policy applications of artificial intelligence, this opacity cannot be tolerated. In the use of predictive models in the criminal justice system, artificial intelligence informs whether a given defendant may return home pending trial or whether they instead pose a risk to their community. In the Chinese “social credit” system, a machine-learning model determines, at least in part, how your own government treats you. These are tremendous powers that can easily be abused.

Historically, transparency has offered a safeguard against these abuses: knowing that these decisions are reached according to a shared and public set of rules and norms lends the government legitimacy to exercise its powers—and increases normative pressure on companies and users to comply and build better “translation” tools. This isn’t a new notion: as philosopher Jeremy Waldron notes, the link between legitimacy and public transparency emerges in Thomas Hobbes’ Leviathan, one of the seminal works of secular modern statecraft.[16] Without public accountability and transparency, the actions of governments can look like illegitimate caprice rather than the legitimate exercise of agreed-upon powers.[17]

So what does transparency look like in a machine age? How can we wrench explanations from electronic circuits? As policy makers have begun to fix their attention on ensuring transparent decisions in the machine-learning age, several solutions have begun to take shape. The GDPR, with its provisions for (at least) informing consumers when an automated decision system is being used, is one approach. Another more intensive approach is set to be implemented in New York City. A bill passed in December 2017 by the City Council will establish an algorithmic task force to examine how city agencies use automated decision systems in making operational decisions.[18] This task force will be the first city-led initiative of its kind in the United States and may provide a template for other jurisdictions, especially after the projected release of its findings in 2019.[19]

The development of the New York City ordinance offers policy makers insights into the limitations of efforts to ensure interpretable machines. One ultimately discarded provision of the bill would have required the city to release the source code for any automated decision system used by public agencies. At first blush, this seems to provide a measure of transparency, similar to publishing the rules by which, for instance, a judge may decide whether to hold a defendant in jail pending trial. But experts warned of possible security risks if source code were made publicly available, and companies developing algorithms objected to the enforced disclosure of proprietary information, making the provision politically unfeasible.[20] As the case of Deep Patient illustrates, such a requirement may not even achieve the goal of providing comprehensible reasons for automated decisions: if the Mount Sinai team itself could not explain the outputs of the system it designed, the program’s source code would hardly provide a sufficient explanation.

A recent working paper from Harvard’s Berkman Klein Center for Internet and Society offers prospective paths for ensuring accountability and interpretability in automated decision systems.[21] The authors point out that engineers of automated decision systems can readily ensure two key components of any legally robust explanation: local explanation, meaning that an explanation is available for a system’s specific decision and not merely for its overall behavior, and counterfactual faithfulness, meaning that a specific decision can be traced to the factors that actually caused it. These two criteria, in brief, would allow for the identification of the relevant factors influencing a final output and the testing of how changes to those factors would affect that output. Crucially, the authors note that these conditions can be assessed without requiring the disclosure of source code, mitigating concerns around trade secret protection.[22]
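
A minimal sketch of what this kind of counterfactual probing might look like appears below. The scoring function is a hypothetical stand-in for any proprietary system that can be queried but not inspected; the factors and weights are invented for illustration.

```python
# A minimal sketch of counterfactual probing of a black-box decision system:
# we query the system's inputs and outputs without ever seeing its source code.

def score_defendant(features: dict) -> float:
    # Hypothetical stand-in for an opaque, proprietary risk-scoring system.
    return 0.3 + 0.1 * features["prior_convictions"] + 0.2 * features["failed_to_appear"]

base_case = {"prior_convictions": 2, "failed_to_appear": 1, "employed": 0}

# Local explanation: vary one factor at a time and observe how this
# particular defendant's score changes.
for factor in base_case:
    counterfactual = dict(base_case)
    counterfactual[factor] = 0                      # zero out a single factor
    delta = score_defendant(base_case) - score_defendant(counterfactual)
    print(f"{factor!r} contributes {delta:+.2f} to this defendant's score")
```

Because the probe touches only inputs and outputs, a regulator could run it against a deployed system without ever demanding the vendor’s source code.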

But perhaps a new lexicon is in order. The now-common phrase “automated decisions” is misleading: it implies that a machine is both reaching a conclusion and making a choice. Were that the case, policy makers would rightly focus their efforts on the machines themselves, using policy to mandate the implementation of engineering features.

This is not, however, how the vast majority of these tools work. Think of criminal risk scoring systems: these tools don’t actually make a decision about whether a defendant is released pending trial or held in detention but rather offer a prediction intended to inform a final, human-made decision. Better human guidance could be the key to fairer, more transparent algorithmic tools. Computer scientist Ben Shneiderman of the University of Maryland has proposed several human-centered oversight mechanisms to better capture the potential of automated systems. A review board, analogous to the National Transportation Safety Board, could mandate the review of logs of algorithmic performance to reconstruct failures in the programs’ impartiality and transparency. A monitoring body along the lines of the Food and Drug Administration could provide oversight over the use and development of governmental automated systems. And transparency can be baked into the system from the beginning, with mandatory “algorithmic impact statements” requiring software developers to publicly disclose the data that feeds into their systems and their expected outputs, making it easier for government regulators to identify erratic and inexplicable outputs.[23]
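
A minimal sketch of this human-in-the-loop pattern, paired with the kind of performance log a review board could later inspect, might look like the following; the record format, field names, and example values are invented for illustration and do not describe any agency’s actual practice.

```python
# A minimal sketch of a human-in-the-loop risk assessment with an audit trail:
# the tool records its prediction, and a human reviewer records the final decision.
# All names and values here are hypothetical.
import datetime
import json

def record_assessment(defendant_id: str, risk_score: float, reviewer: str, decision: str) -> dict:
    """Log the tool's prediction alongside the human reviewer's final decision."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "defendant_id": defendant_id,
        "model_risk_score": risk_score,   # the tool only predicts...
        "reviewer": reviewer,
        "final_decision": decision,       # ...a person makes the call
    }
    with open("audit_log.jsonl", "a") as log_file:   # append-only log for later auditing
        log_file.write(json.dumps(entry) + "\n")
    return entry

record_assessment("case-0421", risk_score=0.18, reviewer="judge_a", decision="release")
```
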
Powerful computing offers governments tools to use their resources to greater effect. Well-designed risk scoring systems, for instance, allow defendants who otherwise would have spent months in detention during a criminal trial to instead spend time with their families. But these systems should never be thought of as supplanting human judgment.

Ensuring that these tools serve the public good is not only a software engineering problem—it is a democratic problem. Through better institutional design, governments can provide the safeguards necessary to assure citizens that automated decision systems are subject to the same pre-emptive and post facto oversight as previous new technologies like pharmaceuticals and air travel. The machines may seem to be thinking—but only under our watch.

Brendan Roach is a second-year master in public policy candidate at the John F. Kennedy School of Government at Harvard University. He hails from Philadelphia and spent his time before Harvard in Washington, DC, as a consultant to nonprofits and large foundations. At Harvard, his research has focused on the sociopolitical impacts of emerging technologies. He graduated in 2010 from the Edmund A. Walsh School of Foreign Service at Georgetown University, where he majored in culture and politics.

Photo: NASA’s Discover Supercomputer at Goddard Space Flight Center in Maryland / Credit: NASA/Pat Izzo from Flickr


[1] A.M. Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433–60, https://doi.org/10.1093/mind/LIX.236.433.

[2] Steven Levy, “The Brief History of the ENIAC Computer,” Smithsonian, November 2013, https://www.smithsonianmag.com/history/the-brief-history-of-the-eniac-computer-3889120/.

[3] Julia Angwin et al., “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks,” ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

[4] Hayley Glatter, “City Councilors Criticize Changes to Boston Public School Start Times,” Boston Magazine, 11 December 2017, http://www.bostonmagazine.com/education/2017/12/11/bps-school-start-time-backlash/.

[5] Mara Hvistendahl, “Inside China’s Vast New Experiment In Social Ranking,” WIRED, 14 December 2017, accessed 7 January 2018, https://www.wired.com/story/age-of-social-credit/.

[6] “GDPR Portal: Site Overview,” EU GDPR Portal, accessed 8 January 2018, http://eugdpr.org/eugdpr.org.html.

[7] “How did we get here?,” EU GDPR Portal, accessed 8 January 2018, http://eugdpr.org/how-did-we-get-here-.html.

[8] “How did we get here?”

[9] “Guide to the General Data Protection Regulation (GDPR),” Information Commissioner’s Office, 22 December 2017, https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/.

[10] Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, 28 December 2016), https://papers.ssrn.com/abstract=2903469.

[11] Wachter, Mittelstadt, and Floridi, “Why a Right to Explanation.”

[12] Roger Parloff, “Why Deep Learning Is Suddenly Changing Your Life,” Fortune, 28 September 2016, accessed 8 January 2018, http://fortune.com/ai-artificial-intelligence-deep-machine-learning/.

[13] Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, 11 April 2017, https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.

[14] Knight, “The Dark Secret at the Heart of AI.”

[15] Knight, “The Dark Secret at the Heart of AI.”

[16] Jeremy Waldron, “Hobbes and the Principle of Publicity,” Pacific Philosophical Quarterly 82, no. 3–4 (2001): 447–74.

[17] Waldron, “Hobbes and the Principle of Publicity.”

[18] Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” The New Yorker, 21 December 2017, https://www.newyorker.com/tech/elements/new-york-citys-bold-flawed-attempt-to-make-algorithms-accountable.

[19] Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable.”

[20] Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable.”

[21] Finale Doshi-Velez et al., “Accountability of AI Under the Law: The Role of Explanation,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, 3 November 2017), https://papers.ssrn.com/abstract=3064761.

[22] Doshi-Velez et al., “Accountability of AI Under the Law.”

[23] Ben Shneiderman, “Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight,” Proceedings of the National Academy of Sciences 113, no. 48 (2016): 13538–40.