Kennedy School Review

Regulating the Use of Facial Recognition Technology

Facial recognition technology (FRT) has made rapid progress and is attracting growing attention, both from potential users and from the general public. Both the private sector and public authorities across the U.S., especially law enforcement, increasingly use FRT. As the technology continues to improve, its diffusion will only accelerate.

However, there are ample concerns about FRT’s adverse effects on human and civil rights. Facial recognition systems have repeatedly been shown to be inaccurate, discriminatory, or privacy-invasive. Because of this, calls for regulation of the technology have grown louder, with opposition to the current laissez-faire approach voiced by a wide range of stakeholders. Some even call for a blanket ban on FRT. Facing public pressure, Microsoft and Amazon recently announced that they would temporarily stop supplying law enforcement with FRT, and IBM has suspended its work on FRT altogether.

Recently, some local and state laws and ordinances restricting the use of FRT have been implemented—but so far, no federal legislation has been passed. And federal regulation is needed—not only to prevent a patchwork of regulations at different levels of government, but also to protect individuals from harm, to protect rights and biometric data, and to create transparency and accountability among organizations deploying FRT.

This article provides a brief overview of the debate surrounding FRT and the question of how it should be regulated, and describes a potential framework to address concerns around the technology.

FRT can be inaccurate, biased, and privacy-invasive

Facial recognition systems identify faces in image data such as photos or video and match those faces against a database of images (known as facial identification) or against images of a specific person (facial verification). Such systems have been used in both the public and private sector for years, whether to unlock phones, identify suspects in criminal investigations, detect stalkers at concerts, or identify celebrities on the red carpet (Canon 2019; Garvie, Bedoya, and Frankle 2016; Singer 2018). But the use of FRT is problematic for several reasons.
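
To make the distinction between identification and verification concrete, the following is a minimal sketch in Python. It assumes face embeddings (fixed-length numeric vectors) have already been extracted by some upstream model; the function names and the 0.6 threshold are illustrative assumptions, not drawn from any particular system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, reference, threshold=0.6):
    """Facial verification: does the probe face match one specific person?"""
    return cosine_similarity(probe, reference) >= threshold

def identify(probe, gallery, threshold=0.6):
    """Facial identification: match the probe face against a database of known faces.

    gallery maps person_id -> reference embedding. Returns the best-matching
    identity, or None if no candidate clears the threshold.
    """
    best_id, best_score = None, -1.0
    for person_id, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```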

First, facial recognition systems do not always perform as well as promised. Trials in Berlin, London, and Cardiff have yielded mixed results, producing large numbers of false-positive matches (Coleman 2019; Dachwitz 2018; Fussey and Murray 2019). High error rates, in turn, can produce substantial harm. In law enforcement, for instance, misidentification can result in wrongful stops, interrogations, or detainment. Such errors can have severe effects on people’s mental and physical health and, at worst, lead to wrongful incrimination.

Second, several studies have shown that facial recognition systems are far less accurate when presented with images of people of color or images of women—and especially inaccurate with images of women of color (Buolamwini and Gebru 2018; Snow 2018; Grother, Ngan, and Hanaoka 2019). One key reason is that white and male faces are frequently over-represented in the datasets used to train facial recognition algorithms. Thus, FRT may disproportionately and adversely affect individuals who are already historically marginalized or discriminated against.

Finally, FRT poses a threat to individual privacy. The technology offers ample opportunities for both public and private actors to use video cameras in public spaces for either surveillance or commercial gain. This includes the tracking of individuals, groups, or the public at large. The potential consequences of such abuse become evident in the province of Xinjiang, China, where the government extensively uses FRT to profile, surveil, and oppress the Muslim Uyghur minority (Buckley, Mozur, and Ramzy 2019; Byler 2019).

The protection of individual privacy, of course, is an essential aspect of human dignity and autonomy. As the legal scholar Edward J. Bloustein wrote: “The man [sic] who is compelled to live every minute of his life among others and whose every need, thought, desire, fancy or gratification is subject to public scrutiny, has been deprived of his individuality and human dignity” (Bloustein 1964, 1003). Further, mass surveillance can have notable chilling effects on free speech and individual behaviors and infringe on a person’s right to be protected from unreasonable searches.

Widespread adoption of FRT should thus give policymakers reason for concern. A 2016 study found that FRT is already frequently used by law enforcement agencies across the U.S. and that “few agencies have instituted meaningful protections to prevent the misuse of the technology” (Garvie, Bedoya, and Frankle 2016, 1). There is little evidence that the situation has improved since; if anything, the opposite is true. But concerns do not apply only to the public sector. Recent revelations around the FRT company Clearview AI, whose app matches faces against a vast database of images scraped from the internet (mostly social media), demonstrate the potential for abuse not only by law enforcement but also by companies and individuals (Mac, Haskins, and McDonald 2020a; 2020b). Importantly, as the use of FRT scales up, harms that seem small individually can aggregate and eventually reach a substantial societal magnitude.

Clearly, then, legal boundaries must be set for the use of FRT, and they must meet a range of requirements.

FRT use must cause no harm

Any attempt at regulating FRT must first and foremost pursue the goal of preventing harm. Put another way: people and groups should not be made worse off by the use of FRT than they were before, and no specific group within society should be made worse off than any other. An adequate regulatory framework must therefore address the three main sources of harm potentially caused by FRT: inaccuracy, bias, and invasions of privacy.

To address accuracy issues, governments must ensure that facial recognition systems meet certain accuracy benchmarks, for example producing error rates no higher than those of human judgment. Further, to counteract bias, governments might require comparable error rates across different sub-populations, for instance as identified by gender or skin color. Finally, governments must ensure that organizations do not deploy FRT in ways that disproportionately invade privacy, such as constant monitoring, or that suppress free movement and speech.
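
Such ex ante benchmarks could, in principle, be checked mechanically before deployment. The sketch below is a hypothetical illustration: it computes misidentification rates overall and per sub-population from evaluation results and tests them against an overall ceiling and a maximum between-group gap. The thresholds, group labels, and data format are assumptions made for the example, not values taken from any existing law or standard.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, was_error) pairs from an evaluation dataset."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, was_error in records:
        totals[group] += 1
        errors[group] += int(was_error)
    return {group: errors[group] / totals[group] for group in totals}

def meets_benchmarks(records, max_overall_rate=0.01, max_group_gap=0.005):
    """True if both the overall error rate and the gap between groups stay within limits."""
    records = list(records)
    overall = sum(int(was_error) for _, was_error in records) / len(records)
    by_group = error_rates_by_group(records)
    gap = max(by_group.values()) - min(by_group.values())
    return overall <= max_overall_rate and gap <= max_group_gap

# Hypothetical evaluation results: (demographic group, was the match an error?)
results = [("group_a", True), ("group_a", False), ("group_b", False), ("group_b", False)]
print(meets_benchmarks(results, max_overall_rate=0.3, max_group_gap=0.6))  # True
```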

FRT use must be transparent and auditable

Transparency and the ability to audit facial recognition systems are paramount in verifying the harmlessness of a system and ultimately in holding organizations accountable. Organizations using FRT must be required to produce verifiable documentation and to make it accessible for both regulators and the general public. Such disclosures should include information on the system’s exact purpose, error rates, and any differences in error rates across sub-populations. In addition, mechanisms should exist to allow for independent testing of facial recognition systems by regulators and independent auditors.
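
As one illustration of what such a disclosure could contain, the following is a minimal, machine-readable record covering the items named above. The structure and field names are hypothetical; no existing reporting standard is implied.

```python
from dataclasses import dataclass, field

@dataclass
class FRTDisclosure:
    operator: str                       # organization deploying the system
    purpose: str                        # the system's exact, stated purpose
    overall_error_rate: float           # measured on an independent test set
    error_rates_by_group: dict = field(default_factory=dict)  # e.g. by gender or skin tone
    audit_access: bool = False          # whether regulators and auditors can test the system

# Hypothetical example of a published disclosure.
example = FRTDisclosure(
    operator="Example Transit Authority",
    purpose="Access control at staff-only entrances",
    overall_error_rate=0.008,
    error_rates_by_group={"group_a": 0.007, "group_b": 0.009},
    audit_access=True,
)
```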

Biometric data must be protected

Regulating FRT is inextricably linked to data protection law as facial recognition applications require storing and processing biometric data—in this case, images of individuals’ faces. Therefore, there can be no regulation of facial recognition that truly protects individual rights without sufficient data protection legislation.

Currently, there is no federal data protection law in the U.S. However, a small number of states have enacted biometric privacy laws granting individuals varying degrees of control over their biometric data and, in the case of Illinois, a private right of action for violations (Prescott 2020). Further, the California Consumer Privacy Act treats biometric data like all other types of personal data, thus providing at least some protection for it.

To protect people’s privacy, the collection and processing of biometric information must be conditional on meaningful consent, even for legitimate, harmless use-cases of FRT. Where such consent cannot genuinely be obtained, as with video surveillance in public spaces, biometric information should not be collected. A look at the European Union’s General Data Protection Regulation is instructive: it imposes far-reaching restrictions on the processing of biometric data, with only a few exceptions based on public interest or an individual’s consent.

Existing legislation and current proposals regarding FRT are inadequate

The push for regulation of FRT first gained momentum at the local level: in 2019 and 2020, several cities, including San Francisco and Boston, banned their government agencies from using FRT. Similarly, multiple states have introduced legislation seeking to ban government or law enforcement use of FRT, while other states have prohibited the use of FRT on police body-camera footage. In March 2020, Washington State enacted the first state-level law addressing the use of FRT more broadly. The law allows use by government agencies under a specific set of conditions requiring transparency, independent testing, and human review, with additional restrictions for law enforcement use.

At the federal level, there are currently eleven bills in Congress that would restrict the use of FRT, some focused on private-sector use and others on use by law enforcement. All of them are still in committee and unlikely to pass, as they lack bipartisan support and the current congressional term is nearing its end. At the same time, many voices from civil society and academia object to these more permissive approaches to regulating FRT and instead call for outright bans or moratoria (Selinger and Hartzog 2019; Stark 2019).

All of these regulations and proposals would be an improvement over the laissez-faire status quo, but they still have substantial shortcomings. Approaches limiting their scope to either the public or the private sector fail to acknowledge that FRT can be abused and cause harm in both. Similarly, proposals targeting only law enforcement neglect potential harms resulting from other government uses of FRT. To be effective, restrictions on the use of FRT should apply to all public- and private-sector organizations. Further, whereas some laws and proposals require a certain degree of transparency and accountability, none of them formulates ex ante performance benchmarks that would need to be met before a system is deployed. They thus lack any real mechanism to prevent technically inadequate facial recognition systems from being used.

Many advocates of a blanket ban on FRT argue that only a complete ban will prevent us from opening a “Pandora’s box” and ushering in the near-guaranteed ubiquity of FRT (see, for example, Stark 2019). However, such a binary approach discounts the fact that it is possible to limit the deployment of FRT with a ban while allowing a few select use cases consistent with the principles outlined above. Although some risk will always remain, one can conceive of applications that may be deemed harmless, for example applications improving accessibility for the visually impaired or helping TV broadcasters identify public figures at an event.

In light of this analysis, none of the existing regulations and proposals sufficiently protect people from harm while simultaneously allowing legitimate, harmless applications of FRT. I therefore propose a multi-stakeholder regulatory framework that addresses these concerns.

FRT requires a default ban—but with multi-stakeholder identified exceptions

It is unlikely that policymakers will be able to design a set of criteria that identifies, ex ante, all legitimate, harmless applications of FRT. Nor can all harmful uses of FRT be identified today. Therefore, the decision whether a given application should be allowed or prohibited should be made on a case-by-case basis. The approach put forward here reverses the logic of most regulatory proposals, which ban FRT use in certain domains: it makes prohibition the default while providing a mechanism for granting exceptions.

Such a case-by-case approval mechanism could take the form of a multi-stakeholder committee representing industry, government, academia, and civil society, whose mission would be to review applications for exceptions. In the U.S., such a committee could be placed under the auspices of the National Telecommunications and Information Administration (NTIA) and should consist in equal parts of members representing civil society (such as human and civil rights organizations), FRT users (from both industry and government), and independent researchers.

Applications submitted by organizations seeking to deploy FRT would contain information on the specific use-case. The requesting organization would likewise bear the burden of explaining how that use-case will not result in harm or in violations of civil and human rights. Finally, organizations would also propose benchmarks for error rates and for error-rate differences across sub-populations.

The committee would subsequently assess and deliberate over whether an application meets the criterion of harmlessness and whether appropriate benchmarks have been chosen. If, by qualified majority, the committee reaches a positive decision, an exception and documentation would be added to a public register.

Such a public register serves two functions: first, publishing exceptions would help avoid duplicate applications; second, it would create transparency, allowing third parties to scrutinize use-cases or identify applications for audit. The committee should also be able to extend exceptions to an entire sector or category of use following the review of an individual application. Some exceptions, such as FRT use for academic research or existing, harmless, widespread applications, could be granted from the outset.

After exceptions are granted, those deploying FRT would have to comply with additional reporting requirements and provide application programming interfaces enabling independent testing. As discussed above, a law implementing such a multi-stakeholder governance system would also need to be accompanied by comprehensive (biometric) privacy legislation.

Failing to act on FRT results in harm and violations of civil and human rights

Advances in, and the increasing use of, FRT have brought myriad concerns about the technology’s risks to the forefront of public debate. To prevent potential harm, discrimination, and other violations of civil and human rights brought about by the unregulated use of FRT, comprehensive federal legislation is needed now. Any prospective regulatory framework must meet three criteria: harmlessness, transparency and accountability, and the protection of biometric data, while also allowing for legitimate innovation in the space. Existing regulations and proposals fail to meet these criteria.

The multi-stakeholder approach proposed here would institute a default ban on FRT, in both the public and private sectors, with a mechanism to grant exceptions. Following a multi-stakeholder governance model, a committee of experts from industry, government, academia, and civil society would be tasked with reviewing applications from organizations seeking to deploy FRT. Such an approach ensures that concerns from across society are taken into consideration and that harmful and discriminatory applications are not put into use. Further, listing exceptions and providing documentation in a public register would foster transparency and accountability.

By adopting such a regulatory framework, the U.S. would not only prevent harm and uphold the rights of its people—but also re-assert the public’s control over technology instead of deferring to the morality of the open market.

 

Bibliography

Bloustein, Edward J. 1964. “Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser.” New York University Law Review, no. 39: 962–1007.

Buckley, Chris, Paul Mozur, and Austin Ramzy. 2019. “How China Turned a City Into a Prison.” The New York Times, April 4, 2019, sec. World. https://www.nytimes.com/interactive/2019/04/04/world/asia/xinjiang-china-surveillance-prison.html.

Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html.

Byler, Darren. 2019. “China’s Hi-Tech War on Its Muslim Minority.” The Guardian, April 11, 2019, sec. News. https://www.theguardian.com/news/2019/apr/11/china-hi-tech-war-on-muslim-minority-xinjiang-uighurs-surveillance-face-recognition.

Canon, Gabrielle. 2019. “How Taylor Swift Showed Us the Scary Future of Facial Recognition.” The Guardian, February 15, 2019, sec. Technology. https://www.theguardian.com/technology/2019/feb/15/how-taylor-swift-showed-us-the-scary-future-of-facial-recognition.

Coleman, Clive. 2019. “Police Facial Recognition Surveillance Court Case Starts.” BBC News, May 21, 2019, sec. UK. https://www.bbc.com/news/uk-48315979.

Dachwitz, Ingo. 2018. “Überwachungstest am Südkreuz: Geschönte Ergebnisse und vage Zukunftspläne.” Netzpolitik.org (blog). October 16, 2018. https://netzpolitik.org/2018/ueberwachungstest-am-suedkreuz-geschoente-ergebnisse-und-vage-zukunftsplaene/.

Fight for the Future. n.d. “Ban Facial Recognition.” Ban Facial Recognition. Accessed April 18, 2020. https://www.banfacialrecognition.com.

Fussey, Pete, and Daragh Murray. 2019. “Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology.” Colchester: Human Rights Centre, University of Essex. https://48ba3m4eh2bf2sksp43rq8kk-wpengine.netdna-ssl.com/wp-content/uploads/2019/07/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report.pdf.

Garvie, Clare, Alvaro M Bedoya, and Jonathan Frankle. 2016. “The Perpetual Line-Up – Unregulated Police Face Recognition in America.” Washington, D.C.: Center on Privacy & Technology at Georgetown Law. https://www.perpetuallineup.org/.

Grother, Patrick, Mei Ngan, and Kayee Hanaoka. 2019. “Face Recognition Vendor Test – Part 3: Demographic Effects.” NIST IR 8280. Gaithersburg, MD: National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280.

Mac, Ryan, Caroline Haskins, and Logan McDonald. 2020a. “Clearview’s Facial Recognition App Has Been Used By The Justice Department, ICE, Macy’s, Walmart, And The NBA.” BuzzFeed News, 2020. https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement.

———. 2020b. “Secret Users Of Clearview AI’s Facial Recognition Dragnet Included A Former Trump Staffer, A Troll, And Conservative Think Tanks.” BuzzFeed News, 2020. https://www.buzzfeednews.com/article/ryanmac/clearview-ai-trump-investors-friend-facial-recognition.

Prescott, Natalie A. 2020. “The Anatomy of Biometric Laws: What U.S. Companies Need To Know in 2020.” The National Law Review (blog). 2020. https://www.natlawreview.com/article/anatomy-biometric-laws-what-us-companies-need-to-know-2020.

Selinger, Evan, and Woodrow Hartzog. 2019. “Opinion | What Happens When Employers Can Read Your Facial Expressions?” The New York Times, October 17, 2019, sec. Opinion. https://www.nytimes.com/2019/10/17/opinion/facial-recognition-ban.html.

Singer, Natasha. 2018. “Microsoft Urges Congress to Regulate Use of Facial Recognition.” The New York Times, July 13, 2018, sec. Technology. https://www.nytimes.com/2018/07/13/technology/microsoft-facial-recognition.html.

Snow, Jacob. 2018. “Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots.” American Civil Liberties Union (blog). 2018. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.

Stark, Luke. 2019. “Facial Recognition Is the Plutonium of AI.” XRDS: Crossroads, The ACM Magazine for Students 25 (3): 50–55. https://doi.org/10.1145/3313129.

 

Maximilian Gahntz is passionate about promoting equity and accountability in the development and deployment of technology. He is an incoming fellow working on artificial intelligence and digital rights with the Mercator Fellowship on International Affairs, a postgraduate program sponsored by the German Foreign Office. Max holds Master’s degrees in Public Administration and Public Policy from Columbia University and Sciences Po Paris as well as a Bachelor’s degree from the University of Konstanz, Germany. You can find him on Twitter at @mgahntz.

 

 

Edited by: Derrick Flakoll
