The Citizen Question: Making Identities Visible Via Facial Recognition Software at the Border

By Aaron Tucker on March 1st, 2021

In late 2018, JetBlue, American Airlines, and Delta began using facial recognition software (FRS) in American airports: JetBlue explained that the company had, “in partnership with U.S. Customs and Border Protection (CBP), today announced the roll-out of its first fully-integrated biometric self-boarding gate.” The airlines’ cooperation with the CBP, the Department of Homeland Security, and the Transportation Security Administration is part of a larger program, the Biometric Air Exit (BAE), aimed at “biometrically [verifying] the departure of foreign nationals from the United States.” In this way, the face included in a passport or visa, and captured by mechanisms like the BAE, becomes the citizen’s way of “proving” their worth as a citizen and their “belonging” to that nation. Examining how FRS is used to identify and sort citizenship within mechanisms like the BAE is immensely important; alongside this, the process by which “citizen” and “non-citizen” are defined as data points within such mechanisms needs to be made transparent. The categories of “citizen” and “non-citizen,” as sortable data categories, are not objective and are vulnerable to political purposes; building FRS with such categories at its core leaves the technology open to manipulation by political systems. Increasing transparency throughout systems like FRS serves dual purposes: it will generate informed and effective policy and regulation of the technology, and it will clarify the political motivations behind labeling certain individuals and populations as citizens and others as non-citizens.



Author Information

Aaron Tucker is a Ph.D. student in Cinema and Media Studies at York University, Toronto, Ontario, Canada.