In the last five years, facial recognition has become a battleground for the future of artificial intelligence (AI). This controversial technology encapsulates public fears about inescapable surveillance, algorithmic bias, and dystopian AI. Cities across the United States have banned the use of facial recognition by government agencies and prominent companies have announced moratoria on the technology’s development.
But what does it mean to be recognized? Numerous authors have sketched out the social, political, and ethical implications of facial recognition technology. These important critiques highlight the consequences of false positive identifications, which have already resulted in the wrongful arrests of Black men, as well as facial recognition’s effects on privacy, civil liberties, and freedom of assembly. In this essay, however, I examine how the technology of facial recognition is intertwined with other forms of social and political recognition, and I highlight how technologists’ efforts to “diversify” and “de-bias” facial recognition may actually exacerbate the very discriminatory effects they seek to resolve. Within the field of computer vision, the problem of biased facial recognition has been interpreted as a call to build more inclusive datasets and models. I argue instead that researchers should critically interrogate what can’t, or shouldn’t, be recognized by computer vision.
Recognition is one of the oldest problems in computer vision. For researchers in this field, recognition is a matter of detection and classification. Or, as the textbook Machine Vision states, “The object recognition problem can be defined as a labeling problem based on models of known objects.”