Facial Recognition Adoption Remains Unchecked Even as Government Warns of Anti-Minority Bias

Posted on January 3, 2020

A new piece by Motherboard reports that the US National Institute of Standards and Technology (NIST) has warned that facial recognition technology remains dangerously biased against people of non-white racial backgrounds, women, and other historically marginalized groups. Specifically, the NIST study found that most currently available commercial facial recognition software misidentifies members of these groups at up to 100 times the rate at which it misidentifies white male faces.

As part of its analysis, NIST reviewed almost 200 facial recognition products and tested them against 18 million face images drawn from federal government databases. The sheer volume of data lends significant credibility to NIST's conclusions.

The study also reinforces the urgency of civil libertarians' calls to impose limits on facial recognition even as its adoption accelerates. For one thing, many local jurisdictions already have sprawling camera infrastructure in place that can easily be augmented with facial recognition algorithms. Such networks include not only conventional CCTV systems but also rapidly growing ones built from home surveillance products, most notably the Ring doorbell camera. Law enforcement agencies around the country have already begun clamoring for Ring's developers to add facial recognition to the product. And considering the unethically cozy, even symbiotic, relationship between local police and Ring (which Motherboard has also reported on), it is not out of the question that Ring may indulge this request.

Perhaps more alarming is the ease with which local law enforcement can sidestep regulation governing facial recognition. Despite laws prohibiting the collection of biometric data and the deployment of facial recognition without citizens' consent, law enforcement in Marbella, Spain have gone ahead with facial recognition unobstructed. They have avoided the ire of the courts by simply arguing that the software analyzes only data that any police officer on patrol would be privy to, without addressing the far greater depth of scrutiny, and unfailing memory, that algorithmic policing brings. If Spanish authorities can circumvent legal limits on facial recognition so easily, it is conceivable that US law enforcement could do the same with, at most, marginally more difficulty.

The harms that stem from the disproportionate misidentification of minority groups cannot be overstated. Given that non-white racial groups are already over-policed, sharply reduced accuracy on subjects from these backgrounds can lead to, at a minimum, an increased legal burden to prove wrongful identification and, at worst, wrongful incarceration.

With so much at stake, civil liberties campaigners would do well to cite the NIST report when pressuring legislative bodies at all levels of government to adopt durable protections against facial recognition abuse, especially in an election year.

You can read the full article from Motherboard here.