Cities across the US have passed bans on facial recognition, though the strength of the regulations varies. Portland, Oregon, banned facial recognition from all government and commercial use, while other cities restrict only police use.
Some cities, like Detroit, have enacted lighter measures, such as allowing facial recognition to be used only when investigating violent crimes, while police in New York have been able to use the technology for crimes like shoplifting.
On Oct. 9, a New York judge decided in a package-theft case that facial recognition identification could be submitted as evidence in the trial, but he noted that lawmakers should set limits on how the technology could be used.
Those sorts of issues, and the intrusiveness of facial recognition generally, have prompted widespread calls for regulation, but there's debate among technology companies, lawmakers and civil rights groups on where to draw the line.
The US has no federal regulations on facial recognition, leaving thousands of police departments to set their own limits -- a concern for civil liberties, advocates say. While some members of Congress propose an indefinite nationwide ban on police use, other bills would still allow it with a warrant or would restrict only commercial use.
Police often frame facial recognition as a necessary tool to solve the most heinous crimes, like terrorist attacks and violent assaults, but researchers have found that the technology is more frequently used for low-level offenses.
In a recent court filing, the NYPD noted that it's turned to facial recognition in more than 22,000 cases in the last three years.
"Even though the NYPD claims facial recognition is only used for serious crimes, the numbers tell a different story," said Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project. "As facial recognition continues to grow, it's being routinely deployed for everything from shoplifting to graffiti."
Asked for comment, an NYPD spokeswoman pointed to a 2019 opinion article by Police Commissioner James O'Neill titled "How Facial Recognition Makes You Safer." In the piece, O'Neill talked about how facial recognition had been used to make arrests in murder, robbery and rape cases, but he didn't disclose how often it was used for low-level crimes.
The department's facial recognition policy, established in March, allows the technology to be used for any crime, no matter the severity. Without such limits, privacy advocates say, police have used the technology more frequently for petty thefts than for dangerous crimes.
The program was used in a $12 shoplifting case in Oregon in 2018. Those cases aren't highlighted in Amazon's marketing material, which plays up how the technology is used to find leads on the victims of human trafficking.
At The Wall Street Journal's Tech Live virtual conference on Oct. 20, Hoan Ton-That, CEO of Clearview AI, said it isn't the company's responsibility to make sure its technology is being properly used by its thousands of police partners.
Though the company has its own guidelines, Ton-That said Clearview AI wouldn't be enforcing them, saying that "it's not our job to set the policy as a tech company."
Facial recognition without limits
Before Detroit established its facial recognition policy, the technology led to the wrongful arrests of at least two Black men in the city -- both falsely accused of being involved in theft cases.
Robert Williams was arrested in January and accused of stealing about $3,800 worth of watches after Detroit's facial recognition falsely matched surveillance footage to his driver's license photo. In May 2019, the same facial recognition program wrongly identified Michael Oliver in a larceny case.
Facial recognition also has well-documented accuracy problems, with researchers finding that the technology misidentifies people of color and women at higher rates.
When police departments can use the technology without limits, it increases the chances of mistakes and threatens privacy, said Andrew Guthrie Ferguson, author of The Rise of Big Data Policing and a law professor at the University of the District of Columbia.
"Facial recognition should never be used for misdemeanor or low-level felony cases," Ferguson said. "Technology that can destroy privacy in public should be used sparingly and under strict controls."
Without any limits, police can use facial recognition however they please, and in many cases, arrested suspects don't even know that the flawed technology was used.
Williams didn't know that Detroit police had used facial recognition to find him until an investigator mentioned the detail during their conversation. Attorneys representing protesters in Miami didn't know that police used facial recognition in their arrests, according to an NBC Miami report. Police used facial recognition software in a $50 drug dealing case in Florida in 2016 but made no mention of it in the arrest report.
In a paper published in October 2019, Ferguson recommended limiting facial recognition to serious felonies, similar to how police restrict the use of wiretapping. He said it's dangerous to assume police should be allowed to use technology as they wish, saying it could damage people's privacy in the long run.
"That assumption is based on valuing the cost of crime higher than the cost to privacy, security and a growing imbalance of police power," Ferguson said. "Prosecuting low-level crimes at the expense of creating an extensive surveillance system may not be the balance society needs."
A full ban
Limits would be a welcome start, but privacy advocates argue they're not enough.
Activists in Detroit are still working to get facial recognition banned in the city after the City Council voted to renew its contract in September. The police department's limits on reserving facial recognition for violent crimes came only after months of protests, said Tawana Petty, director of the Data Justice Program for the Detroit Community Technology Project.
Because of the technology's track record for mistakes, she said, any use of it, even under the strictest regulations, leaves the potential for false arrests.
In a City Council meeting in June, Detroit's police chief, James Craig, said the facial recognition software misidentified people 96% of the time without human intervention. By Oct. 12, the police had used facial recognition on Black people about 97% of the time, according to the department's weekly report.
"My stance is that there is potential to lock people up for violent crimes they didn't commit," Petty said. "The technology is a dangerous assault on the civil liberties and privacy rights we all deserve to have protected."