The Legal Examiner

Facial recognition software flawed, as government use increases

Facial recognition software, often used by law enforcement to identify and track criminals and terrorists, is seriously flawed — just ask Michael Oliver of Detroit, one of several people who have been wrongfully arrested and jailed.

Or ask the National Institute of Standards and Technology. The Department of Commerce agency found in a December 2019 study that facial recognition systems are up to 100 times more likely to misidentify people of color, particularly Black and East Asian people, than white people.

Multiple lawsuits have been filed either seeking millions in damages for false arrest and imprisonment or to sharply curtail the use of facial recognition systems.

Meanwhile, the Department of Homeland Security is planning to expand a pilot facial recognition program to all international transportation hubs. The goal is to capture facial images of every person coming into or leaving the country, whether by plane or ship.

Facial identification technology often yields erroneous results

The facial data of 117 million people is stored in searchable U.S. networks that are accessible to law enforcement.


A documentary, Coded Bias, which premiered at the Sundance Film Festival in 2020, examined how the algorithm-driven artificial intelligence technology that accesses this data often perpetuates racism and sexism and infringes on civil liberties.

MIT researcher Joy Buolamwini, whose work the film follows, found that facial recognition software had difficulty identifying the faces of darker-skinned people and distinguishing between male and female faces.

Because of repeated failures of facial recognition software, in June 2020 Amazon imposed a one-year ban on the sale and use of its facial recognition technology, Rekognition, by police. The only exception is for programs involving the rescue of human trafficking victims and the reunification of missing children with their families.

“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested,” the company said in a news release.

After the police killing of George Floyd in Minneapolis in May 2020, IBM and Microsoft also decided to halt selling facial recognition software to police organizations.

In a June 2020 letter to Congress, IBM CEO Arvind Krishna wrote that although technology can increase transparency and help police protect communities, it should not promote discrimination or racial injustice:

IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.  

Krishna is calling for a prohibition against using the technology for “mass surveillance, racial profiling, or violations of basic human rights and freedoms.”

In a 2018 test conducted by the American Civil Liberties Union, Rekognition misidentified 28 members of Congress as other people who had been arrested for crimes.

“The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.),” according to the ACLU.

There have been bipartisan calls for federal legislation to regulate the technology, but no national laws have gone into effect.

Increasingly, states and cities are sharply regulating the use of facial recognition software. In September 2020, Portland, OR, became the first American city to ban both government and private use of facial recognition, according to a Wired article. Other governments that have adopted partial bans include San Francisco and San Diego in California; Boston and Cambridge in Massachusetts; and New Orleans, Louisiana; New York state has banned the technology in public schools.

States continue to debate whether to embrace facial recognition technology. In December 2020, Massachusetts Gov. Charlie Baker blocked a legislative provision that would have barred police and public authorities from using facial recognition technology unless they first obtained a warrant to search the state's driver license database.

Meanwhile, both IBM and Microsoft are now lobbying the new Biden administration to create new rules to govern facial recognition technologies.

COVID-19 masks foiling facial recognition technology

As public security cameras proliferate throughout the country, the National Institute of Standards and Technology is analyzing their effectiveness in identifying people from their facial images.

So far, federal studies have found that facial recognition systems have far more trouble identifying the faces of people of color than those of white people. Of all ethnic groups, these systems produce the highest false-positive rate with Native American faces.

In this age of COVID-19 masking, algorithms used by most facial recognition systems "continue to give increased false non-match rates" when surveilled people wear masks.

The most recent study by the Department of Commerce agency, released in July 2020, found that most of the facial recognition algorithms tested have even more difficulty producing accurate results when people are wearing masks that cover much of their faces.

The size and shape of the mask also matter. Full face masks generate twice as many failed identifications as rounder, smaller masks. A mask's color also contributes to false readings: black masks result in more failures than lighter-colored masks, according to NIST.

“Even the best of the 89 commercial facial recognition algorithms tested had error rates between 5% and 50% in matching digitally applied face masks with photos of the same person without a mask,” according to the institute.

“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” said Mei Ngan, an institute computer scientist and an author of the report. 

Since March 2020, the institute says, efforts to improve facial recognition software have had some success in reducing error rates on masked faces:

“When comparing error rates for unmasked versus masked faces … false rejection rates in (facial recognition) algorithms submitted since mid-March 2020 decreased by as much as a factor of 10 over their pre-pandemic algorithms.”

Arrests resulting from facial recognition targeted by lawsuits 

In July 2019, the Detroit Police Department arrested Michael Oliver, 26, after facial recognition software identified him as the suspect in a larceny. He was jailed for three days before authorities discovered he was innocent.

While incarcerated, he slept on a cement floor and ate bologna sandwiches for breakfast, lunch and dinner. 

Although he looked nothing like the face captured in a video of the criminal incident, police refused to release him.

Eventually, the prosecutor dropped the charges and even apologized. That wasn’t enough for Oliver, who is suing the city and the arresting detective for at least $12 million, claiming the police were “grossly negligent” in their use of a faulty identification based on facial recognition software.

Nijeer Parks, 33, was jailed for 10 days, including four days in solitary confinement, after a facial recognition system identified him as the perpetrator in a 2019 incident at a Hampton Inn in Woodbridge, NJ, according to the Wall Street Journal. Police charged him with shoplifting, assault and drug possession.

Parks spent $5,000, his life savings, defending himself before the case against him was dismissed for lack of evidence. Now he is suing the police and the city for false arrest and imprisonment, emotional distress, racial profiling, use of excessive force by police, and violation of the Constitution's equal protection clause and the Eighth Amendment's prohibition against "cruel and unusual punishment."

In another Detroit case, Robert Williams, a 42-year-old Black man, was arrested at his home in front of his wife, family, and neighbors as the result of a similar misidentification by facial recognition software. He was innocent of the shoplifting charge.

The arrest “disrupted his family life, resulted in his unjustified jailing, and violated all norms of reasonable policing and investigation,” according to the ACLU of Michigan. It was not until weeks later that the charges were dropped. Repeated requests for copies of the police investigation report and evidence were unsuccessful.

Another lawsuit is working its way through federal court in Massachusetts. The ACLU is suing the U.S. Department of Justice, the Drug Enforcement Administration and the FBI, charging that the agencies’ surveillance technologies are threatening the privacy and civil rights of American citizens.

"There are likely many more wrongful interrogations, arrests, and possibly even convictions. … Unsurprisingly, all three false arrests that we know about have been of Black men, further demonstrating how this technology disproportionately harms the Black community. Law enforcement use of face recognition technology must be stopped immediately," Nathan Freed Wessler, an ACLU senior staff attorney, told the website Engadget recently.

Homeland Security expanding use of facial recognition

Despite growing evidence of faulty identifications by facial recognition software, the federal Department of Homeland Security is proceeding with plans to adopt its use throughout the country.

A DHS pilot program called Simplified Arrival, operated by Customs and Border Protection at 27 U.S. airports, compares international travelers entering and exiting the country with previously provided photos as an alternative to examining travel documents.

So far, the program has stopped 300 "imposters" among 55 million travelers from illegally entering the country with false documents, according to a federal report.

In a recent biometric technology rally involving 600 volunteers from 60 countries, an average of 93 percent of unmasked people and 77 percent of masked people were correctly identified, according to a ZDNet article citing DHS figures.

The CBP plans to collect faceprints of “all aliens” (foreign visitors) entering and leaving the country. Only U.S. citizens can opt out of the biometric verification program.

The proposal is opposed by a coalition of 16 civil rights groups led by the ACLU. The groups have filed a formal objection saying the CBP program “would pose grave risks to privacy and civil liberties — including harms from law enforcement agencies in the United States and foreign government agencies with which faceprints are shared.”