We know that face masks help protect others from Covid-19, and it looks like they also provide some protection against facial recognition technology — for now. A preliminary study from the National Institute of Standards and Technology analyzed how well the technology fared when identifying people wearing face masks. Broadly speaking, the facial recognition algorithms designed before the pandemic struggled to recognize faces behind the masks.
The new government study reveals less about how poorly facial recognition algorithms deal with face masks than it does about how companies are already hard at work building algorithms that can adapt to new situations. The pandemic is showing how face mask adoption might end up making facial recognition technology even more powerful than it was before.
“The good news here is very short-lived,” Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, told Recode. “This just highlights that there’s a global arms race right now to develop facial recognition software that can track people, even when we are wearing masks.”
The error caused by mask-wearing isn’t too surprising. Anyone who’s tried to unlock their iPhone with Face ID while wearing a mask knows that the technology fails in the new scenario. Facial recognition algorithms are generally trained to identify you based on aspects of your facial geometry, and a face mask hides a huge portion of what the algorithm is trying to analyze, namely your nose and mouth, the NIST researchers explain.
The extent to which face masks can trip up algorithms has been serious enough that, amid the George Floyd protests, the Department of Homeland Security sent out a notice in May warning that “violent adversaries” of law enforcement could take advantage of mask-wearing to avoid being spotted by facial recognition. Of course, protesters themselves were concerned about the exact same surveillance technologies being used to threaten their civil liberties.
Now, the NIST research serves as evidence that masks are a real stumbling block for some facial recognition systems. The non-regulatory agency’s research looked at 89 facial recognition algorithms, including those from Panasonic and Samsung, and analyzed their performance on images of 1 million people. The study used photographs collected as people crossed the United States border as well as images that had been included in applications for immigration benefits. The first group of photos was then “digitally masked,” meaning that artificial shapes in various colors that mimicked masks were superimposed on the images of faces, obscuring the subject’s nose, mouth, and part of their cheeks.
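To make the "digital masking" idea concrete, here is a minimal sketch of the general technique: overwriting the lower-face region of an image with a flat mask color. This is not NIST's actual code — the agency used mask-shaped polygons in several shapes and colors, while this sketch uses a crude rectangle, and the coordinates and `apply_digital_mask` helper are assumptions made up for illustration.

```python
# Illustrative sketch of "digitally masking" a face image: superimpose a
# mask-colored region over the nose, mouth, and cheeks. Coordinates and the
# rectangular shape are simplifying assumptions, not NIST's methodology.
import numpy as np

def apply_digital_mask(image, nose_y, chin_y, left_x, right_x,
                       color=(173, 216, 230)):  # light blue, one of many options
    """Return a copy of the image with the lower-face region painted over."""
    masked = image.copy()
    masked[nose_y:chin_y, left_x:right_x] = color  # broadcast color over region
    return masked

# A synthetic 100x100 RGB "face" (uniform gray stands in for a real photo)
face = np.full((100, 100, 3), 200, dtype=np.uint8)
masked_face = apply_digital_mask(face, nose_y=45, chin_y=90, left_x=15, right_x=85)

# The nose/mouth region a matcher relies on is now gone from the masked copy.
print((masked_face != face).any())  # True
```

The point of the exercise is visible even in this toy version: everything below the nose line carries a single flat color, so any features an algorithm extracted from that region of the unmasked photo are unavailable in the masked one.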
The NIST study found that wearing masks can reduce the accuracy of facial recognition algorithms, and according to the agency’s press release, “the best of the 89 commercial facial recognition algorithms tested had error rates between 5% and 50% in matching digitally applied face masks with photos of the same person without a mask.” Some vendors’ algorithms performed better than others, and performance varied with the shape and color of the mask: the algorithms were generally more accurate on round masks and less accurate when subjects “wore” black masks rather than light blue ones.
On the surface, this would seem like good news for those who are worried about their privacy and interested in finding ways to spoof facial recognition technology. But again, these types of errors are likely temporary, as companies that produce facial recognition technology are racing to update their algorithms to better adapt to face coverings. As Recode previously reported, firms were already touting their algorithms’ ability to account for masks as early as February, and Panasonic indicated it had cracked the mask problem even earlier. Since the pandemic started, a slew of facial recognition companies, including UK-based Facewatch, California-based Sensory, and the China-based firms Hanwang and SenseTime, have all begun to tout their ability to recognize people wearing masks.
“I do think that this is a solvable problem, and that it will require continued research and development efforts to close the accuracy gap,” Shaun Moore, the CEO of TrueFace, whose technology was evaluated in the NIST study, said in an email. “The more (mask) data that we are able to train our algorithms on the better the performance will be.”
Fox Cahn, from the Surveillance Technology Oversight Project, offered a more dystopian interpretation of what’s to come. He dismissed the idea that countermeasures like anti-facial recognition shirts and makeup will hold up against facial recognition technology in the future. “We’ll get to the point where the cameras are so prolific — and the technology is so powerful,” he said, “that anything short of a full bodysuit is going to be trackable.”
NIST also hinted that the struggles of the technology it reviewed will be short-lived. One of the authors of the NIST report, computer scientist Mei Ngan, said the researchers “expect the technology to continue to improve” in identifying mask-wearing subjects. Accordingly, NIST plans to test more algorithms that have been updated to recognize people wearing masks in its next round of research. Meanwhile, independent researchers are using photos of people wearing masks posted online to build databases of images intended to help improve their facial recognition algorithms, as CNET reported in May.
Masks aren’t the first thing to expose facial recognition’s inaccuracies. For years, facial recognition systems have been flagged for being disproportionately inaccurate on women, people of color, and especially women with darker skin. Lauren Sarkesian, a senior policy counsel at the think tank New America’s Open Technology Institute, told Recode that the issue of masks and facial recognition serves as a reminder that the technology remains broadly unregulated in the United States, and we often don’t even know when it’s in use. While some localities have passed laws regulating or banning government use of the technology, there is still no national law regulating facial recognition, though there are several proposals.
“This technology is dangerous — both when it works and when it doesn’t,” Sarkesian said, “because as these accuracy issues are resolved in the algorithms, the surveillance power of the facial recognition technology grows.”
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.