Facial recognition is a blossoming field of technology that is at once exciting and problematic. If you've ever unlocked your phone by looking at it, or asked Facebook or Google Photos to go through an unsorted album and show you pictures of your kids, you've seen facial recognition in action.
Whether you want it to or not, facial recognition (sometimes called simply "face recognition") is poised to play an ever-growing role in your life. Your face could be scanned at airports or concerts with or without your knowledge. You could be targeted by personalized ads thanks to cameras at shopping malls. Facial recognition has plenty of upside, too. The tech could help smart home gadgets get smarter, sending you notifications based on who it sees and offering more convenient access to friends and family.
But at the very least, facial recognition raises questions of privacy. Experts have concerns ranging from the overreach of law enforcement, to systems with hidden racial biases, to hackers gaining access to your secure information.
Over the next few weeks, CNET will be diving into facial recognition with in-depth pieces on a wide variety of topics, including the science that allows it to work and the implications, both positive and negative, for many of its applications. To get you up to speed, here's a brief overview including what facial recognition is, how it works, where you'll find it in use today, as well as a few of the implications of this rapidly expanding corner of technology.
What is facial recognition?
Facial recognition is a form of biometric authentication, which uses unique body measurements to verify your identity. Specifically, it identifies people by measuring the distinctive shape and structure of their faces. Different systems use different techniques, but at its core, facial recognition rests on the same principles as other biometric methods, such as fingerprint scanning and voice recognition.
How does facial recognition work?
All facial recognition systems capture either a two- or three-dimensional image of a subject's face, and then compare key information from that image to a database of known images. For law enforcement, that database could be collected from mugshots. For smart home cameras, the data likely comes from pictures of people you've identified as relatives or friends via the accompanying app.
Woodrow "Woody" Bledsoe first developed facial recognition software in the 1960s at a firm called Panoramic Research, working with two-dimensional images; funding for the research came from an unnamed intelligence agency.
Even now, most facial recognition systems rely on 2D images, either because the camera doesn't have the ability to capture depth information -- such as the length of your nose or the depth of your eye socket -- or because the reference database consists of 2D images such as mugshots or passport photos.
2D facial recognition primarily uses landmarks such as the nose, mouth and eyes to identify a face, gauging both the width and shape of the features, and the distance between the various features of the face. Those measurements are converted to a numerical code by facial recognition software, which is used to find matches. This code is called a faceprint.
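The idea of a faceprint can be sketched in a few lines of code. This is an illustrative toy example, not any vendor's actual algorithm: the landmark coordinates are made up, and the choice of which distances to measure is an assumption.

```python
import math

# Hypothetical 2D landmark positions (x, y) in pixels -- invented
# values for illustration, not the output of a real face detector.
landmarks = {
    "left_eye": (120, 95),
    "right_eye": (180, 95),
    "nose_tip": (150, 140),
    "mouth_center": (150, 175),
}

def distance(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def faceprint(lm):
    """Encode a face as a vector of inter-landmark distances,
    normalized by the eye-to-eye distance so the code doesn't
    change when the same face appears larger or smaller."""
    eye_span = distance(lm["left_eye"], lm["right_eye"])
    pairs = [
        ("left_eye", "nose_tip"),
        ("right_eye", "nose_tip"),
        ("nose_tip", "mouth_center"),
        ("left_eye", "mouth_center"),
        ("right_eye", "mouth_center"),
    ]
    return [distance(lm[a], lm[b]) / eye_span for a, b in pairs]

print(faceprint(landmarks))
```

Real systems measure far more landmarks and use learned features rather than hand-picked distances, but the output is the same in spirit: a list of numbers that stands in for the face.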
This geometric system can struggle with different angles and lighting. A straight-on shot of a face will show a different distance from nose to eyes, for instance, than a shot of a face turned to the side. The problem can be somewhat mitigated by mapping the 2D image onto a 3D model and undoing the rotation.
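The angle problem can be demonstrated with a little geometry. The sketch below, using made-up 3D points and a simplified orthographic camera, rotates a head about its vertical axis and shows that the projected 2D distance between two landmarks changes with the pose:

```python
import math

def project_after_yaw(point3d, yaw_deg):
    """Rotate a 3D point about the vertical axis, then drop the depth
    coordinate to simulate a 2D camera image (orthographic projection,
    a deliberate simplification)."""
    x, y, z = point3d
    t = math.radians(yaw_deg)
    xr = x * math.cos(t) + z * math.sin(t)
    return (xr, y)

# Hypothetical landmarks in meters: the nose tip sits slightly in
# front of the plane of the eyes.
nose = (0.0, 0.0, 0.05)
left_eye = (-0.03, 0.04, 0.0)

# Same face, two head poses: straight-on vs. turned 40 degrees.
d0 = math.dist(project_after_yaw(nose, 0), project_after_yaw(left_eye, 0))
d40 = math.dist(project_after_yaw(nose, 40), project_after_yaw(left_eye, 40))
print(d0, d40)  # the projected eye-to-nose distance differs between poses
```

Because the measured distances shift with rotation, a naive 2D faceprint of the same person can fail to match itself across angles, which is exactly what mapping onto a 3D model and undoing the rotation tries to correct.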
Adding a third dimension
3D facial recognition software isn't as easily fooled by angles and light and doesn't rely on average head size to guess at a faceprint. With cameras that sense depth, the faceprint can include the contours and curve of the face as well as depth of the eyes and distances from points like the tip of your nose.
Most cameras gauge this depth by projecting patterns of invisible light (typically infrared) onto a face and using sensors to measure how far various points of that light are from the camera itself. Even though these 3D sensors can capture much more detail than a 2D version, the basis of the technology remains the same -- turning the various shapes, distances and depths of a face into a numerical code and matching that code to a database.
If that database consists of 2D images, software needs to convert the 3D faceprint back to a 2D faceprint to get a match.
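The matching step itself can be pictured as a nearest-neighbor search with a distance threshold. The sketch below is a simplification of what real systems do; the enrolled faceprint vectors and the threshold value are invented for illustration.

```python
import math

def euclidean(a, b):
    """Distance between two faceprint codes of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical enrolled database: identity -> faceprint code.
database = {
    "alice": [0.90, 0.90, 0.58, 1.42, 1.42],
    "bob":   [0.80, 0.95, 0.70, 1.30, 1.55],
}

def identify(probe, db, threshold=0.15):
    """Return the closest enrolled identity, or None when no
    enrolled faceprint is similar enough to count as a match."""
    best_name, best_dist = None, float("inf")
    for name, code in db.items():
        d = euclidean(probe, code)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

print(identify([0.91, 0.89, 0.60, 1.40, 1.43], database))
```

The threshold is the knob behind the trade-off discussed later in this piece: set it loose and you get false positives (a stranger matches you); set it tight and you get false negatives (your own tired face fails to match).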
Apple's Face ID uses 30,000 infrared dots that map the contours of your face. The iPhone then remembers the relative location of those dots the next time you try to unlock your phone.
Even these more advanced systems can be defeated by something as simple as a change in facial expression, or by glasses or scarves that obscure parts of your face. Apple's Face ID can struggle to match your tired, squinting, just-woke-up face to your made-up, caffeinated, ready-for-the-day face.
Reading your pores
A more recent development, called skin texture analysis, could help future applications overcome all of these challenges. Developed by Identix, a tech company focused on developing secure means of identification, skin texture analysis differentiates itself by functioning at a much smaller scale. Instead of measuring the distance between your nose and your eyes, it measures the distance between your pores. It then converts those numbers into a mathematical code. This code is called a skinprint.
This method could theoretically be precise enough to tell the difference between twins. Identix is currently working to integrate skin texture analysis into facial recognition systems alongside a more conventional 3D face map; the company claims the technique increases accuracy by 25 percent.
Where is facial recognition being used?
While Bledsoe laid the groundwork for the tech, modern facial recognition began in earnest in the 1980s and '90s thanks to mathematicians at MIT. Since then, facial recognition has been integrated into all manner of commercial and institutional applications with varying degrees of success.
The Chinese government uses facial recognition for large-scale surveillance through public CCTV cameras, both to catch criminals and to monitor the behavior of individuals, with the intent of turning the data into a score. Seemingly harmless offenses like buying too many video games or jaywalking can lower your score. China uses that score in a sort of "social credit" system that determines whether an individual should be allowed to get a loan or buy a house, or even do much simpler things like board a plane or access the internet.
Facial recognition could have large implications for retail outlets and marketers as well, beyond simply watching for thieves. At CES 2019, consumer goods giant Procter & Gamble showed a concept store where cameras could recognize your face and make personalized shopping recommendations.
Connected cams compare faces with others they've seen before so you can customize notifications based on who the camera sees. All the models we've tested take a while to learn faces, as they need to recognize the members of your household at various angles and in various outfits. Once the cameras learn, facial recognition makes your connected security system that much smarter, with notifications more relevant to what you actually want to know.
Beyond the security uses in the home, even robots like Lovot and Sony's Aibo robot dog can recognize faces. Aibo and others learn faces not to track who comes and goes, but to adapt to the specific preferences of different people over time.
What are the implications?
Unlike other forms of biometric authentication, cameras can gather information about your face with or without your knowledge or consent. If you're a privacy-minded person, you could potentially be exposing your data when in a public place without knowing it.
According to a report by BuzzFeed News, the US Customs and Border Protection agency plans to implement facial recognition to verify the identity of passengers on international flights at airports across the country. Documents the Electronic Privacy Information Center shared with BuzzFeed suggest that CBP skipped gathering public feedback before starting to implement these systems, that the systems have a questionable accuracy rate, and that few established privacy regulations govern what airlines can do with this facial data after they collect it.
NBC News reported that the databases of pictures used to improve facial recognition often come from social media sites, without the consent of the subject or the photographer. Companies like IBM have the stated goal of using these images to improve the accuracy of facial recognition, particularly for people of color. Theoretically, by ingesting data from a large catalog of faces, a system can fine-tune its algorithms to account for a wider variety of facial structures.
The Electronic Frontier Foundation notes that current facial recognition systems tend to produce a disproportionately high number of false positives when identifying minorities. NBC's story also details how it can be tedious, if not impossible, for private citizens to opt out of having their pictures used in these databases.
A proposed Ring feature would have had video doorbells monitor neighborhoods for known sex offenders and people on "most wanted" lists, then automatically notify law enforcement. The idea was criticized as likely to target people unfairly deemed a threat, and potentially even political activists.
The science behind facial recognition is certainly exciting, and the tech could lead to a safer and more personal smart home, but facial recognition could easily result in a loss of privacy, unjust profiling and violations of personal rights. While the impact of facial recognition is still being determined and debated, it's important to recognize that facial recognition is no longer some distant concept reserved for science fiction. For better or worse, facial recognition is here now and spreading quickly.
Check back throughout the month as CNET dives deeper into the implications of this developing technology.
Published March 18 at 5:00 a.m. PT. Updated at 3:00 p.m. PT.