Tech News

Facial Recognition Is Spreading Everywhere

Why This Matters

Facial recognition technology is rapidly expanding into various sectors, raising concerns about accuracy, bias, and potential misuse. As its applications grow, understanding the risks of errors and biases becomes crucial for consumers and the tech industry alike to ensure responsible deployment and safeguard individual rights.

Key Takeaways

Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.

Yet the story those photos tell inevitably contains errors. FRT makers, like the makers of any diagnostic technology, must balance two types of errors: false positives and false negatives. There are three possible outcomes.

Three possible outcomes:

a) The software identifies the suspect, judging a "probe" image of the suspect (such as a mug shot) and a face captured in security footage to be of the same person. Success!

b) The software matches another person in the footage to the suspect's probe image. This false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice.

c) The software fails to find a match at all. The suspect may indeed be evading the cameras, but if the cameras simply captured low-light or bad-angle images, the result is a false negative. This type of error might let a suspect off and raise the cost of the manhunt.

Illustrations: Brandon Palacio
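At bottom, all three outcomes fall out of a single similarity threshold. Here is a minimal sketch of that decision logic (the scores and threshold values are hypothetical; real FRT systems compute similarity from learned face embeddings, with vendor-specific scoring):

```python
# Illustrative sketch, not any vendor's actual algorithm: an FRT system
# scores how similar a probe image is to a gallery face, then declares a
# match only when the score clears a threshold.

def classify(similarity: float, same_person: bool, threshold: float) -> str:
    """Label one probe-vs-gallery comparison under a given threshold."""
    match = similarity >= threshold
    if match and same_person:
        return "true match"       # outcome (a): suspect identified
    if match and not same_person:
        return "false positive"   # outcome (b): wrong person flagged
    if not match and same_person:
        return "false negative"   # outcome (c): suspect missed
    return "true non-match"

# Raising the threshold trades false positives for false negatives.
score = 0.83  # hypothetical similarity for two photos of the same person
print(classify(score, same_person=True, threshold=0.80))  # true match
print(classify(score, same_person=True, threshold=0.90))  # false negative
```

The same score can yield a correct match or a false negative depending only on where the operator sets the threshold, which is exactly the balancing act described above.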

In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are around two in 1,000 and false positives are less than one in 1 million.

In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. Let’s say that police are searching for a suspect, comparing an image captured by a security camera with a previous mug shot of the suspect.

Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The United Kingdom estimated that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others.

Less clear photographs are harder for FRT to process. [Photo: iStock]

What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.

Consider a busy trade fair using FRT to check attendees against a database, or "gallery," of images of its 10,000 registrants. Even at 99.9 percent accuracy you’ll get about a dozen false positives or false negatives, which may be a trade-off the fair’s organizers can accept. But if police deploy something similar across a city of 1 million people, the number of potential victims of mistaken identity rises a hundredfold, and so do the stakes.
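The arithmetic behind those numbers is simple expected-value scaling, sketched here using the 99.9 percent figure from the example (real error rates vary by algorithm, image quality, and population):

```python
# Back-of-the-envelope check of the error counts in the text: with a
# per-person error rate of 0.1% (i.e., "99.9 percent accuracy"), the
# expected number of misidentifications grows linearly with population.

def expected_errors(population: int, error_rate: float = 0.001) -> float:
    """Expected false positives/negatives across a screened population."""
    return population * error_rate

print(expected_errors(10_000))     # trade fair: ~10 ("about a dozen")
print(expected_errors(1_000_000))  # city-wide:  ~1,000
```

The error *rate* never changes; only the population does, which is why the same algorithm can be tolerable at a trade fair and hazardous city-wide.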

What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement agents have done since June 2025, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. The size of the potential gallery is at least 1.2 billion images.
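To see why the gallery size matters so much, here is an illustrative calculation that reuses the best-case per-comparison false-positive rate quoted earlier (under one in 1 million). It is a rough upper-bound sketch, not a claim about how Mobile Fortify actually ranks or thresholds candidates:

```python
# Illustration only: a one-to-many search multiplies the per-comparison
# false-positive risk by the gallery size. Even with the optimistic
# best-case rate quoted earlier, a 1.2-billion-image gallery yields a
# large expected number of spurious candidate matches per search.

per_comparison_fpr = 1e-6      # best-case rate from the passport example
gallery_size = 1_200_000_000   # "at least 1.2 billion images"

expected_false_hits_per_search = per_comparison_fpr * gallery_size
print(expected_false_hits_per_search)  # on the order of a thousand per search
```

Real systems mitigate this by returning only the top-ranked candidates and using far stricter thresholds, but the underlying math is why one-to-many searches against huge galleries are much riskier than the one-to-one passport check.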
