Face it – Biometrics Needs Consent

As we adjust to post-quarantine/vaccination life, I’ve been thinking about a few things that developed over the past year, especially biometrics. The Pew Charitable Trusts reported in September that Americans’ comfort with sharing their biometrics and sensitive health data has risen to all-time highs, with 53% of people very or somewhat comfortable with facial matching. As sharing such sensitive data becomes mainstream, it is clear to me that privacy is a huge battleground for digital ID and biometrics.

Looking back a little further, late last year digital rights activist Anna Kuznetsova responded to a classified ad, posted by an individual claiming to have access to the Moscow police surveillance camera network, by sending $200 and a selfie. According to the Thomson Reuters Foundation, a few days later she received a complete report of her locations and movements around Moscow. This “service” was available to anyone looking to track an ex-spouse, a political rival, a would-be scam target, or, frankly, any person at all.

Think that’s just a Russian problem? Think again. Every day, your image is captured by systems that feed facial recognition platforms available to law enforcement agencies, corporations, and, increasingly, any motivated member of society.

For instance, PimEyes lets you check which websites publish photos of you by running its facial recognition search engine across nearly a billion images. Meanwhile, law enforcement agencies in the U.S. and Canada have been using a similar service, Clearview.ai, which scrapes billions of images from social media and the wider internet to identify criminal suspects.

You can choose not to participate in Facebook or Instagram, but you cannot stop friends from posting photos of you. I suppose you could avoid your friends, but you cannot easily avoid the 750,000+ surveillance cameras installed across the United States. Your image is being captured without your knowledge or consent every day. And, even more troubling, your image is increasingly being processed and matched without your knowledge or consent.

You might think you have nothing to hide, but that is beside the point, and the consequences can still be significant. Facial recognition systems that try to find matches across billions of images – often called “1-to-Big-N” matching – are not perfect. They can struggle with facial types that are underrepresented in the algorithm’s training data (i.e., minorities). They can struggle with deepfakes, where someone’s face or body has been digitally inserted into an existing photo. And even when they work perfectly, the results can be exploited by bad actors, as was the case in Moscow.

In the same vein as the film The Social Dilemma, here’s another disturbing hypothetical: Let’s say I covet the nice mountain bike your neighbor just bought. Let’s also say I walk over to your neighbor’s garage and relieve him of the bike while manipulating the video feed from my smart doorbell to put your face on my body. Finally, I call the local police and tell them I caught the thief – namely, you. “But… but… but, that photo can’t be me,” you tell them.

I think we need to rethink how biometric data, and facial data in particular, plays a role in our everyday lives and judicial system. We need to draw a sharp legal distinction between biometric matching with consent and passive collection systems. It is neither fair nor realistic to expect law enforcement, prosecutors, and judges to make expert determinations about the integrity of facial matches, or even of the original photos themselves. Photo-altering technology has already far outstripped the digital inspection expertise of local, state, and even some federal authorities. The cat-and-mouse game of hackers vs. verifiers is bound to accelerate even further in the coming years. Unfettered facial recognition systems that operate without consent, oversight, or other privacy protections will continue to result in arrests of the innocent, framing of the unwitting, and abuse of the unsuspecting.

Fortunately, the alarm has begun to go off here in the U.S. and in the EU, as new privacy rules like the GDPR and CCPA have made consent for data sharing a regulatory requirement. Apple unveiled several new privacy features last summer that underscore this trend, and other Big Tech companies have publicly called on Congress for new regulations. The U.S. Department of Homeland Security (DHS) is reevaluating how it uses biometrics in source-level identity verification.

All these efforts are laudable, but they are not enough. We cannot put blind faith in video or photographic evidence, and we must move faster. The future of AI is only as good as the data that feeds it and how it is fed. At Airside, we have put privacy at the center of our strategy and believe we are the only company that can credibly deliver interoperability with compelling privacy protections. Consent is the key. We must find ways of demanding, providing, and securing consent for all types of identity data sharing, but especially for biometrics. We must insist on “1-to-1” or “1-to-few” matching, on protocols for auditing consent, and on updated evidentiary standards.
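To make the distinction concrete, here is a minimal, hypothetical sketch in Python of what consent-gated 1-to-1 verification looks like next to unconstrained 1-to-N search. The names (get-style helpers, ConsentLedger, the 0.8 threshold) are illustrative assumptions, not Airside’s actual implementation or any vendor’s API:

    # Hypothetical sketch: consent-gated 1-to-1 verification vs. unconstrained 1-to-N search.
    # ConsentLedger, the functions below, and the 0.8 threshold are illustrative only.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    class ConsentLedger:
        """Records which person consented to which purpose, so every match is auditable."""
        def __init__(self):
            self._grants = set()

        def grant(self, person_id: str, purpose: str) -> None:
            self._grants.add((person_id, purpose))

        def has_consent(self, person_id: str, purpose: str) -> bool:
            return (person_id, purpose) in self._grants

    def verify_1_to_1(probe: np.ndarray, enrolled: np.ndarray,
                      person_id: str, purpose: str,
                      ledger: ConsentLedger, threshold: float = 0.8) -> bool:
        """Compare one probe image against ONE enrolled template, only if consent was recorded."""
        if not ledger.has_consent(person_id, purpose):
            raise PermissionError(f"No recorded consent from {person_id} for '{purpose}'")
        return cosine_similarity(probe, enrolled) >= threshold

    def identify_1_to_N(probe: np.ndarray, gallery: dict[str, np.ndarray],
                        threshold: float = 0.8) -> list[str]:
        """Unconstrained search across an entire gallery: no consent check, and the
        chance of a false match grows with the size of N."""
        return [pid for pid, emb in gallery.items()
                if cosine_similarity(probe, emb) >= threshold]

The point of the sketch is simply that a 1-to-1 check can be tied to a specific, auditable grant of consent for a specific purpose, while a 1-to-N search over a scraped gallery has no such anchor and accumulates false-match risk as the gallery grows.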

We must be able to prove if and how we consent to share our biometrics. If we don’t, we face a future where this spiral of abuse and surveillance continues, and we will no longer recognize ourselves.
