“The IRS takes taxpayer privacy and security seriously, and we understand the concerns that have been raised,” IRS commissioner Charles Rettig said in a statement. “Everyone should feel comfortable with how their personal information is secured.”
After facing blowback from both sides of the aisle, the IRS announced today that it will not require the use of ID.me’s facial recognition technology to verify the identity of U.S. taxpayers for online tax account access. This decision gives the IRS time to step back and re-evaluate the best path forward on identity verification. However, it would be a serious mistake to return to “business as usual” and stick our collective heads in the digital sand.
According to the FTC, identity theft rose by over 45% in 2020, with over a third of reported cases related to interactions with government agencies. Clearly, what worked in the past for identity verification is not working now. Technology is evolving faster than ever, and that evolution is creating new and more broadly accessible tools for identity theft and spoofing. High-quality forgeries of government-issued IDs can be obtained by dime-a-dozen criminals. Identity quizzes (known in the industry as knowledge-based authentication) have been devalued by the nefarious harvesting of social media posts and often yield authentication rates below 60%. If you yourself haven’t had a password hacked or a credit card number stolen, you certainly have friends and family who have. The IRS is right to be concerned, right to be proactive in protecting citizens, and right to be looking to new approaches outside traditional processes.
However, the facial recognition problem goes deeper than just an awkward and, at times, inconvenient verification process. The rules around collecting biometrics for one purpose and then using them for another application (authorized or unauthorized) are far from clear. For example, gray-area companies are scraping social media photos and selling facial recognition capabilities to both law enforcement and the general public. While NIST continues to develop meaningful standards for technology performance, the U.S. lacks a true overall framework and policy architecture for the use of facial recognition and biometrics by Federal agencies.
The problem for both government agencies and the private sector is avoiding the misuse of facial recognition data, whether through theft, unclear guidelines, or “mission creep.” Centralized databases that store facial recognition imagery or templates are attractive targets for cyber-criminals and create a temptation to use the data in ways that the individual never anticipated. Lack of understanding around technology pitfalls, “office politics” motivations, and even simple human errors haunt the dim corners of any large organization, including the Federal government. Keeping in mind some of the spectacular data breaches affecting various Federal and state agencies over the past few years, we cannot afford a “Wild West” approach to this powerful new technology.
Facial recognition is not going away. Over the next few years, we will encounter it not only in our interactions with the government but also in our offices, at the airport, during online shopping, and even to start our cars. What’s needed is an effective policy framework for biometrics that includes, at a minimum, three elements.
First, organizations looking to implement facial recognition for identity verification need to choose solutions that are architected with privacy-by-design as the core principle. With biometrics, the old internet model of harvesting data and monetizing it has to be replaced – it’s time we all put limits on “being the product.”
Second, the administration and Congress should work together to develop clear, pragmatic rules on how Federal agencies can use biometrics for both identity verification and investigative purposes – and draw a sharp line preventing data sharing between the two. The submission of personal information, including biometrics, to the federal government should be governed by clear statements of purpose, and agencies should be prohibited from subsequently altering or deviating from the initially stated purpose.
Finally, there must be widely accessible, rapid-resolution redress processes that can handle exception cases and service issues at scale, especially on the less privileged side of the digital divide.
These elements are rudimentary and far from comprehensive, but they must be the foundation for a practical path forward. Ignoring the issue or calling for an outright ban on biometrics is dangerously naive. No one should have to choose between convenience and privacy in the digital exchange of their personal information. It is possible to protect and share sensitive data through a decentralized, person-centric architecture that can scale in the U.S. and far beyond. The IRS has done us all a favor by jumpstarting the conversation around facial recognition. Let’s not waste this opportunity.