The Transportation Security Administration is planning to test a face recognition system that could be used on all domestic U.S. fliers, according to a document the agency released today. That would represent a significant expansion of face recognition in daily life.

In the test, which will take place at McCarran Airport in Las Vegas, passengers entering the TSA security area will be photographed, and a face recognition algorithm will be applied in an attempt to determine whether the live image matches the photograph on their IDs. The system adds face recognition to a technology the TSA has been working on for years, Credential Authentication Technology, which scans a passenger’s driver’s license or other ID document and attempts to automatically determine whether it is authentic.
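To make the kind of matching described here concrete, below is a minimal sketch of one-to-one face verification, comparing a live photo against an ID photo. It is not the TSA’s system; the use of the open-source face_recognition library, the file names, and the distance threshold are all illustrative assumptions.

```python
# Illustrative 1:1 face verification sketch -- NOT the TSA's actual system.
# The file paths and the distance threshold are assumptions chosen for
# illustration; the face_recognition library computes 128-dimensional
# embeddings and compares them by Euclidean distance.
import face_recognition

def verify_against_id(id_photo_path: str, live_photo_path: str,
                      threshold: float = 0.6) -> bool:
    """Return True if the live capture appears to match the ID photo."""
    id_image = face_recognition.load_image_file(id_photo_path)
    live_image = face_recognition.load_image_file(live_photo_path)

    id_encodings = face_recognition.face_encodings(id_image)
    live_encodings = face_recognition.face_encodings(live_image)
    if not id_encodings or not live_encodings:
        raise ValueError("No face found in one of the images")

    # Smaller distance means the two faces look more alike to the model.
    distance = face_recognition.face_distance([id_encodings[0]],
                                              live_encodings[0])[0]
    return distance <= threshold

# Example usage (hypothetical file names):
# print(verify_against_id("drivers_license.jpg", "checkpoint_capture.jpg"))
```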

If the TSA decides that the system works well, we can assume the agency will use it to replace human document checkers throughout the domestic aviation system. This program is part of the TSA’s sweeping vision to deploy face surveillance at the nation’s airports, and comes on the heels of a similar deployment by CBP at the gates of departing international flights. If widely deployed, the TSA's program would (as we said of the CBP program) socialize people to accept face recognition and normalize the technology, inevitably be subject to mission creep, and expose people to the judgments of unreliable and biased algorithms.

For purposes of this test, the TSA says it will only run the system on passengers who volunteer to participate. “The passenger’s facial image, along with certain biographic information from the passenger’s identity document, will be collected by TSA and retained for subsequent qualitative and quantitative analysis” by DHS technical experts. Names and identification numbers will be obfuscated before the data is transferred for analysis, the agency says, and the data will be deleted within 180 days.

But the real question is what data will be collected and how it will be handled if this technology moves beyond testing. Will passengers be able to opt out? Will the agency want to collect and store passengers’ photographs to improve the training of its face recognition algorithms? Will passengers’ photos be run against photographic watch lists, exposing every passenger to the risk of being misidentified as a terrorist or other serious criminal every time they fly?
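For a sense of scale, here is a rough, purely illustrative calculation; the passenger volume and false-match rate below are assumptions, not TSA or CBP figures, but they show how even a highly accurate algorithm applied to every flier can generate a steady stream of misidentifications.

```python
# Back-of-the-envelope arithmetic -- the figures are assumptions, not
# official statistics -- showing how a small false-match rate still
# produces many wrong watch-list hits at airport scale.
daily_passengers = 2_000_000   # assumed rough volume of daily U.S. fliers
false_match_rate = 0.001       # assumed 0.1% chance of a wrong watch-list hit

false_matches_per_day = daily_passengers * false_match_rate
print(f"Expected false watch-list matches per day: {false_matches_per_day:,.0f}")
# -> Expected false watch-list matches per day: 2,000
```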

And what are the implications of introducing a technology for the automated checking of IDs? Like many airport security measures, such technology may very well expand beyond the airport and into daily life. When ID checks can be done by machines that are much cheaper and easier to deploy than human guards, will we find ourselves subjected to ever-more-frequent checks? When ID checks become cheap and easily scalable, they will inevitably be overused, as we have seen happen with other surveillance technologies.

Finally, as I have explained in depth before, one of the biggest problems with this use of face recognition is that it represents an ever-growing investment by the TSA in identity-based security — security based on knowing more and more information about people and trying to use that information to assess their “risk to aviation.” The TSA should instead be focused on making sure that nobody — no matter who they are — can bring guns or explosives onto aircraft. Face recognition is an investment that is bad for security and that is likely to have bad side effects on our society to boot.

Jay Stanley, Senior Policy Analyst, ACLU Speech, Privacy, and Technology Project