United States Customs and Border Protection plans to spend $225,000 for one year of access to Clearview AI, a facial recognition tool that compares photos against billions of images taken from the internet.
The agreement expands access to Clearview tools to the Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a concerted effort to “disrupt, undermine, and disrupt” people and networks deemed security threats.
The contract states that Clearview provides access to “60+ billion publicly available images” and will be used for “tactical targeting” and “strategic counter-network analysis,” indicating that the service is intended to be incorporated into the daily intelligence work of analysts rather than reserved for isolated investigations. CBP said its intelligence units draw from a “variety of sources,” including commercially available tools and publicly available data, to identify people and map their connections for national security and immigration operations.
The agreement anticipates analysts handling sensitive personal data, including biometric identifiers such as facial images, and requires nondisclosure agreements for contractors with access. It does not specify what types of photos agents will upload, whether searches may include US citizens, or how long uploaded images or search results will be retained.
The Clearview contract lands as the Department of Homeland Security faces growing scrutiny over how facial recognition is used in federal law enforcement operations far from the border, including large-scale actions in US cities that have harmed US citizens. Civil liberties groups and lawmakers have questioned whether face-searching tools are being deployed as routine intelligence infrastructure rather than limited investigative aids, and whether safeguards have kept pace as those deployments expand.
Last week, Senator Ed Markey introduced legislation that would prevent ICE and CBP from using facial recognition technology entirely, citing concerns that biometric surveillance is being adopted without clear limits, transparency, or public consent.
CBP did not immediately respond to questions about how Clearview will be integrated into its systems, what types of images agents are allowed to upload, and whether searches may include US citizens.
Clearview’s business model has attracted scrutiny because it relies on scraping photos from public websites at scale. Those images are converted into biometric templates without the knowledge or consent of the people photographed.
Clearview also appears in DHS’s recently released inventory of artificial intelligence, linked to a CBP pilot that began in October 2025. The inventory entry describes a pilot of CBP’s Traveler Verification System, which performs facial comparisons at ports of entry and other border-related screenings.
CBP stated in its public privacy documentation that the Traveler Verification System does not use information from “commercial sources or publicly available data.” It is likely that, at least at launch, Clearview access will instead be tied to CBP’s Automated Targeting System, which links biometric galleries, watch lists, and enforcement records, including files tied to recent Immigration and Customs Enforcement operations in areas of the US far from any border.
Clearview AI did not immediately respond to a request for comment.
A new test by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that facial recognition systems can perform well on “high-quality visa-like photos” but falter in less controlled settings. Images captured at border crossings “not originally intended for automated facial recognition” produce error rates that are “higher, often more than 20 percent, even with more accurate algorithms,” federal scientists said.
The test highlighted a central limitation of the technology: NIST found that facial recognition systems could not be tuned to reduce false matches without also increasing the risk that the systems would fail to identify the right person.
As a result, NIST says agencies may operate the software in an “investigative” setting that returns a ranked list of candidates for human review rather than a confirmed match. If systems are configured to always return candidates, however, searching for people not already in the database will still generate “matches” for review. In those cases, the results are always 100 percent wrong.
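A minimal sketch makes the failure mode concrete. The toy gallery, similarity measure, and function below are illustrative assumptions, not CBP’s or Clearview’s actual software: if a system is configured to always return its top-ranked candidates, a probe photo of someone who was never enrolled still yields a full candidate list, and every entry on it is necessarily a false match.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gallery": each enrolled person is a random unit vector standing in
# for a biometric template. Real systems use learned face embeddings.
GALLERY_SIZE, DIM = 1_000, 128
gallery = rng.normal(size=(GALLERY_SIZE, DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def top_k_candidates(probe, k=5):
    """Return the k gallery identities most similar to the probe.

    Like an "investigative" configuration, this ALWAYS returns k
    candidates -- it has no way to answer "no match."
    """
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe          # cosine similarity to every template
    order = np.argsort(scores)[::-1]  # best match first
    return order[:k], scores[order[:k]]

# Probe with a person who is NOT in the gallery at all.
outsider = rng.normal(size=DIM)
candidates, scores = top_k_candidates(outsider)
print(candidates)  # five gallery identities are returned; all are wrong
```

Raising the similarity cutoff before showing candidates would suppress some of these spurious results, but, as the NIST finding above suggests, that same cutoff also increases the chance of missing the right person when they are in the gallery.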