Police in New Orleans Found to Use Real-Time Facial Recognition, Crossing Surveillance Red Line
Posted on May 27, 2025
A new report from The Washington Post reveals that the New Orleans Police Department has made use of a real-time facial recognition-equipped camera network for years. While law enforcement across the country has begun using facial recognition in the course of investigations, that use has until now consisted of submitting still images of suspects for analysis. By contrast, the program in New Orleans appears to be the first instance of law enforcement relying on facial recognition applied to live camera video feeds.
The most unsettling element is the manner in which New Orleans law enforcement launders its facial recognition practices by indirectly engaging a third party with no direct government affiliation. The camera network consists of privately owned devices, the analysis is conducted by a local nonprofit organization, and officers make use of a freely offered service voluntarily and individually, without the explicit involvement of the New Orleans Police Department.
The entity at the heart of the effort, Project NOLA, receives nearly a million dollars annually, which it uses to install cameras and apply facial recognition analysis to their live feeds. Many private businesses contribute their live camera footage as well. In total, this grants Project NOLA access to upwards of 5,000 cameras around the city. As the nonprofit told The Washington Post, it inputs a list of wanted suspects, then scans everyone who crosses one of its camera lenses for a match against that list. Project NOLA states that officers are unable to submit their own searches. To avail themselves of the organization's free public service, officers instead simply install a mobile app and receive notifications in real time when a suspect appears on camera.
As Project NOLA is not a government entity, and it interfaces with law enforcement only obliquely, through (ostensibly anonymous) users of its app, there is no guarantee of transparency surrounding any of its activities. The operation was only discovered through a review of police records that allude to its existence. The dynamic between Project NOLA and NOPD exemplifies government's increasing reliance on third parties to farm out activities that would otherwise violate Americans' civil liberties. Like the "third-party doctrine" established in Smith v. Maryland, this practice is premised on the idea that private entities can own and operate security cameras at their own discretion, and run any software on the captured footage that they choose. What The Washington Post has revealed in New Orleans presages a future of complete integration of government and private entities, each sidestepping the limitations of the other. Indeed, this merely continues a larger trend of law enforcement skirting facial recognition limits, which CCDBR has remarked on.
There is much worthy of concern hidden behind the organization's opacity. First, it is unknown what facial recognition software is employed in the analysis, or whether any human analyst provides oversight or validation before a match is sent to app users.
Second, the distribution channel for alerts on suspects' real-time locations, a mobile application, raises its own pressing, unanswered questions. It is unclear whether there is any vetting of who may download and use the application. Furthermore, nothing is known about how the application was developed. Review of the software is vital to determine what data is collected from users' (i.e., officers') phones, whether adequate safeguards against data leakage or hacking were implemented in the code, whether developers inserted a backdoor, whether the app was compromised by a third party that then inserted its own backdoor, and innumerable other unsettling possibilities.
Finally, because Project NOLA functions outside of government scrutiny, it dodges any vetting of the equipment it uses. Whereas government agencies are barred from using equipment that may be compromised by foreign actors such as the Chinese government, no such limitations bind nonprofit organizations. The investigation suggests that at least some of Project NOLA's equipment is of Chinese origin, opening the door to the theft of data on Americans' whereabouts.
Taken together, this paints an alarming picture of how much a small group of private citizens can learn about their neighbors with a few cameras, some software, and ample donation money.
Police use of this openly disseminated facial recognition service would appear to violate the spirit, if not the letter, of city rules governing the technology's use by police. The relevant ordinance permits law enforcement to employ facial recognition only to search for individual suspects in violent crime cases. Furthermore, all photos must be submitted to a dedicated government fusion center, and at least two qualified personnel must concur with the facial recognition software's analysis before the fusion center may return a match to the querying law enforcement officer.
None of these guardrails govern Project NOLA. The organization has boasted on social media of facilitating arrests for nonviolent offenses. It analyzes live video feeds rather than the historical still photos the ordinance prescribes, and it makes no attestation of its matches' accuracy.
To her credit, New Orleans Police Chief Anne Kirkpatrick has proactively ordered a review—rather than wait for lawsuits to land on her desk—to determine whether this practice is consistent with city ordinances, stating “We’re going to do what the ordinance says and the policies say, and if we find that we’re outside of those things, we’re going to stop it, correct it and get within the boundaries of the ordinance.” However, she is simultaneously pushing for the city to run its own facial recognition-equipped camera network.
Even if Kirkpatrick determines that incorporating Project NOLA's offerings into investigation and prosecution is unlawful, and bars their use, the Rubicon has already been crossed. Nathan Wessler of the ACLU makes no exaggeration in saying, "This is the facial recognition technology nightmare scenario that we have been worried about." The only viable remedy would seem to be even stricter regulation of facial recognition, prohibiting law enforcement from incorporating any investigatory evidence derived from facial recognition not conducted by a duly authorized government entity. Hopefully, this incident will serve as the object lesson that impels legislators to write and pass more prescriptive regulations.
You can read the full report from The Washington Post here.