The New Orleans Police Department secretly used AI for live, real-time facial recognition and tracking — a move that likely violated a municipal ordinance — according to an exclusive investigation by The Washington Post.
The Post described this as the first known instance of a widespread AI facial recognition program used by a police force in the US.
While no federal laws regulate the use of AI by law enforcement, four states and 19 cities elsewhere forbid their police departments from using facial recognition for live and/or automated tracking.
In its report, The Post noted that in 2022, the New Orleans City Council issued an ordinance limiting police to using facial recognition only to search for particular suspects who had allegedly committed violent crimes. The New Orleans police's use of live AI facial recognition appeared to violate this ordinance.
NOLA police used a network of private cameras for AI surveillance
According to The Post, New Orleans police leveraged a network of more than 200 private security cameras from early 2023 to April of this year, when the program was suspended. The network was assembled by a former police officer who founded Project NOLA, a private crime-prevention nonprofit that owns and manages most of the cameras.
The cameras were set up to use artificial intelligence to automatically scan live video feeds for potential matches against a list of wanted suspects. When the software identified a potential match, it sent automatic alerts to police officers who had the mobile app installed on their phones. The officers could then research the subjects, go to their location, and make an arrest.
The Post reported that New Orleans Police Superintendent Anne Kirkpatrick suspended the live AI facial recognition program in early April of this year, stating that these automated alerts may not be in compliance with the municipal ordinance. A formal investigation has now been launched to determine how many officers used the alerts, how many people were arrested because of the alerts, how often the facial matches were wrong, and whether the alerts violated the ordinance.
AI facial recognition allegedly used in 34 arrests
On Facebook, Project NOLA has claimed that its AI facial recognition and cameras were involved in at least 34 arrests. The Post noted, however, that the number is difficult to verify. Because Project NOLA is a private organization with no formal contract with the city, it is not required to disclose a full report about the cases it was involved in.
Furthermore, most police officers did not disclose their use of AI tools in their police reports, obscuring the vital role the technology played in the arrests. The police department is required to report its use of facial recognition technology to the city council, but none of these cases or arrests were ever included in those reports.
“This is the facial recognition technology nightmare scenario that we have been worried about,” said Nathan Freed Wessler, a deputy director with the ACLU’s Speech, Privacy, and Technology Project. “This is the government giving itself the power to track anyone — for that matter, everyone — as we go about our lives walking around in public.”