The NGO has shared a stream of examples of how the software amplifies racist policing and threatens the right to protest — and called for a global ban on the tech.

The Ban the Scan campaign was launched on Tuesday in New York City, where facial recognition has been used 22,000 times since 2017.

Amnesty notes that the software is often prone to errors. But even when it “works,” it can exacerbate discriminatory policing, violate our privacy, and threaten our rights to peaceful assembly and freedom of expression.

The human rights group wants a total ban on the use of facial recognition for government surveillance and a block on any exports of the systems.

As part of the campaign, Amnesty is producing a crowd-sourced map of all the places in the Big Apple where cameras are scanning faces. In May, volunteers will start using a tool to geolocate the devices across the city. The organization is also developing a tool for filing Freedom of Information Law requests, so residents can see where the tech is used in their communities.

Matt Mahmoudi, an AI and human rights researcher at Amnesty, said the tech is turning our identities against us.

Amnesty says the NYPD has used the tech to harass numerous residents of the city, including Derrick Ingram, co-founder of the social justice organization Warriors in the Garden.

In August 2020, the force used facial recognition to track down the activist, who allegedly assaulted a police officer by shouting into a megaphone at a protest. Dozens of cops tried to force their way into Ingram’s apartment to arrest him. The 27-year-old said they used a battering ram on his door, leaving a huge dent — and refused to provide a search warrant.

“We’re being specifically targeted with this technology because of what we’re protesting, and because we’re trying to deconstruct a system that the police are a part of,” said Ingram.