Detroit Police Chief James Craig has acknowledged that AI-powered facial recognition doesn’t work the vast majority of the time.
“If we would use the software only [for subject identification], we would not solve the case 95-97 percent of the time,” Craig said. “If we were just to use the technology by itself to identify someone, I would say 96 percent of the time it would misidentify.”
Craig’s comments arrive just days after the American Civil Liberties Union (ACLU) lodged a complaint against the Detroit police following the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.
Detroit Police arrested Williams for allegedly stealing five watches valued at $3,800 from a store in October 2018. A facial recognition algorithm had matched a blurry CCTV image to Williams’ driver’s license photo.
Current AI algorithms are known to have a racial bias problem. Extensive studies have repeatedly shown that facial recognition algorithms are almost 100 percent accurate when used on white males, but perform significantly worse on people with darker skin tones and on women.
This racial bias was demonstrated again this week after an AI designed to upscale blurry photos, such as those often taken from security cameras, was applied to images of people from BAME communities.
Here’s a particularly famous one:
[Tweet by @Chicken3gg, June 20, 2020: 🤔🤔🤔 pic.twitter.com/LG2cimkCFm]
Last week, Boston followed in the footsteps of a growing number of cities, including San Francisco and Oakland, California, in banning facial recognition technology over human rights concerns.
“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.
On the other side of the pond, facial recognition trials in the UK have so far been nothing short of a complete failure. An initial deployment at the 2016 Notting Hill Carnival failed to identify a single person. A follow-up trial the following year produced no legitimate matches but 35 false positives.
An independent report into the Met Police’s facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that the technology was verifiably accurate in just 19 percent of cases.
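These headline percentages are easiest to read as a precision metric: of all the matches a system flags, how many turn out to be real? Here is a minimal sketch in Python, assuming the figures are precision-style counts; the report’s exact methodology may differ, and the 8-of-42 split below is an illustrative assumption consistent with the reported 19 percent figure.

```python
# Minimal sketch: treating the reported accuracy figures as precision,
# i.e. verifiably correct matches divided by all matches the system flagged.

def precision(true_matches: int, false_positives: int) -> float:
    """Share of flagged matches that were verifiably correct."""
    total_flagged = true_matches + false_positives
    return true_matches / total_flagged if total_flagged else 0.0

# 2017 Notting Hill Carnival trial: no legitimate matches, 35 false positives.
print(f"2017 Carnival trial: {precision(0, 35):.0%}")   # -> 0%

# An assumed split consistent with the report's 19 percent figure:
# 8 verifiably correct matches out of 42 flagged in total.
print(f"Fussey/Murray report: {precision(8, 34):.0%}")  # -> 19%
```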
The next chilling step for AI in surveillance is using it to predict crime. Following news of an imminent publication called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing,’ over 1,000 experts signed an open letter last week opposing the use of AI for such purposes.
“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warned the letter’s authors.
The acknowledgement from Detroit’s police chief that current facial recognition technology misidentifies people in around 96 percent of cases should be reason enough to halt its use, especially in law enforcement, at least until serious improvements are made.
(Photo by Joshua Hoehne on Unsplash)