Is The Facial Recognition System Used in India Dependable?
VAPPALA BALACHANDRAN
Facial recognition looks easy in the Jason Bourne series. In such movies the subject, even while walking past in a large crowd, is recognized and matched against the database at headquarters. Suspects in spy movies can likewise be located anywhere in the world, even thousands of miles away, by cameras driven by artificial intelligence (AI).
In real life it is not that easy.
Facial recognition research started in 1964 in the USA, for an intelligence agency, by a team led by Woodrow Wilson Bledsoe, a mathematician and computer scientist. Initially it involved manual matching of facial characteristics, assisted by computers. The difficulties encountered in the 1960s with head rotation, tilt, angle, facial expression, skin tone and lighting variation remain problematic even in the 21st century. Recognition becomes harder still with unruly crowds moving fast and unpredictably.
The first time Facial Recognition Technology (FRT) was used on a crowd in the USA was in January 2001 at Tampa, Florida, for Super Bowl XXXV, among a one-lakh-plus crowd. The FRT equipment was provided free by the contractors as an experiment. The police claimed that they identified 19 wanted persons, but none was charged. The exercise nevertheless evoked a storm of protest from civil rights organizations, which started calling it "racist software".
RAND, the American think tank, published a paper in May 2001 justifying FRT for detecting terrorists in a crowd. It analyzed only its legality, not its dependability. RAND argued that biometrics collection in a crowd did not violate the Fourth Amendment as it was not invasive, citing the US Congress's 1924 authorization to the Justice Department to collect fingerprint data from the states. By 2001 the FBI held 219 million fingerprint data cards.
In July 2000, President Bill Clinton signed an order consolidating biometrics collection under the Defense Department in the wake of the deadly 1996 Khobar Towers bombing and the 1998 US Embassy bombings; the October 2000 attack on USS Cole added to the urgency. These incidents led the Defense Advanced Research Projects Agency (DARPA) to develop a "Human ID at a Distance" project for US$50 million.
The first major failure of FRT in a crowd came during the Boston Marathon bombings on April 15, 2013, at the packed finish line. Some 5,700 runners were yet to reach the final stretch when two explosions went off 14 seconds apart. The attacks killed 3 and injured 264.
Unfortunately, the FBI's much-acclaimed facial recognition system failed to identify the Tsarnaev brothers, one wearing a white cap and the other a black cap, from street videos, although they were on police record. The FBI and Boston Police had to fall back on conventional methods like amateur video images, public appeals and surveillance of suspects before finally locating them on April 19.
The reasons for FRT failure in a crowd are many, including skin colour. A 2018 paper jointly written by MIT Media Lab researcher Joy Buolamwini and Timnit Gebru of Microsoft Research found that darker skin colour resulted in incorrect matches.
They applied the dermatologist-approved Fitzpatrick skin-type classification system to four subgroups: darker females, darker males, lighter females and lighter males. They found that all "classifiers" performed best for lighter individuals and for males, and worst for the others. Hence they recommended further studies: "Further work should explore intersectional error analysis of facial detection, identification and verification".
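The "intersectional error analysis" the paper calls for simply means breaking accuracy down by combined attributes (skin tone together with gender) rather than overall. A minimal sketch of that aggregation, using made-up toy results (not the paper's actual data), might look like this:

```python
from collections import defaultdict

# Hypothetical per-image results for illustration only:
# (skin_tone, gender, was_the_classification_correct)
results = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

# Tally correct/total per intersectional subgroup, in the spirit of the
# darker/lighter x female/male breakdown the 2018 paper used.
totals = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for tone, gender, ok in results:
    totals[(tone, gender)][0] += int(ok)
    totals[(tone, gender)][1] += 1

for subgroup, (correct, n) in sorted(totals.items()):
    print(subgroup, f"accuracy = {correct / n:.0%}")
```

An overall accuracy figure for this toy set would hide the fact that one subgroup fares far worse than another, which is exactly the gap the disaggregated view exposes.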
A spring 2019 paper of the American Bar Association (ABA) also said that FRT algorithms might not be very accurate for certain demographics. It found that FRT worked better as an aid to police investigation in "controlled", less crowded situations with little movement, but not in a complex environment like a crowded street. Its best uses were fraud investigation and checkpoint verification.
Meanwhile, the Congressional Black Caucus (CBC) wrote to Jeff Bezos on May 24, 2018 protesting against Amazon's "Rekognition" software, used by several US law enforcement agencies, fearing that it would have negative consequences for "African Americans, undocumented immigrants and protestors". They said "that type of artificial intelligence" would lead to a situation in which "Communities of color are more heavily and aggressively policed than white communities".
Taking this cue, the American Civil Liberties Union (ACLU) published a sensational investigation report on July 26, 2018 showing that "Rekognition" incorrectly matched 28 members of Congress with arrested persons. The ACLU cited the MIT Media Lab/Microsoft Research paper to establish the inaccuracies of machine-learning algorithms, which tend to reach wrong conclusions "based on classes like race and gender".
The ACLU investigation used 25,000 publicly available arrest photos and matched that database against public photos of "every current member of the Senate and House" using the default match settings of the Rekognition software. This produced startling results confirming the CBC's fears that face recognition was less accurate for "darker skinned faces and women".
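The dispute over "default match settings" turns on a single number: the similarity threshold above which two faces are declared a match. A permissive default threshold returns more hits, including false ones; a stricter threshold returns fewer. A minimal sketch of this mechanism, with invented toy embedding vectors (not Rekognition's actual API or data), is:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match(probe, gallery, threshold):
    """Return the gallery entries whose similarity to the probe clears the threshold."""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy embeddings, hypothetical and for illustration only.
gallery = {
    "arrest_photo_1": [0.9, 0.1, 0.3],
    "arrest_photo_2": [0.2, 0.8, 0.5],
}
probe = [0.85, 0.25, 0.35]  # stand-in for a lawmaker's public photo

# A permissive threshold produces a "match"...
print(match(probe, gallery, threshold=0.80))   # ['arrest_photo_1']
# ...while a stricter threshold produces none.
print(match(probe, gallery, threshold=0.999))  # []
```

The same probe photo is thus a "wanted person" or a non-match depending purely on where the operator sets the dial, which is why the choice of default settings was central to the Amazon-ACLU disagreement.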
Six members of the CBC, including the civil rights legend Congressman John Lewis of Georgia, were misidentified; Rep. Lewis was matched with an arrested criminal. Amazon contested these findings, saying the ACLU had not used the correct settings. Other analysts affirmed that these were the software's default settings, and that some police departments used them in training materials.
As a result, several US cities, such as San Francisco and Oakland in California and Somerville in Massachusetts, have banned FRT systems. Berkeley, Seattle and Nashville have passed "Community Control Over Police Surveillance" laws as proposed by the ACLU.
Thus, even in developed countries, serious doubt exists whether FRT, as it is sold now, can deliver correct recognition even in static circumstances. Even Israel, our security role model, is not so sanguine. Omer Laviv of Mer Security & Communications Systems, which markets the Israeli company AnyVision's products to law enforcement agencies around the world, told American National Public Radio (NPR) on August 22, 2019 that facial recognition technology is "a few decades away from being able to locate a suspect by scanning crowds in real time".
Hence the claim that 1,100 persons involved in the Delhi riots, including 300 from Uttar Pradesh, were identified through "facial recognition software" is difficult to believe.
The writer is a former Special Secretary, Cabinet Secretariat