
Explained | Delhi Police’s use of facial recognition technology

When was FRT first introduced in Delhi? What are the concerns with using the technology on a mass scale?

When was FRT first introduced in Delhi? What are the concerns with using the technology on a mass scale?

The story so far: Right to Information (RTI) responses received by the Internet Freedom Foundation, a New Delhi-based digital rights organisation, reveal that the Delhi Police treats matches of above 80% similarity generated by its facial recognition technology (FRT) system as positive results.

Why is the Delhi Police using facial recognition technology?

The Delhi Police first obtained FRT for the purpose of tracing and identifying missing children. According to RTI responses received from the Delhi Police, the procurement was authorised as per a 2018 direction of the Delhi High Court in Sadhan Haldar vs NCT of Delhi. However, in 2018 itself, the Delhi Police submitted to the Delhi High Court that the accuracy of the technology they had procured was only 2% and “not good”.

Things took a turn after multiple reports came out that the Delhi Police was using FRT to surveil the anti-CAA protests in 2019. In 2020, the Delhi Police stated in an RTI response that, though they obtained FRT as per the Sadhan Haldar direction which related specifically to finding missing children, they were using FRT for police investigations. The widening of the purpose for FRT use clearly demonstrates an instance of ‘function creep’ wherein a technology or system gradually widens its scope from its original purpose to encompass and fulfil wider functions. As per available information, the Delhi Police has consequently used FRT for investigation purposes and also specifically during the 2020 northeast Delhi riots, the 2021 Red Fort violence, and the 2022 Jahangirpuri riots.

What is facial recognition?

Facial recognition is an algorithm-based technology which creates a digital map of the face by identifying and mapping an individual’s facial features, which it then matches against the database to which it has access. It can be used for two purposes. First, 1:1 verification of identity, wherein the facial map is matched against the person’s photograph on a database to authenticate their identity. For example, 1:1 verification is used to unlock phones; however, it is increasingly being used to provide access to benefits and government schemes. Second, 1:n identification, wherein the facial map is obtained from a photograph or video and then matched against the entire database to identify the person in the photograph or video. Law enforcement agencies such as the Delhi Police usually procure FRT for 1:n identification.

For 1:n identification, FRT generates a probability or match score between the suspect who is to be identified and the available database of identified criminals. A list of possible matches is generated on the basis of their likelihood of being the correct match, with corresponding match scores. Ultimately, however, it is a human analyst who selects the final probable match from the list generated by FRT. According to the Internet Freedom Foundation’s Project Panoptic, which tracks the spread of FRT in India, there are at least 124 government-authorised FRT projects in the country.
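
The 1:n identification described above can be sketched in code. The following is a minimal illustration only, not the Delhi Police’s actual system: the “embeddings” are tiny hypothetical vectors, the names are invented, and real FRT systems use learned face representations of far higher dimensionality.

```python
# Illustrative sketch of 1:n identification: compare one probe face
# embedding against every entry in a database and rank by similarity.
# All vectors and names below are hypothetical.
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_matches(probe, database):
    """Return candidate matches sorted by match score, best first."""
    scores = [(name, cosine_similarity(probe, emb))
              for name, emb in database.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Hypothetical database of embeddings of identified persons.
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
    "person_c": [0.4, 0.4, 0.4],
}
# Embedding extracted from a photograph or video frame.
probe = [0.85, 0.15, 0.35]

for name, score in rank_matches(probe, database):
    print(f"{name}: {score:.2f}")
```

The ranked list is where the human analyst enters the process: the system proposes candidates with scores, and a person selects the final probable match.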

Why is the use of FRT harmful?

India has seen the rapid deployment of FRT in recent years, both by the Union and State governments, without any law in place to regulate its use. The use of FRT presents two issues: misidentification due to the inaccuracy of the technology, and mass surveillance due to misuse of the technology. Extensive research into the technology has revealed that its accuracy rates fall starkly based on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves.

Cases of false positive results can lead to bias against the individual who has been misidentified. In 2018, the American Civil Liberties Union revealed that Amazon’s facial recognition technology, Rekognition, incorrectly identified 28 Members of Congress as people who had been arrested for a crime. Of the 28, a disproportionate number were people of colour. Also in 2018, researchers Joy Buolamwini and Timnit Gebru found that facial recognition systems had higher error rates while identifying women and people of colour, with the error rate being highest for women of colour. The use of this technology by law enforcement authorities has already led to three people in the U.S. being wrongfully arrested.

Cases of false negative results, on the other hand, can lead to the exclusion of individuals from essential schemes that use FRT as a means of providing access. One example of such exclusion is the failure of biometric-based authentication under Aadhaar, which has led to many people being excluded from receiving essential government services, which in turn has led to starvation deaths.
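
The disparities described above are measured by computing error rates separately for each demographic group, the kind of disaggregated evaluation performed in the Buolamwini and Gebru study. The sketch below uses entirely made-up numbers to show the computation, not any real system’s performance.

```python
# Illustration of per-group error rates (made-up data).
# Each record is (predicted_match, is_true_match).
def error_rates(results):
    """Return (false positive rate, false negative rate) for one group."""
    false_pos = sum(1 for pred, actual in results if pred and not actual)
    false_neg = sum(1 for pred, actual in results if not pred and actual)
    actual_neg = sum(1 for _, actual in results if not actual)
    actual_pos = sum(1 for _, actual in results if actual)
    return false_pos / actual_neg, false_neg / actual_pos

# Two hypothetical groups of equal size; the system is less accurate
# on group_b, mirroring the disparities the research found.
group_a = ([(True, True)] * 95 + [(False, True)] * 5
           + [(False, False)] * 99 + [(True, False)] * 1)
group_b = ([(True, True)] * 80 + [(False, True)] * 20
           + [(False, False)] * 90 + [(True, False)] * 10)

for name, data in [("group_a", group_a), ("group_b", group_b)]:
    fpr, fnr = error_rates(data)
    print(f"{name}: false positive rate {fpr:.2f}, "
          f"false negative rate {fnr:.2f}")
```

An overall accuracy figure averaged across both groups would hide exactly this gap, which is why disaggregated reporting matters.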

However, even if accurate, this technology can result in irreversible harm as it can be used as a tool to facilitate state-sponsored mass surveillance. At present, India does not have a data protection law or an FRT-specific regulation to protect against misuse. In such a legal vacuum, there are no safeguards to ensure that authorities use FRT only for the purposes for which they have been authorised, as the Delhi Police’s case shows. FRT can enable the constant surveillance of an individual, resulting in the violation of their fundamental right to privacy.

What did the 2022 RTI responses by Delhi Police reveal?

The RTI responses dated July 25, 2022 were shared by the Delhi Police after the Internet Freedom Foundation filed an appeal before the Central Information Commission, having been denied the information multiple times by the Delhi Police. In their response, the Delhi Police revealed that matches above 80% similarity are treated as positive results, while matches below 80% similarity are treated as false positive results which require additional “corroborative evidence”. This raises two concerns. First, it is unclear why 80% has been chosen as the threshold between positive and false positive; no justification is provided to support the Delhi Police’s assertion that an above-80% match is sufficient to assume the results are correct. Second, categorising below-80% results as false positive instead of negative shows that the Delhi Police may still further investigate below-80% results. Thus, people who share familial facial features, such as in extended families or communities, could end up being targeted. This could result in the targeting of communities who have been historically over-policed and have faced discrimination at the hands of law enforcement authorities.
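
The thresholding practice the RTI responses describe can be expressed as a few lines of code. The 80% cut-off and the two category names come from the Delhi Police’s stated practice; the function itself is an illustrative reconstruction, and the treatment of a score of exactly 80% is not specified in the responses.

```python
# Reconstruction of the stated thresholding practice (illustrative only).
def classify_match(similarity_percent):
    """Classify an FRT match score using the stated 80% cut-off."""
    if similarity_percent > 80:
        return "positive"        # treated as a positive identification
    # Notably labelled "false positive" rather than "negative":
    # the lead is not discarded, but pursued with "corroborative evidence".
    return "false positive"

print(classify_match(85))  # positive
print(classify_match(62))  # false positive
```

The design choice worth noticing is the second branch: because sub-threshold results are categorised as “false positive” rather than “negative”, they remain open to further investigation instead of being ruled out.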

The responses also mention that the Delhi Police is matching the photographs/videos against photographs collected under Sections 3 and 4 of the Identification of Prisoners Act, 1920, which has now been replaced by the Criminal Procedure (Identification) Act, 2022. This Act allows wider categories of data to be collected from a wider section of people, i.e., “convicts and other persons for the purposes of identification and investigation of criminal matters”. It is feared that the Act will lead to overbroad collection of personal data in violation of internationally recognised best practices for the collection and processing of data. This revelation raises multiple concerns, as the use of facial recognition can lead to wrongful arrests and mass surveillance resulting in privacy violations. Delhi is not the only city where such surveillance is ongoing. Multiple cities, including Kolkata, Bengaluru, Hyderabad, Ahmedabad, and Lucknow, are rolling out “Safe City” programmes which implement surveillance infrastructure to reduce gender-based violence, in the absence of any regulatory legal frameworks which would act as safeguards.

Anushka Jain is an Associate Policy Counsel and Gyan Prakash Tripathi is a Policy Trainee at Internet Freedom Foundation, New Delhi

THE GIST

RTI responses received by the Internet Freedom Foundation reveal that the Delhi Police treats matches of above 80% similarity generated by its facial recognition technology system as positive results. Facial recognition is an algorithm-based technology which creates a digital map of the face by identifying and mapping an individual’s facial features, which it then matches against the database to which it has access.

The Delhi Police first obtained FRT for the purpose of tracing and identifying missing children as per the direction of the Delhi High Court in Sadhan Haldar vs NCT of Delhi.

Extensive research into FRT has revealed that its accuracy rates fall starkly based on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative where a person is not verified as themselves. The technology can also be used as a tool to facilitate state sponsored mass surveillance.
