
CHAPTER 13

Use of AI and technology in law enforcement

Vrinda Bhandari and Anushka Jain

“We often assume machines are neutral, but they aren’t.”

– Joy Buolamwini, Computer Scientist, founder of the Algorithmic Justice League and a poet of code.

SUMMARY

India has seen a surge in the use of AI-based tools to help in “smart policing”;

Various studies and reports indicate that the impact of CCTV cameras and facial recognition technology on the reduction in crime is inconclusive;

It is a cause for concern that there are historical and representational biases in the datasets that feed predictive policing and FRT models;

No regulation in India governs the development, sale and deployment of AI technology.

Introduction

Technologies were meant to ease and streamline police work. However, improvements in technology, especially the use of artificial intelligence (AI), have often come at the cost of civil liberties, enabling “wider” (covering a broader swathe of the population) and “deeper” (“allowing for more intrusive information collection and profiling”) surveillance. 1 Close to 50 per cent of Indian states are already using or planning to use AI in law enforcement. These include the states of Rajasthan, Maharashtra, Delhi, Jharkhand, Punjab, West Bengal, Telangana, Assam, Haryana, Uttarakhand, Tamil Nadu, Uttar Pradesh, Odisha, Kerala, Gujarat and Andhra Pradesh. 2 Telangana, in particular, has been experimenting with and deploying different forms of AI in law enforcement, often with complete disregard for human rights. Recently, a Forbes India report claimed Delhi to be the most surveilled city in the world (based on the number of surveillance cameras per square mile), followed by London and Chennai. 3

The question to ask here is whether these technologies are really helping law enforcement in efficiently tracking, identifying, and arresting suspects. While developers of these technologies usually market their products as the solution to all law enforcement issues, critics of the increasing adoption of technology into policing have identified this as one more instance of “techno-solutionism”. Evgeny Morozov, in his book “To Save Everything, Click Here”, first coined the term “techno-solutionism” to refer to the idea that the right technology can solve any real-world problem. 4 The underlying assumption here is that technology is free of bias and inaccuracy, and thus, all decisions made by the technology will be objective and correct. However, research has shown that this is not always the case. AI technologies are usually developed through machine learning systems, where the system “learns” to identify patterns and rules by analysing existing data, which is often biased. Thus, these systems absorb the biases present in existing datasets and reproduce them through a system that is assumed to be objective. 5

This chapter examines technology interventions that use AI in law enforcement, with a specific focus on predictive policing and the use of facial recognition technology (FRT) and emotional recognition technology (ERT). Part II briefly describes the state of affairs concerning the deployment of these three technologies, including concerns regarding accuracy and bias. Part III examines the role of the private sector and its collaboration with law enforcement agencies in rolling out AI tools. Part IV analyses concerns regarding the loss of privacy and the rise of mass surveillance through the use and misuse of AI in law enforcement. Part V concludes with our observations on the way forward.

Understanding the Lay of the Land: The State of Predictive Policing, FRT and ERT

Law enforcement agencies have highlighted the benefits of the use of AI in their work, specifically through facial recognition, CCTV cameras and predictive policing. There has been a surge in the use of AI-based tools to help in “smart policing”, policing protests, identifying faces in a crowd, using and training drones to police crowds and monitor behaviour, and digitising criminal records. 6 In an article posted on the Bihar government’s website, Dr Kamal Kishore Singh, IPS, ADG, notes that AI is “making an impact in key areas like surveillance, crime prevention, and crime-solving. With enhanced imaging technologies and object and facial recognition, AI reduces the need for labour-intensive tasks, freeing officers to handle more complex activities. AI also may capture criminals that would otherwise go free, and solve crimes that would otherwise go undetected”. 7

Thus, there is a clear perceived benefit in the use of AI tools for law enforcement purposes, particularly predictive policing, FRT and ERT, discussed below. This is based on the machine learning capability of AI, namely its ability to recognise patterns and classify objects. 8

A. Predictive policing

In Steven Spielberg’s 2002 film Minority Report, Tom Cruise stars as the Chief of the “PreCrime” Department, a specialised police unit that uses information or “foreknowledge” generated by three psychics to predict and prevent crime. However, things take a turn for the worse when Cruise himself is predicted to commit a murder. The film examines the classic debate between a human being’s free will and the philosophy of determinism, which, to simplify, states that all actions are predetermined by certain causes. This tussle is captured perfectly by the debate around predictive policing, where experts argue that predicting crime based on historical data fails to acknowledge a human’s free will: a person predicted to commit a crime may in fact not commit it. 9 Therefore, any police action based on a prediction of crime would lead to a violation of the rights of the person predicted to commit it.

Predictive policing is the collection and analysis of data about previous crimes to identify and statistically predict individuals or geospatial areas with an increased probability of criminal activity, with the aim of developing policing intervention and prevention strategies and tactics. 10 It involves feeding a large amount of data to advanced algorithms to identify recurring patterns in criminal behaviour. The underlying assumption is that crime occurs, or criminals operate, in familiar geospatial areas (or comfort zones) where they have successfully committed a crime before, which allows big data systems to predict its recurrence.
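To make the place-based variant concrete, the following is a minimal, purely illustrative sketch in Python (the coordinates, grid size and names are our own invented assumptions, not those of any deployed system): historical incident locations are aggregated into grid cells, and the cells with the most recorded incidents are flagged as “hotspots” for additional patrols. Deployed systems use far more elaborate statistical models, but the reliance on past records is the same.

from collections import Counter

# Hypothetical historical incident records: (latitude, longitude) of past crimes.
past_incidents = [
    (28.6139, 77.2090), (28.6141, 77.2093), (28.7041, 77.1025),
    (28.6140, 77.2091), (28.5355, 77.3910), (28.6138, 77.2089),
]

CELL_SIZE = 0.01  # degrees; each cell is roughly a 1 km square

def to_cell(lat, lon):
    """Snap a coordinate to the grid cell that contains it."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

# Count recorded incidents per grid cell.
cell_counts = Counter(to_cell(lat, lon) for lat, lon in past_incidents)

# "Predict" next week's hotspots: simply the most-recorded cells so far.
for cell, count in cell_counts.most_common(2):
    print(f"Cell {cell}: {count} recorded incidents -> flag for extra patrols")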

Predictive policing models can be divided into four categories: (a) methods for predicting crime; (b) methods for predicting offenders’ identities; (c) methods for predicting perpetrators’ identities by creating profiles that accurately match likely offenders; and (d) methods for predicting victims of crime. 11

Predictive policing models therefore predict the occurrence of criminal behaviour based on the historical and statistical datasets fed into them. 12 Further action is then taken on the basis of these predictions, in the absence of any actual criminal incident to which the police would mount a traditional response. Thus, the focus is wholly on the prevention of crime. The use of predictive policing models is claimed to allow police departments to allocate resources more efficiently, as it allows them to identify geographical areas where the incidence of crime may be higher. 13

Following global trends, the use of predictive policing by state police departments in India has been on the rise. Predictive policing has been used primarily by the Delhi Police in the Crime Mapping, Analytics, and Predictive System (CMAPS), which analyses existing data from calls to police hotlines such as 100 for live spatial hotspot mapping of crime, criminal behaviour patterns and suspect analysis. 14 The idea is to use the data available with the Crime and Criminal Tracking Network System (CCTNS), which connects police stations across India to improve access to data related to First Information Report (FIR) registration, investigation and chargesheets. Similar initiatives are being discussed in Madhya Pradesh, 15 Telangana, 16 Himachal Pradesh, 17 and Jharkhand. 18

Accuracy concerns

A survey by Analytics Insight shows that a majority of its 251 respondents believe that the use of AI will lead to increased objectivity – 52 per cent of them say AI may help make policing “fairer”, while 40 per cent say that advanced AI technology would eliminate bias and make policing fairer. 19

However, such responses seem to misconstrue the nature of the bias that is inherent in the use of AI for law enforcement, particularly in predictive policing. Historical criminal data, the bedrock of all predictive policing models, does not necessarily reflect who is more likely to commit a crime. Instead, it is an indicator of which areas and communities are policed more heavily than others. 20
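The over-policing point can be illustrated with a small, entirely hypothetical simulation (all numbers are invented): two areas with the same underlying rate of offending, but different initial patrol levels. Because recorded crime depends on how heavily an area is watched, a model that allocates patrols according to recorded crime keeps sending officers back to the already over-policed area.

import random

random.seed(1)

TRUE_CRIME_RATE = 0.05                  # identical underlying rate in both areas
patrols = {"Area A": 8, "Area B": 2}    # Area A starts out more heavily policed
recorded = {"Area A": 0, "Area B": 0}

for week in range(20):
    for area, n_patrols in patrols.items():
        # Each patrol has the same chance of observing and recording a crime,
        # so more patrols mechanically produce more recorded crime.
        recorded[area] += sum(random.random() < TRUE_CRIME_RATE
                              for _ in range(n_patrols))
    # "Predictive" allocation: distribute next week's 10 patrols in proportion
    # to recorded crime so far -- this is the feedback loop.
    total = sum(recorded.values()) or 1
    patrols = {area: max(1, round(10 * count / total))
               for area, count in recorded.items()}

print("Recorded crime after 20 weeks:", recorded)
print("Final patrol allocation:", patrols)
# The area that starts out more policed will typically end up with more
# recorded crime and more patrols, even though the true rates are identical.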

Predictive policing can, in particular, institutionalise discrimination, especially when it is used to predict offenders’ identities or is deployed against de-notified criminal tribes.

B. Facial recognition technology

FRT or automated facial recognition technology (AFRT) uses computational algorithms to identify or verify individuals by extracting their unique facial features to create facial maps and analysing them against pre-existing datasets to determine the probability or likelihood of a match. 21 When used for verification or authentication of identity, FRT can confirm that a person is who he or she claims to be (1:1). The use of FRT for verification is usually meant for access to government schemes such as pension and Aadhaar or for recording attendance in office or school.

When used for identification, or security and surveillance purposes, FRT is a tool to identify the person who may be a suspect or victim in a security or crime incident (1:many). Such use by police and intelligence agencies raises particular mass surveillance concerns.

In order to use FRT for identification, police and security agencies need access to large databases with biometric information, especially the facial signature of a person. This information can be obtained from any database (including social media) containing photographs as a data category. The facial signature of the suspect or victim is then matched against every facial signature in the database. The FRT system then generates a list of possible matches, ranked by their likelihood of being the correct match, with corresponding probability or confidence scores. The final decision is made by the user, who is most likely a police analyst or officer. The accuracy of confidence scores depends on factors such as camera quality, lighting, distance, database size, the algorithm, and the suspect’s race and gender. 22
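A minimal sketch of this 1:many search follows, assuming (as modern FRT pipelines typically do) that each facial signature is represented as a numerical feature vector; the record names, vectors and similarity measure below are invented for illustration only.

import numpy as np

def cosine_similarity(a, b):
    """Similarity between two facial signatures (feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery: facial signatures drawn from an existing database.
gallery = {
    "record_001": np.array([0.12, 0.80, 0.33, 0.45]),
    "record_002": np.array([0.90, 0.10, 0.25, 0.60]),
    "record_003": np.array([0.15, 0.78, 0.30, 0.50]),
}

# Probe: the facial signature extracted from a 'scene of crime' image.
probe = np.array([0.14, 0.79, 0.31, 0.48])

# 1:many search -- compare the probe against every record in the database
# and rank the candidates by their confidence score.
candidates = sorted(
    ((rec_id, cosine_similarity(probe, sig)) for rec_id, sig in gallery.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for rec_id, score in candidates:
    print(f"{rec_id}: confidence {score:.2f}")
# The system only outputs a ranked list of possible matches; the final
# decision rests with a human analyst, whom the scores are meant to inform.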

Currently, the Internet Freedom Foundation’s Panoptic Tracker, which tracks FRT systems deployed by governments in India, is tracking 168 FRT systems across the country. 23 Of these, at least 43 systems are being used for the purpose of security and surveillance at various levels of government. In 2019, the Ministry of Home Affairs (MHA) unveiled its plan to develop the National Automated Facial Recognition System (NAFRS) to identify criminals, which will be accessible to every police department in the country. 24 The system will use ‘scene of crime’ images and videos obtained from CCTV cameras, newspapers and police raids and match these to existing records under the CCTNS. The documentation released by the MHA does not suggest that it has considered the ethical and legal implications of using facial recognition, or even the privacy concerns raised by civil society. This is particularly concerning, since governments around the world are putting in place bans or moratoriums, or are strictly regulating the use of FRT in law enforcement. 25

However, at least 20 state police departments in India are already developing or deploying FRT in their jurisdictions, including the Delhi Police, Kolkata Police, Hyderabad Police, Punjab Police and Bengaluru Police. 26 The Delhi Police first acquired FRT to track and reunite missing children, which was authorised by an order of the High Court of Delhi in Sadhan Haldar v. NCT of Delhi. 27 However, reports soon started emerging of the Delhi Police using FRT for wider criminal investigation purposes, which it confirmed in a response to a Right to Information application. 28 The Delhi Police then claimed that it had employed FRT during the Delhi riots of 2020. According to a statement made by Union Home Minister Amit Shah in the Rajya Sabha, FRT was used to recognise over 1,900 faces. 29 Further, during a press conference, the then Police Commissioner SN Shrivastava noted that “137 persons were identified through our facial recognition system. The FRT was matched with police criminal records, and many accused were caught. Over 94 accused were identified and caught with the help of their driving licence photos and other information.” 30

Similarly, the Hyderabad Police is also deploying FRT at an unprecedented scale. 31 It is building an AI-enabled Command and Control Centre, which will have the capacity to process data from up to 6,00,000 CCTV cameras at once. Seventy prisons in Uttar Pradesh have deployed an AI-enabled video analytics platform, ‘Jarvis’. 32 It monitors real-time footage from CCTV cameras across a vast network and flags any segment that appears to contain unlawful activity. The platform analyses real-time video data from over 700 cameras with a 24/7 feed.

There is no overarching governing framework or law that authorises the use of FRT at either the central or the state level, nor is there any regulatory oversight. In addition, FRT is being deployed at a mass scale across Indian cities; without a data protection law in place, citizens are left with little or no protection against the surveillance state. At the same time, concerns about accuracy and bias abound, just as in the case of predictive policing.

Accuracy concerns

The use of CCTVs and FRT has been highlighted by state governments as an important measure in maintaining law and order and improving public safety, particularly for women and children. 33 However, many studies conducted across the world demonstrate a lack of correlation/causation between the installation of CCTVs and any significant crime reduction. 34 For instance, a 2007 study evaluating the impact of installing CCTVs in Cambridge City Centre in the UK found that CCTV “did not affect crime according to survey data, and [had] an undesirable effect on crime according to police records. It is suggested that CCTV may have had no effect on crime in reality but may have caused increased reporting to and/or recording by the police.” 35 Based on a meta-analysis of various studies and reports, it has been found that the impact of CCTV cameras and FRT on crime reduction is inconclusive, or mixed at best. 36

From the Indian perspective, it is important to note that – to the best of our knowledge – there does not seem to be any study that audits the efficacy or efficiency of CCTVs in any Indian state or empirically analyses their impact on crime numbers. At the same time, it is clear that FRT has its own concerns when it comes to accuracy.

Presently, no FRT system in the world is completely accurate, nor is it possible to have such a system. Inaccuracy in an FRT system can manifest in two possible outcomes.

The first relates to false negatives. A false negative occurs when there is a failure to identify or associate one subject in separate images and/or videos. It leads to a person not being identified as themselves and can result in issues of exclusion from access to government schemes or benefits. 37 In the welfare context, a false negative would imply that an FRT system has failed to recognise a person from their 10-year-old Aadhaar photograph. In the law enforcement context, it would result in the actual convict not being identified through the use of FRT, leading the police to potentially focus their efforts on false leads.

The second inaccuracy that results from the use of FRT is false positives. A false positive occurs when there is an incorrect identification (misidentification) or association between distinct subjects in separate images and/or videos. It leads to a person being misidentified as someone they are not, and can arise out of, and further result in, discriminatory action against certain marginalised groups of society. 38 For instance, when an FRT system being used by the police misidentifies an individual as the suspect in a criminal investigation, the resulting false positive can lead to wrongful arrest. As Spinelli notes, “When human bias is ingrained in face processing algorithms, systematic discrimination can be silently implemented into automated decision-making procedures, threatening diversity, and encroaching fundamental rights.” 39
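A minimal numerical illustration of how these two error types trade off against each other (the confidence scores below are made up): when a single decision threshold is applied to an FRT system’s scores, raising the threshold reduces false positives but increases false negatives, and vice versa.

# Hypothetical confidence scores produced by an FRT system.
# 'Genuine' pairs are images of the same person; 'impostor' pairs are not.
genuine_scores = [0.92, 0.85, 0.74, 0.66, 0.95, 0.58]
impostor_scores = [0.81, 0.40, 0.35, 0.62, 0.20, 0.77]

def error_rates(threshold):
    # False negative: a genuine pair scoring below the threshold (missed match).
    fn = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # False positive: an impostor pair scoring at or above it (misidentification).
    fp = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fn, fp

for threshold in (0.6, 0.7, 0.8, 0.9):
    fn, fp = error_rates(threshold)
    print(f"threshold {threshold:.1f}: false negative rate {fn:.0%}, "
          f"false positive rate {fp:.0%}")
# A stricter threshold sweeps in fewer innocent people but misses more true
# matches; a looser one catches more true matches but misidentifies more people.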

In this chapter, we are mainly concerned with false positive results in FRT systems used by the police. A false positive or misidentification of a suspect can easily derail a criminal investigation. Not only will this delay justice, with the actual culprit getting away, but it will also result in the violation of the civil rights of the innocent person misidentified by FRT.

Studies in the United States have shown that the rate of inaccuracy in FRT systems increases due to factors such as race 40 and gender. 41 Even the FRT systems of the US Federal Bureau of Investigation (FBI) have an accuracy rate of 86 per cent. 42

In India, we do not have any details on the accuracy of FRT systems, partly due to the lack of transparency by government authorities. Right to Information (RTI) requests filed with the Delhi and the Kolkata police by one of the authors, seeking information on the use of FRT systems, were rejected on the grounds of commercial confidence and complete exemption under the RTI Act. 43 Nevertheless, it is reasonable to assume that the police in India would not have access to technology more advanced than that of the FBI. Indeed, the Delhi Police has stated in an affidavit before the High Court of Delhi that its FRT system has an accuracy rate of 2 per cent. 44 In 2019, when the accuracy rate fell to less than 1 per cent, the Ministry of Women and Child Development reported that the system could not even accurately distinguish between boys and girls. 45 In July 2022, after the Delhi Police was directed by the Central Information Commission to respond to an RTI request filed by one of the authors, it revealed that it treats any match with 80 per cent similarity or above as a positive result. Any match below 80 per cent is treated as a false positive, which requires additional “corroborative evidence”. Thus, a positive result generated by the FRT system used by the Delhi Police carries a 20 per cent margin of error, and even where the system has not generated a positive result, the Delhi Police may still investigate the below-threshold match on the strength of corroborative evidence.
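As a purely hypothetical illustration of the decision rule the Delhi Police described (the record identifiers and scores below are invented), the candidate list returned for one probe image is split at the 80 per cent mark: scores at or above it are treated as positive results, while lower-scoring candidates are not necessarily discarded but may be pursued with additional “corroborative evidence”.

THRESHOLD = 0.80  # the 80 per cent similarity cut-off described by the Delhi Police

# Hypothetical candidate matches returned by an FRT system for one probe image.
candidates = [("record_014", 0.91), ("record_237", 0.83),
              ("record_058", 0.72), ("record_310", 0.65)]

positives = [(rid, score) for rid, score in candidates if score >= THRESHOLD]
needs_corroboration = [(rid, score) for rid, score in candidates if score < THRESHOLD]

print("Treated as positive results:", positives)
print("Below threshold, pursued only with corroborative evidence:",
      needs_corroboration)
# Even a 'positive' result is only a probabilistic similarity score; treating
# 80 per cent similarity as a match leaves a substantial margin of error.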

Apart from accuracy issues, some biases are built into the datasets on which predictive policing and FRT models are built – what Richardson et al. refer to as “dirty data” produced as a result of “dirty policing”. 46 In India, it has been documented that the biases are a mix of representational biases (the types of individuals who usually interact with the police or call the police emergency number), historical biases (given the use of police and goonda registers and the over-policing of low-income areas), and measurement biases (in how crime is recorded). 47 For instance, an Indian study that conducted an audit of commercial FRT systems in India found that “they tend to be biased against minority groups which result in unfair and concerning societal and political outcomes”. 48

C. Emotion recognition technology

Emotion recognition technology (ERT) is based almost entirely on the work of American psychologist Paul Ekman, who theorised that “(o)f all the human emotions we experience, there are seven universal emotions that we all feel, transcending language, regional, cultural, and ethnic differences”. 49 He further identified seven universal facial expressions for these universal emotions, which are anger, disgust, fear, surprise, happiness, sadness and contempt. These universal emotions — in combination or independent of each other — are what ERT, in conjunction with FRT, seeks to identify and classify.

In January 2021, in an attempt to curb street harassment of women, the Lucknow Police announced the deployment of AI cameras to read the emotions of women through ERT. The Lucknow Police Commissioner explained the deployment thus: “We will set up five AI-based cameras which will be capable of sending an alert to the nearest police station. These cameras will become active as soon as the expressions of a woman in distress change.” 50


Accuracy concerns

Experts have repudiated ERT as a pseudoscience with links to harmful and outdated theories such as phrenology. 51 The reason is that ERT attempts to correlate a person’s facial expressions with their emotional state when such a linkage is not possible. 52 Several factors play an important role in how a person displays what they feel, including body language, tone of voice, changes in skin tone and personality, as well as the context in which these emotions are generated and expressed. 53 The Association for Psychological Science reviewed more than 1,000 studies and concluded that “the relationship between facial expression and emotion is nebulous, convoluted and far from universal”. 54

Thus, going back to the use by the Lucknow Police, there are legitimate concerns regarding the accuracy of ERT in improving policing or reducing crime, specifically the ‘suitability’ of the AI in assessing a woman’s emotional state. The current state of ERT is incapable of distinguishing emotions of fear (of harassment) from distress/anger/anxiety (because the woman is getting delayed or needs to visit a bathroom), and any use of ERT will likely result in many false positives and false negatives. 55 Given the absence of credible research that demonstrates a strong correlation or causal link between emotions and facial expressions and movements, 56 it is dangerous to rely on the use of ERT for law enforcement.

More importantly, however, the Lucknow Police’s proposal is fraught with privacy risks, given that it offered no clarity on the “expressions” of a woman that would trigger an alert; why or how the AI-based cameras would focus only on women; the data storage and sharing mechanisms employed; or the oversight and accountability mechanisms to protect privacy and reduce misuse.

Having understood the different types of AI models used for law enforcement purposes, we turn to examining the role of the private sector in collaborating with law enforcement agencies and promoting the use of AI.

The Role of the Private Sector

All private sector entities that work on the development, sale and deployment of any kind of technology, whether a simple pressure cooker or a complex vehicle, have to abide by certain standards and regulations put in place by the government and regulators to ensure quality and safety. However, there is little to no oversight or regulation over how the private sector is developing, selling and deploying AI. The deployment of AI in law enforcement has been facilitated in part by the heavy involvement of the private sector, whether by developing the technology being used by police departments, creating the databases against which facial features are matched, or providing active support to law enforcement agencies to review CCTV footage during investigations. There is no global treaty or agreement that regulates AI technology for law enforcement, nor is there a specific regulation in India governing the development, sale and use of these technologies.

While standards for ensuring the quality of these technologies are essential, it is also important to assess the effect of their use on human rights. As we will detail in the next section, the use of AI for law enforcement can lead to violations of privacy and other human rights. Despite this, it is unclear if private sector companies conduct any due diligence assessment for violations of human rights that may occur as a result of the use of their technology by authorities.

An important question then arises – is this practice born of ignorance, or is it wilful? A study conducted by the Corporate Human Rights Benchmark found that human rights are not an important consideration for 200 leading companies. 57 Further, there are vast amounts of money to be made by selling surveillance technology, a phenomenon that has led to the coining of the term “terror capitalism”. Terror capitalism justifies the exploitation of subjugated populations by defining them as potential terrorists or security threats. 58

Darren Byler, who coined the term, describes the process of profit generation from terror capitalism in the following three steps: First, governments award profits to private companies to build and deploy policing technologies aimed at target groups. Second, with the use of biometric and social media data extracted from those groups, companies improve their technologies and sell retail versions to other states and institutions. Finally, this whole exercise turns target groups into a source of cheap labour – either through direct coercion or something as indirect as stigma. 59

One of the most famous examples of corporate involvement is Clearview AI, an American company that scraped around three billion photos from the web without the individuals’ knowledge or consent to create a facial database that was subsequently used by law enforcement and government agencies across the US, including the New York Police Department (NYPD). 60 Clearview AI argued that it “has a First Amendment right to access public data”. 61 In 2021, however, it was widely reported that the NYPD had sidestepped its internal policies (which prevented the creation of an unsupervised repository of facial images for use by FRT and instituted access control mechanisms to regulate access to the facial recognition system), had used the technology in live investigations, and that its officers had misused the Clearview AI app for personal use and shared the data with immigration enforcement. 62

While there may not be a direct parallel in India, the Clearview example provides an important insight into the perils of collaboration between private companies and law enforcement in the absence of sufficient regulation that would put in place standards to protect the interests of the general public.

Similarly, as stated above, the development, sale and deployment of AI technology in India is happening without any legal regulations in place. Indian technology startups such as Staqu, 63 Innefu, 64 and FaceTagr 65 are at the forefront of developing AI-based “solutions” for law enforcement purposes, especially FRT. However, there is little publicly available information on how they have developed these technologies. The tech startup Dragonfruit AI provides law enforcement agencies/police departments with video search and summarisation capabilities that can be easily overlaid on existing CCTV infrastructure to give quick information during a crisis. 66 It reportedly uses AI to fast-forward and condense hours of video evidence and expedite investigations. 67 A perusal of the company’s website does not provide clarity on the privacy protections put in place by the police or Dragonfruit AI to prevent misuse of sensitive law enforcement data.

While some countries, such as the US and India, are operating in a legal vacuum, others, such as China, claim to protect citizens’ rights by regulating the use of AI technologies. However, the reality of the situation in China is quite different. 68 The Chinese government uses FRT and CCTV surveillance to look “exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review”. 69 Further, non-binding regulations 70 on the use of FRT in China have made Chinese companies global leaders in the sale of surveillance technology. China is home to companies such as Huawei, Hikvision, Dahua, and ZTE, which supply AI surveillance technology to 63 countries. 71 Huawei alone is the supplier to 50 countries. 72 Further, Chinese companies have acquired patents that target the Uighurs in China, a community that has historically been discriminated against. 73 These include:

The examples of China and the US should thus serve as a cautionary tale about the dangers of unregulated and unrestrained collaboration between law enforcement agencies and private enterprises in India. Given the focus on state agencies, the significant role of private corporations in the AI law enforcement ecosystem often gets overlooked.

Concerns: Loss of Privacy and the Rise of Mass Surveillance through the Use and Misuse of AI in Law Enforcement

In this section, we try to understand how AI systems such as FRT and ERT affect individuals, given that they retain their privacy even in the public sphere. These technologies can have long-lasting effects on an individual’s civil liberties and also be used to subjugate entire communities. One of the foremost effects of the use of these technologies is the resulting loss of an individual’s privacy in a public place due to the continuous tracking and monitoring facilitated by AI systems. Further, the threat of being identified and targeted by these technologies results in a chilling effect on the fundamental rights to free speech, assembly, and protest. Additionally, since these technologies are being used in a legal vacuum, there is little oversight and opportunity for seeking recourse in case a rights violation does occur.

A. Privacy in Public Places

Before evaluating the concerns of mass surveillance and loss of privacy due to the use and misuse of AI systems, it is important to understand how the Supreme Court of India views privacy in a public space. An individual does not completely lose her reasonable expectation of privacy simply because she has ventured onto a public street, especially since privacy in India is not limited to property or places but attaches itself to individuals. As Chandrachud J observed for the plurality opinion in K.S. Puttaswamy v. Union of India: 78

“33. Austin in his Lectures on Jurisprudence (1869) spoke of the distinction between the public and the private realms: jus publicum and jus privatum. The distinction between the public and private realms has its limitations. If the reason for protecting privacy is the dignity of the individual, the rationale for its existence does not cease merely because the individual has to interact with others in the public arena. The extent to which an individual expects privacy in a public street may be different from that which she expects in the sanctity of the home. Yet if dignity is the underlying feature, the basis of recognising the right to privacy is not denuded in public spaces. The extent of permissible State regulation may, however, differ based on the legitimate concerns of governmental authority.”

“323. Privacy includes at its core the preservation of personal intimacies, the sanctity of family life, marriage, procreation, the home and sexual orientation. Privacy also connotes a right to be left alone. Privacy safeguards individual autonomy and recognises the ability of the individual to control vital aspects of his or her life. Personal choices governing a way of life are intrinsic to privacy. Privacy protects heterogeneity and recognises the plurality and diversity of our culture. While the legitimate expectation of privacy may vary from the intimate zone to the private zone and from the private to the public arenas, it is important to underscore that privacy is not lost or surrendered merely because the individual is in a public place. Privacy attaches to the person since it is an essential facet of the dignity of the human being.”

Bobde J further concurred with this view, stating:

“403. Every individual is entitled to perform his actions in private. In other words, she is entitled to be in a state of repose and to work without being disturbed, or otherwise observed or spied upon. The entitlement to such a condition is not confined only to intimate spaces such as the bedroom or the washroom but goes with a person wherever he is, even in a public place. Privacy has a deep affinity with seclusion (of our physical persons and things) as well as such ideas as repose, solitude, confidentiality and secrecy (in our communications), and intimacy. But this is not to suggest that solitude is always essential to privacy. It is in this sense of an individual’s liberty to do things privately that a group of individuals, however large, is entitled to seclude itself from others and be private. In fact, a conglomeration of individuals in a space to which the rights of admission are reserved—as in a hotel or a cinema hall—must be regarded as private. Nor is the right to privacy lost when a person moves about in public. The law requires a specific authorisation for search of a person even where there is suspicion. [Narcotic Drugs and Psychotropic Substances Act, 1985, Section 42] Privacy must also mean the effective guarantee of a zone of internal freedom in which to think. The disconcerting effect of having another peer over one’s shoulder while reading or writing explains why individuals would choose to retain their privacy even in public. It is important to be able to keep one’s work without publishing it in a condition which may be described as private. The vigour and vitality of the various expressive freedoms guaranteed by the Constitution depends on the existence of a corresponding guarantee of cognitive freedom.”

B. Effect on free speech, right to assemble and protest

The use of AI tools such as FRT and ERT encourages mass surveillance – both in their deployment in crowded public spaces, where they are used to survey the movements of individuals; and at the back end, where they are used to scan datasets for identification. These mass surveillance tools are increasingly being deployed to police the exercise of civil rights. This can chill the exercise of the rights to free speech and free assembly, guaranteed under Articles 19(1)(a) and (b) of the Constitution.

Knowledge, or even the apprehension, of surveillance violates the privacy of individuals and pushes them to censor their speech and conduct. This is because the use of FRT and other AI-based tools on individuals, especially protestors, journalists, civil society activists and members of vulnerable or marginalised communities, can lead to harassment, punishment or arbitrary detention. 79

There is a certain anonymity afforded to individuals who participate in a mass protest, which is taken away when they are subject to extensive FRT or drone surveillance. Protests are essential in a constitutional democracy such as India because they provide space for citizens to vent their grievances, voice their opposition to government policies and try and influence these policies. In this manner, AI tools can undermine the right to peaceful assembly, guaranteed under the Constitution, and deter people from participating in protests. 80 Used in this manner, AI can facilitate surveillance and social control. 81

C. Lack of legislative or judicial oversight

The violation of fundamental rights can only be justified if the restriction is reasonable and proportionate. The Supreme Court in the Puttaswamy Privacy and Aadhaar judgments laid down a proportionality test, under which any restriction on fundamental rights is justifiable only if it is imposed under an adequate law, is a suitable measure to achieve a legitimate goal, is necessary for achieving that goal (with no less restrictive alternative available), and adequately balances competing rights. 82 The very first requirement, a legal framework authorising the deployment of AI tools in law enforcement, is not satisfied, since there is no legislation that governs the use of AI systems such as predictive policing algorithms, CCTV cameras, FRT and ERT in India. 83 For instance, the legal basis on which the AFRS stands is unclear. Responding to a legal notice from the Internet Freedom Foundation, the home ministry traced the legal basis for the AFRS to a Cabinet Note from 2009, 84 which, at best, is a document of procedure and a record of the proceedings of a Cabinet meeting. It can be amended at will by the government and has no legal consequence. 85

The absence of a law is particularly relevant given that India has not enacted any data protection legislation. The success of AI models in law enforcement relies on vast and detailed data collection and scraping mechanisms that help build large datasets against which a facial image/feature can be compared. However, there are no accountability mechanisms that regulate the collection, storage and usage of data, which lies at the heart of AI use in law enforcement.

The datasets on which the AI is trained are collected without the knowledge and consent of the individuals concerned, which is a clear infringement of one’s right to privacy and to determine the use of her data. This is a particularly serious problem in building predictive policing models: apart from privacy concerns, the underlying data on which the AI modelling is done comprises historically skewed and biased police data that over-represents over-policed and marginalised communities. 86

There appears to be a lack of clarity on the kind of data that will be tapped into and the manner of designing indigenous AI tools for Indian law enforcement. For instance, the new Criminal Procedure (Identification) Act 2022, which mandates the retention of collected data for 70 years, will also likely play a key role in building a database that may be accessed by AI systems for law enforcement. Given the various shortcomings of the Act, 87 especially its wide mandate regarding the persons (those accused of or detained for the breach of any law) from whom data may be collected, the Act may facilitate grave violations of the right to privacy and enable State-sponsored mass surveillance.

Finally, there are no procedural guarantees against abuse of such interference. The protection of databases containing personal data is also a concern, given the hacking of Maharashtra’s Criminal Investigation Department website in 2020. 88

Way Ahead

There is an urgent need to reduce crime, especially crimes against women, in the country. However, using AI as a silver bullet – even assuming improvements in accuracy, explainability and transparency – will not solve the underlying problems plaguing law enforcement in India. Instead, the government’s focus should be on improving policing, increasing the deployment of police personnel, especially women officers, on the streets, making it easier for women to interface with the police, and improving the judicial process.

Meanwhile, the use of AI must be regulated by laws and universally accepted technical specifications (developed in consultation with civil society), and not just by ethical norms and standards that are unenforceable and often unknown to the public. 89 Finally, as a society and polity, we need to seriously re-examine the role played by private corporations and the need for businesses to respect human rights. 90

Editors’ Comments

This chapter analysed the use of AI for policing, but as identified in earlier chapters, and in the next one, there are several other possible uses of AI. The next chapter provides a conceptual survey of the literature on mining legal data, along with a description of possible use cases; it also begins the volume’s exclusive focus on the judicial context.

References