Blog
IS IT ALRIGHT TO TRUST ALGORITHMS ALTOGETHER?

The film Minority Report often comes to mind when we think of predictive policing. In the film, a clairvoyant group foresees a crime, and the police arrest individuals on the strength of that foresight before the crime is committed. But algorithmic policing is nothing like Minority Report. Here, an algorithm uses data on the times, locations, and nature of past crimes to tell police strategists where and when patrols should operate, or maintain a presence, to prevent and detect crime. Such predictions help in the intelligent targeting of police resources.

With each passing day, algorithms play an increasingly active role in many facets of our lives. The legendary Silicon Valley entrepreneur Mr Vinod Khosla has referred to the contemporary age as the age of "Dr Algorithm", predicting a healthcare revolution in which A.I., big data, and automated diagnostics could meet 90-99 per cent of our health needs, often without a doctor.

An algorithm "is a step-by-step protocol for calculations. We can use algorithms for calculation, data processing, and automated reasoning." Algorithms are becoming a ubiquitous part of our lives and are today deployed in diverse fields. In law enforcement, they power predictive policing, while on the roads, red-light and speed cameras detect transgressions of the law. At border control, A.I. flags travellers and their baggage for screening. In finance, credit-scoring algorithms such as the FICO score determine an individual's creditworthiness. In intelligence collection and surveillance, CCTV cameras spot unusual activity through computer-vision analysis. In the military, armed drones and other robots find targets and kill without human intervention. Some dating sites, such as eharmony, promise to use mathematics to find a person's soulmate and perfect match.

The tantalising possibility of foreseeing crime before it transpires has probably got law enforcement agencies most excited about algorithmic policing. Police departments and courts in the USA and several other nations have embedded crime-predicting algorithms, facial recognition, and pretrial and sentencing software deep inside their legal systems. In some studies, algorithms have been more accurate than people in predicting recidivism; in certain tests, the tools approached 90 per cent accuracy in predicting which defendants might get arrested again. Predictive analytics uses historical data to predict future events: typically, historical data is used to build a mathematical model that captures the essential trends.
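The idea of building a model from historical data can be sketched in a few lines. The example below is a deliberately minimal, hypothetical illustration (the data, bucketing scheme, and field names are invented for this sketch; real risk-assessment tools use far richer features and statistical models): it estimates re-arrest rates from past records and uses those rates as predictions for new cases.

```python
from collections import defaultdict

def train_recidivism_model(records):
    """Estimate re-arrest rates per prior-arrest bucket from historical records.

    records: list of (prior_arrests, reoffended) pairs -- toy, invented data.
    Returns a dict mapping bucket -> observed re-arrest rate.
    """
    counts = defaultdict(lambda: [0, 0])  # bucket -> [re-arrests, total]
    for prior, reoffended in records:
        bucket = min(prior, 3)            # group 3+ priors into one bucket
        counts[bucket][0] += int(reoffended)
        counts[bucket][1] += 1
    return {b: hits / total for b, (hits, total) in counts.items()}

def predict(model, prior_arrests):
    """Return the estimated re-arrest probability for a new defendant."""
    return model.get(min(prior_arrests, 3), 0.0)

# Toy history: (number of prior arrests, re-arrested within two years?)
history = [(0, False), (0, False), (0, True), (1, True), (1, False),
           (2, True), (2, True), (3, True), (5, True), (4, False)]
model = train_recidivism_model(history)
print(predict(model, 1))   # re-arrest rate observed for one-prior defendants
```

The essential point the sketch captures is that the "prediction" is nothing more than a pattern extracted from past data, which is also why any bias in that data flows straight into the model's output.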

Most of the time, the patterns inherent in the crimes themselves provide ample information to predict which places and windows of time are at the highest risk of future crime. The basic assumption behind predictive policing is that much crime is not random. Home burglaries, for example, are relatively predictable: once a house is burgled, the likelihood of that house, or those near it, being burgled again spikes in the following days. In such a prediction method, crimes are tied to places rather than individuals, and a visible law-enforcement presence can be an effective deterrent to subsequent offences.
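This "near-repeat" burglary pattern can be turned into a simple risk score. The sketch below is a hypothetical illustration, not any vendor's actual method: each recent burglary within a small radius of a location contributes risk that decays exponentially with time, so places near fresh break-ins score highest. All parameter values (a 14-day decay, a 0.5 km radius) are assumptions chosen for the example.

```python
import math

def burglary_risk(past_events, location, day, decay_days=14.0, radius=0.5):
    """Near-repeat risk score for `location` on `day`.

    past_events: list of (x_km, y_km, day) burglary records (toy data).
    Each past burglary within `radius` km adds risk that decays
    exponentially over `decay_days`.
    """
    risk = 0.0
    for ex, ey, eday in past_events:
        age = day - eday
        dist = math.hypot(ex - location[0], ey - location[1])
        if age >= 0 and dist <= radius:
            risk += math.exp(-age / decay_days)
    return risk

events = [(0.0, 0.0, 10), (0.1, 0.0, 12)]          # two recent break-ins
print(burglary_risk(events, (0.0, 0.05), day=13))  # nearby: elevated risk
print(burglary_risk(events, (5.0, 5.0), day=13))   # far away: 0.0
```

A patrol planner would rank grid cells by such a score and direct presence to the highest-scoring cells, which is the sense in which predictions attach to places and times rather than to individuals.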

Proponents argue that predictive policing can forecast crime more accurately and allocate resources more effectively than traditional police methods. It is just one of several ways police departments worldwide have incorporated big-data techniques into their work over the last two decades.

Predictive policing programs are currently in use by police departments in several U.S. states, including California, Washington, South Carolina, Arizona, Tennessee, and Illinois. They are also being implemented by Kent County Police in the U.K., by forces in the Netherlands, and by the Suzhou Police Bureau in China. India, too, has set out towards algorithmic policing in its own way. A Gurgaon-based startup called "Staqu" is using big data to identify criminals and find missing persons. Staqu launched an AI-based human efface detection (ABHED) application for effective policing and has integrated the app with the police databases of eight Indian states, including Rajasthan and Punjab, to identify criminals by facial recognition.

Although the writers of these algorithmic formulas may endorse their algorithms as perfectly neutral, the truth could be something else. An algorithm can easily be coloured by the biases of the person who authored it. And how can we know how a black-box algorithm, protected by intellectual property law as a trade secret, is behaving? The FICO algorithm, for instance, plays a significant role in Americans' access to credit and earns hundreds of millions of dollars each year, yet it has never been disclosed. It is a closely guarded secret.

The near-total absence of transparency in the algorithms that drive the world means that we, the people, have no insight and no say in profoundly crucial decisions being made about us and for us. The concentrated power of algorithms to harm us has gone unnoticed by most until now. Without insight and transparency into the algorithms that are running our world, there can be no accountability or true democracy. As a result, the twenty-first-century society we are building is becoming increasingly vulnerable to manipulation by authors who operate the algorithms that pervade our lives.

Algorithmic technology has infuriated several activists. They contend that the technology, by predicting crime and violence instead of reducing crime, has led to over-policing and mass imprisonment, perpetuated racism, and increased tensions between police and communities. Algorithms meant to foretell where crime will happen often justify massive, and often aggressive, deployments to neighbourhoods already suffering from poverty. Ultimately, these algorithms have failed to reduce costs, forcing taxpayers to cough up more money for policing. A few local governments have already placed moratoriums on algorithmic systems in recent months; Santa Cruz, California, became the first city in the USA to ban predictive policing algorithms.

The recent pandemic witnessed the release of thousands of people from jails, where social distancing is near-impossible. At the same time, police officers, wary of overcrowding jails, curtailed arrests. These changes coincided with a drop in crime in several cities. Our experience during the Covid-19 outbreak therefore suggests that police can arrest and jail far fewer people without jeopardising security, strengthening the case for curbing algorithmic decision-making.

Today, big data, cloud computing, artificial intelligence, and the Internet of Things act on physical objects on our behalf in 3-D space. Having an A.I.-driven robot that cleans your house and makes coffee for you is fine. But what if the robot's algorithm mistakenly identifies its owner as a threat and eliminates him, as happened to Kenji Urada, a thirty-seven-year-old Kawasaki employee who, in 1981, was crushed to death when a factory robot pushed him into a grinding machine?

Finally, we have entered a new era where the algorithm rules. Algorithms determine what search results we see on Google. If the human brain's intelligence could be wrapped up in a particular algorithm, imagine what it would mean for A.I.: the same algorithm could apply to how A.I. neural networks work. Further, what if we could make machines driven by algorithms conscious? Could we program them to contain a soul?

Source: DT Next e-paper, Chennai, 11.04.2021

Dr K. Jayanth Murali is an IPS officer of the 1991 batch, borne on the Tamil Nadu cadre. He lives with his family in Chennai, India, and currently serves the Government of Tamil Nadu as Additional Director General of Police, Law and Order.
