THE GROWING THREAT OF A.I.-ASSISTED CRIMES

Anand Gandhi’s “OK Computer” is a sci-fi comedy series set in 2031, starring Radhika Apte, Jackie Shroff and Vijay Varma. In the series, on a beautiful moonlit night in a tranquil coastal town in North Goa, a self-driving car slams into a pedestrian and kills him instantly. The police are confronted with three vexing questions about culpability: Is the CEO of the taxi company culpable? Is the programmer? Or is the car’s A.I. system itself culpable? When the police commence investigations, the detective played by Vijay Varma concludes it was wilful murder. But the character played by Radhika Apte, who heads an organisation for the ethical treatment of robots, disputes this, as she believes that A.I. is incapable of harming humans. The questions the show hurls at us are whether an A.I. can enable or commit a crime, and if it does, who should be held culpable.

Technology is a double-edged sword. The advent of the internet brought internet crimes, and with the inception of social media, crimes on those platforms proliferated. “OK Computer” may be pure fiction, but artificial intelligence (A.I.) could play a growing role in committing and enabling crimes in the future. As A.I. proliferates across sectors such as public safety, administration and finance, attacks on AI-based systems are likely to rise, and many criminal, political and terror scenarios could arise from the targeted disruption of such systems.

For instance, AI-generated fake content in the media could breed widespread mistrust and erode faith in audio and visual content. Deep fakes are becoming extraordinarily sophisticated, convincing and more challenging to prevent. Fake content on social media has frequently affected democracy and national politics. For instance, a doctored video of House Speaker Nancy Pelosi, altered to make her speech appear drunken and slurred, garnered over 2.5 million views on Facebook in 2020. Using A.I., a U.K.-based organisation called Future Advocacy created a deep-fake video in 2019 showing election rivals Boris Johnson and Jeremy Corbyn endorsing each other for the post of Prime Minister. Though algorithms exist to detect deep fakes online, manipulated videos still have several avenues to spread undetected; detecting deep fakes at the point of upload may be the need of the hour. The GAN (“generative adversarial network”), an A.I. technique introduced in 2014, has made hoaxes, doctored videos and forged voice clips far easier to produce, with convincing results.
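The adversarial idea behind GANs can be shown with a toy sketch (everything here, from the distributions to the learning rates, is illustrative and not drawn from the article): a tiny “generator” learns to produce numbers, and a “discriminator” learns to tell them apart from real samples centred near 4.0. As training alternates, the generator’s output drifts toward the real data until the discriminator can no longer separate the two.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: 1-D samples from a normal distribution centred at 4.0
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: affine map of noise, G(z) = w_g*z + b_g (starts producing samples near 0)
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w_d*x + b_d)
w_d, b_d = 0.1, 0.0
lr, batch = 0.05, 32

for step in range(2000):
    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = w_g * z + b_g
    dr, df = sigmoid(w_d * xr + b_d), sigmoid(w_d * xf + b_d)
    w_d -= lr * (np.mean((dr - 1.0) * xr) + np.mean(df * xf))
    b_d -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # --- generator step: push D(fake) toward 1 (i.e. fool the discriminator)
    z = rng.normal(0.0, 1.0, batch)
    xf = w_g * z + b_g
    df = sigmoid(w_d * xf + b_d)
    dx = (df - 1.0) * w_d            # dL/dx_f for the generator loss -log D(x_f)
    w_g -= lr * np.mean(dx * z)
    b_g -= lr * np.mean(dx)

# After training, the generator's output mean has drifted toward the real mean of 4.0
fake_mean = np.mean(w_g * rng.normal(0.0, 1.0, 1000) + b_g)
```

Real deep-fake generators replace these two scalar models with deep neural networks over images or audio, but the adversarial training loop is the same.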

Further, in a democracy, A.I. could also threaten the fundamental rights of citizens. For instance, politicians or parties with power and authority could use A.I. to analyse mass-collected data and create targeted propaganda to mislead voters. During elections, they could circulate fake videos for social manipulation and deception.

Furthermore, A.I. technologies power autonomous systems. Autonomous vehicles may be in their infancy, but they could become more common in the future and run the risk of being repurposed as weapons. Criminals could load an autonomous vehicle with explosives and send it to a chosen destination, or hack one and use it to damage property or attack pedestrians. It may also be possible to take control of an autonomous car through its hardware or software: a malicious attacker exploiting security gaps could commandeer a vehicle or deliberately cause it to crash, and the ability to deploy a vehicle without a human at the wheel would likely make such attacks far easier to scale. Autonomous drones are not at present being used for crimes of violence, but their mass and kinetic energy are potentially destructive if well targeted, and criminals could fit drones with weapons that would prove lethal in self-organising swarms.

Natasha Bajema, in her book “Rescind Order,” portrays a scenario of AI-based systems going haywire: an automated command-and-control system detects an incoming nuclear attack and automatically issues the launch order for a nuclear weapon. The protagonist cannot verify whether the automated system has detected a false attack or a real one, and has precisely 8 minutes and 53 seconds to decide. “Rescind Order” narrates a heart-rending story of U.S. decision-makers steering through a nuclear crisis in the year 2033, during a tricky era of autonomous systems, social-media communication and deep fakes, an era we too are likely to encounter shortly.

Another AI-based crime, “tailored phishing,” is likely to give cyber-crime experts sleepless nights. Criminals collect information by installing malware or through digital messages that impersonate a trusted party, such as the user’s bank. The phisher exploits this trust to persuade the user to take actions he would otherwise be wary of, such as revealing passwords or clicking on dubious links.

Likewise, culprits may use A.I. as a blackmail tool, harvesting personal information from social media or from large personal datasets such as phone contents and browser history, and using it to tailor threat messages to their targets. A.I. could also generate fake evidence and assist criminals in sextortion, which involves hacking into the victim’s computer or phone to extract videos or personal pictures and blackmail the victim for sexual favours or money.

Criminals could also use A.I. to poison data. For instance, a smuggler intending to bring weapons aboard a plane could make an automated X-ray threat detector insensitive to firearms. Criminals could use A.I. to mislead an investment advisor into making unexpected recommendations, then exploit the resulting shifts in market value. They could also capitalise on the rampant proliferation of A.I. in sectors such as power and food, causing anything from widespread power disruption and traffic gridlock to the breakdown of food logistics. Systems responsible for public safety and security are likely to become crucial targets, as are those handling financial transactions. Criminals could also use A.I. to trick face-recognition systems, deny victims access to online activities, and create AI-authored fake reviews. They may also use it for AI-assisted stalking and for forging content such as art or music.
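The X-ray scenario above is a classic data-poisoning attack, and a minimal sketch makes the mechanism concrete. All the details here are hypothetical: a one-dimensional “metal signature” feature stands in for an X-ray scan, and a simple logistic-regression detector stands in for the real threat-detection model. Injecting firearm-like training samples that are mislabelled as benign retrains the detector to wave firearms through.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, y, steps=3000, lr=0.1):
    """Fit a 1-D logistic regression p(firearm) = sigmoid(w*x + b) by gradient descent."""
    w = b = 0.0
    for _ in range(steps):
        p = sigmoid(w * x + b)
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

# Hypothetical X-ray feature: benign items cluster low, firearms cluster high.
benign = rng.normal(1.0, 0.5, 200)
firearm = rng.normal(4.0, 0.5, 200)
x_clean = np.concatenate([benign, firearm])
y_clean = np.concatenate([np.zeros(200), np.ones(200)])

w, b = train(x_clean, y_clean)
p_clean = sigmoid(w * 4.0 + b)        # clean detector's verdict on a firearm-like item

# Poisoning: inject firearm-like samples mislabelled as benign, then retrain.
poison = rng.normal(4.0, 0.5, 400)
x_bad = np.concatenate([x_clean, poison])
y_bad = np.concatenate([y_clean, np.zeros(400)])

w2, b2 = train(x_bad, y_bad)
p_poisoned = sigmoid(w2 * 4.0 + b2)   # poisoned detector's verdict on the same item
```

The clean detector flags the firearm-like item with high confidence, while the poisoned one scores it below the alarm threshold; real attacks work the same way, only against far larger models and training pipelines.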

Unlike conventional crimes, crimes in the cyber domain can be repeated, shared or sold to other criminals for perpetrating further crimes. UCL’s Matthew Caldwell suggests we may soon witness the marketisation of AI-enabled crime with the advent of “Crime as a Service” (CaaS). To counter and deter such A.I. risks, legislation covering A.I. crimes is needed within the cyber-crime framework.

Finally, A.I. is encroaching on the spiritual domain as well. We are today witnessing online houses of worship and robot priests, and the pandemic is replacing traditional worship with virtual tools. The intersection of technology and spirituality is arriving much faster than many expected. Digitally mediated religious communities are sometimes proving more attractive, and allowing more connectivity, than brick-and-mortar churches and temples.

Source: DT Next e-paper, Chennai, 16.05.2021

Dr. K. Jayanth Murali is an IPS officer of the 1991 batch, borne on the Tamil Nadu cadre. He lives with his family in Chennai, India, and currently serves the Government of Tamil Nadu as Additional Director General of Police, Law and Order.
