Delhi: Technologies based on Artificial Intelligence (AI) and Machine Learning (ML) have seen dramatic increases in capability, accessibility and widespread deployment in recent years, and their growth shows no sign of abating. While the most visible AI technologies are those marketed as such, learning-based methods are employed behind the scenes far more widely. From route-finding on digital maps to language translation, biometric identification to political campaigning, industrial process management to food supply logistics, and banking and finance to healthcare, AI saturates every aspect of the present-day connected world.
Col. Inderjeet explains why deepfakes are among the most serious AI crime threats. Alongside the adoption of AI for productive use cases, AI-enabled cybercrimes such as identity fraud, based on deepfakes, synthetic images and deepfake videos, are already growing.
Deepfakes are among the fastest-growing tactics used by fraudsters, who are now turning to synthetic identities to open new accounts. A synthetic identity can be completely fabricated or assembled from an amalgamation of false and stolen information: Personally Identifiable Information (PII) may be hacked from a database, phished from an unsuspecting person, or bought on the dark web. Because the impact on those whose PII has been compromised or stolen is limited, this kind of fraud often goes unnoticed for longer than traditional identity fraud.
Deepfakes are the result of using artificial intelligence to digitally recreate an individual’s appearance with great accuracy, making it possible to show someone saying something they never said or appearing somewhere they have never been. YouTube is rife with examples of varying quality, but it is easy to see how a well-made deepfake could be damning to someone who is targeted maliciously.
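For readers curious about the mechanics, the sketch below is a minimal, illustrative example (in PyTorch, which is an assumption; the article names no specific tooling) of the shared-encoder, per-identity-decoder autoencoder design commonly associated with face-swap deepfakes: one encoder learns a generic face representation, and a "swap" is produced by decoding person A's features with person B's decoder. Layer sizes are arbitrary and training is omitted; this is a sketch of the idea, not a working deepfake tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# face-swap deepfakes. Sizes and data are placeholders for illustration only.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 face crop to a latent vector."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face image from the latent vector."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))


# One shared encoder, one decoder per identity (trained on each person's faces).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

faces_a = torch.rand(4, 3, 64, 64)      # stand-in for aligned face crops of person A
swapped = decoder_b(encoder(faces_a))   # decode A's features with B's decoder: the "swap"
print(swapped.shape)                    # torch.Size([4, 3, 64, 64])
```

The same shared-representation trick is what lets consumer apps swap faces so easily once enough images of both people are available.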
Deepfakes have been ranked as one of the most serious Artificial Intelligence (AI) crime threats, based on the wide array of criminal and terrorist activities they can be used for.
When the term was first coined, the idea of deepfakes triggered widespread concern, centered mostly on the misuse of the technology to spread misinformation, especially in politics. Another concern revolved around bad actors using deepfakes for extortion, blackmail and fraud for financial gain.
The rise of deepfakes and AI-enabled synthetic media makes it easier for fraudsters to generate highly realistic images or videos of people for these synthetic identities and to commit serious fraud. Plenty of mobile apps already allow anyone to convincingly replace a celebrity’s face with their own, even in videos, and turn the result into viral social media content.
Fake audio or video content has been ranked by experts as the most worrisome use of Artificial Intelligence in terms of its potential applications for cybercrime or cyberterrorism. Researchers from University College London (UCL) have released a ranking of what experts believe to be the most serious AI crime threats.
The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern, based on the harm they could cause, their potential for criminal profit or gain, how easy they would be to carry out, and how difficult they would be to stop.
Audio/video impersonation worried the experts most, followed by tailored phishing campaigns and driverless vehicles being used as weapons. Fake content would be difficult to detect and stop, and it could serve a variety of aims, from discrediting a public figure to extracting funds by impersonating a couple’s son or daughter in a video call. Such content could also lead to widespread distrust of audio and visual evidence, which would itself be a harm to society.