Deepfakes are the AI technology that most worries crime experts: fake audio or video content has been ranked by experts as the most concerning use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new University College London (UCL) report.

The UCL research team first identified 20 different ways criminals could use AI over the next 15 years, then asked 31 AI experts to rank them by risk, based on the harm they could cause, the money criminals could earn, how easy they would be to carry out, and how difficult they would be to stop.
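As a rough illustration of that ranking step (not the study's actual method or data), each expert's scores on the four criteria can be averaged per crime and the crimes sorted by their mean. All names and numbers below are invented:

```python
# Hypothetical illustration of the ranking described above: each expert
# scores every AI-enabled crime on four criteria (1-5 scale), and crimes
# are ranked by their mean score across criteria and experts.
from statistics import mean

# ratings[crime] = one 4-tuple per expert: (harm, profit, ease, difficulty to stop)
ratings = {
    "deepfakes":        [(5, 4, 4, 5), (5, 5, 4, 4)],
    "tailored_phishing": [(4, 4, 5, 3), (3, 4, 4, 3)],
    "burglar_bots":     [(2, 2, 3, 1), (2, 3, 2, 2)],
}

def overall_score(expert_scores) -> float:
    # Average across the four criteria within each expert, then across experts.
    return mean(mean(s) for s in expert_scores)

ranking = sorted(ratings, key=lambda c: overall_score(ratings[c]), reverse=True)
print(ranking)  # e.g. ['deepfakes', 'tailored_phishing', 'burglar_bots']
```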

Deepfake technology, which criminals adopt to create AI-generated videos of real people saying things they never said, ranked first for two main reasons:

  1. Deepfakes are difficult to detect and prevent: automated detection methods remain unreliable, and the technology keeps improving at deceiving human eyes. A recent Facebook competition to detect deepfakes with algorithms led researchers to acknowledge that the problem is large and unsolved (see the sketch after this list).
  2. Deepfakes can be used for a wide variety of crimes and misdeeds, ranging from discrediting public figures to extracting money from the public by impersonating people. Just this week, a video clip of Nancy Pelosi in which she appeared very drunk spread virally, and deepfake technology has already helped criminals steal millions of dollars.
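To make the detection difficulty in point 1 concrete, here is a minimal sketch of how automated deepfake detection is commonly framed: a trained classifier assigns each sampled video frame a probability of being fake, and the per-frame scores are averaged into a verdict. The `score_frame` stub below is a hypothetical stand-in for a trained model; only the OpenCV frame-reading calls are real library API:

```python
# Illustrative sketch (not from the UCL study): deepfake detection framed
# as per-frame binary classification over sampled video frames.
import cv2  # OpenCV, assumed installed

def score_frame(frame) -> float:
    """Hypothetical classifier: returns P(frame is fake) in [0, 1]."""
    # A real detector would run a trained CNN here; this is a dummy value.
    return 0.5

def video_fake_probability(path: str, sample_every: int = 30) -> float:
    """Average fake scores over every Nth frame of the video at `path`."""
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if index % sample_every == 0:
            scores.append(score_frame(cv2.resize(frame, (224, 224))))
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Even the best entries in that Facebook competition produced unreliable scores on unseen videos, which is why researchers consider the detection problem unsolved.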

In addition, the researchers fear that this technology will erode people's trust in audio and visual evidence, which would cause major societal harm.

First author Dr Matthew Caldwell (UCL Computer Science) said: “People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

The study also identified five other major AI threats: driverless vehicles used as weapons, tailored phishing, harvesting online data for blackmail, attacks on AI-controlled systems, and fake news.

But the researchers were less concerned about “burglar bots” that enter homes through letterboxes and cat flaps, as they are easy to catch. They also classified AI-assisted stalking as a crime of low concern: although it is extremely harmful to victims, it cannot operate at scale.
