Natural language processing technologies continue to advance rapidly and are now among the most prominent applications of artificial intelligence in use today.
In a small study, researchers used the GPT-3 deep language model, together with other AI tools and services, to craft convincing phishing email content.
The results have raised concerns that fraudsters will increasingly rely on artificial intelligence, alongside cheap, readily available services, to craft fraudulent messages, phishing emails in particular.
Using artificial intelligence in phishing attacks
At the Black Hat and DEF CON security conferences in Las Vegas, a team from Singapore's Government Technology Agency presented the results of an experiment it had carried out.
The experiment involved sending phishing emails, one of the most common types of attacks, whose main objective is to get the recipient to click a malicious link embedded in the message.
The messages were sent to 200 people who were unaware of the experiment. Some of the messages were written by humans, i.e. by team members, while the rest were written by artificial intelligence.
The links in those messages were not actually malicious, but they did let the team count how many times they were clicked. As might be expected, the AI-written messages drew significantly more clicks on the embedded link than the messages written by humans.
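The article does not describe the team's tracking setup in detail. Purely as an illustration, the minimal sketch below (in Python, using Flask and hypothetical URLs) shows one common way harmless tracking links like these can be generated per recipient and used to count clicks for each group of emails.

```python
# Hypothetical sketch of per-recipient tracking links: each recipient gets a
# unique token, and visiting the link increments a counter for that recipient's
# group ("ai" or "human"). This is an assumption for illustration, not the
# team's actual tooling.
import secrets
from collections import Counter

from flask import Flask, redirect

app = Flask(__name__)

token_group = {}   # token -> group the email belonged to ("ai" or "human")
clicks = Counter() # group -> number of clicks recorded

def make_tracking_link(group: str, base_url: str = "https://example.org/t") -> str:
    """Create a unique, non-malicious tracking URL for one recipient."""
    token = secrets.token_urlsafe(8)
    token_group[token] = group
    return f"{base_url}/{token}"

@app.route("/t/<token>")
def track(token: str):
    """Record the click for the recipient's group, then show a benign page."""
    group = token_group.get(token)
    if group is not None:
        clicks[group] += 1
    return redirect("https://example.org/awareness-training")
```

Comparing the final counts for the two groups is then enough to tell which set of emails, AI-written or human-written, attracted more clicks.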
The team notes that developing a high-performing AI model from scratch costs millions of dollars, which puts it within reach of large corporations or governments rather than individuals.
Ease of use in the future
On the other hand, scammers may not need to build complete systems of their own. Instead, they can use ready-made AI services, which can be operated easily and without programming knowledge.
Publicly accessible AI models and services, such as those offered by OpenAI, can also be used.
During the experiment, the team used OpenAI's GPT-3 model along with other available AI services, which helped analyze and understand each target's personality so that the person would be easier to deceive.
Despite the striking results achieved in this experiment, the team behind it made clear that a sample of only 200 people is too small to support firm conclusions, and repeating the experiment on a larger sample could yield different results.
The researchers also noted that the human-written messages in the experiment may not match the quality of those crafted by professional scammers. Even so, they praised how professional the AI-written messages were.