In an era where everything is interconnected and data is the most valuable commercial commodity, cybersecurity continues to bring new products into a highly competitive market. The sector's value is set to reach $248 billion by 2023, and its boom is driven by the relentless growth of cyber threats, which each year demand higher-caliber weapons with better accuracy or wider reach.

Today's Internet crimes carry the greatest financial temptation, and the tools to commit them are widely available even to non-technical individuals. The returns are striking: profits start at a few hundred dollars and can reach tens of thousands. A hacker who spreads ransomware, for example, can earn around $84,000 per month on average.

This is a very profitable and accessible business, so it will certainly not slow down. In the future, all of our connected devices can expect to be under constant attack, while detecting cyber attacks becomes harder and more complex than ever before. The consequences range from lost revenue and regulatory fines to the end of business operations, or even loss of life.

As a result, the cybersecurity market will continue to grow: hackers will wield ever more sophisticated tools, and in turn security companies will have to develop their own tools to protect customers and provide quality solutions against the attackers' malicious technologies.

On both sides of this war, emerging technologies will continue to play a major role, and AI is no exception: cybercriminals can take AI designed for legitimate use cases and adapt it to illegal schemes.

Almost all Internet users are familiar with CAPTCHA, a tool that has been around for decades to stand up to credential stuffing by challenging non-human bots to read garbled text. Two years ago, however, a study by Google found that machine-learning-based optical character recognition (OCR) can solve 99.8% of these challenges.

Criminals also use artificial intelligence to crack passwords faster, accelerating these attacks with deep learning. Researchers fed purpose-built neural networks tens of millions of leaked passwords and asked them to generate hundreds of millions of new ones; in one experiment the generated guesses achieved a success rate of 26%, no small number for an initial attempt.
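The research above used generative neural networks, but the core idea, learning the statistical shape of human passwords from a leaked corpus and then sampling new guesses from that model, can be sketched with a much simpler character-level Markov chain. The five-entry `leaked` list below is a hypothetical stand-in for the tens of millions of real training passwords; this is an illustration of the approach, not the researchers' actual method.

```python
import random
from collections import defaultdict

# Hypothetical miniature stand-in for a leaked-password corpus; real
# experiments train on tens of millions of entries.
leaked = ["password1", "dragon99", "letmein", "sunshine", "qwerty12"]

def train_bigrams(corpus):
    """Record character-to-character transitions, with start/end markers."""
    model = defaultdict(list)
    for pw in corpus:
        chars = ["^"] + list(pw) + ["$"]
        for a, b in zip(chars, chars[1:]):
            model[a].append(b)
    return model

def generate(model, rng, max_len=16):
    """Walk the transition table to emit a new password-like string."""
    out, cur = [], "^"
    while len(out) < max_len:
        cur = rng.choice(model[cur])
        if cur == "$":
            break
        out.append(cur)
    return "".join(out)

rng = random.Random(42)
model = train_bigrams(leaked)
guesses = {generate(model, rng) for _ in range(200)}
```

In a real attack, guesses like these would be fed to a cracking tool and checked against stolen password hashes; a model trained on a large corpus produces far more human-plausible candidates than brute force.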

Consider the black market for cybercriminal tools and services: artificial intelligence can make operations more efficient and profitable. Beyond identifying the targets of attacks, cybercriminals can start and stop attacks involving millions of transactions in just a few minutes, thanks to fully automated infrastructure.


According to a paper published by Malwarebytes titled When Artificial Intelligence Goes Wrong, AI technology could usher us into an unwanted new era of malicious software called Malware 2.0. Although there are currently no examples of such AI malware, if the technology opens up new avenues for profit, "hackers will queue to buy it on the dark market or to use its open source versions of GitHub […]," according to the paper.

The biggest concern with using AI in malware is that new strains will be able to learn from detection incidents. If a strain of malware can determine why it was detected, it can avoid that behavior or characteristic next time. If detection is triggered by a file hash, for example, automated tooling can rewrite the malware; if basic attributes are detected, it can add randomness to defeat pattern-matching rules.
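The fragility of hash-based signatures described above can be seen in a few lines. The byte string here is a harmless placeholder standing in for a real binary; the point is only that any one-byte change produces a completely different signature, which is why automated rewriting defeats this kind of detection.

```python
import hashlib

# Placeholder bytes standing in for a real binary's contents.
original = b"example-binary-contents"
mutated = original + b"\x00"   # a trivial automated one-byte rewrite

sig_a = hashlib.sha256(original).hexdigest()
sig_b = hashlib.sha256(mutated).hexdigest()
changed = sig_a != sig_b       # the hash-based signature no longer matches
```

This is why modern detection leans on behavioral and statistical signals rather than exact file fingerprints.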

Artificial intelligence could also improve how some variants of Trojans work: small pieces of code inserted into well-known programs to perform malicious tasks in secret, typically weakening the target's firewall or compromising the device to steal data. AI may help these programs generate new versions of their files to deceive detection procedures.


In the face of this rapidly evolving threat, cybersecurity will harness the power of AI itself. Advanced antivirus tools can use machine learning to identify programs exhibiting unusual behavior, scan emails for indications of phishing attempts, and automate the analysis of system or network data to ensure continuous monitoring.
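As a rough illustration of the email-scanning idea, the sketch below scores a message against a hand-written list of phishing indicators. The patterns and weights are invented for this example; a production system would learn such signals from labelled mail corpora with machine learning rather than a fixed list.

```python
import re

# Illustrative indicators only; real systems learn these from data.
SUSPICIOUS_PATTERNS = [
    (r"verify your account", 2),
    (r"urgent", 1),
    (r"click (here|below)", 1),
    (r"password (reset|expired)", 2),
    (r"https?://\d{1,3}(\.\d{1,3}){3}", 3),   # links to raw IP addresses
]

def phishing_score(text):
    """Sum the weights of every indicator found in the message body."""
    lower = text.lower()
    return sum(w for pat, w in SUSPICIOUS_PATTERNS if re.search(pat, lower))

msg = "URGENT: your password expired. Click here: http://192.0.2.7/login"
score = phishing_score(msg)   # flag the mail once the score crosses a threshold
```

A filter would quarantine or flag any message whose score exceeds a tuned threshold, trading off false positives against missed phish.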

Given that the cybersecurity industry faces a growing skills gap, we can reasonably expect investments in “smart” cybersecurity systems to be the best future course of action.

In 2019 at the RSA security conference, Symantec's chief technology officer described three cases he had witnessed in which hackers used computer-generated "deepfake" voices to deceive employees into transferring millions of dollars. In one case, for example, someone pretending to be a company's CEO called an employee in the finance department and requested an urgent transfer of $10 million.

Top CEOs often appear in public on behalf of their companies, which gives hackers plenty of opportunities to obtain and transcribe audio recordings. And if the potential return is several million dollars, what is to stop a hacker from calling an executive under a pretext and engaging them in conversation just to obtain a recording?


Deepfake calls can also be used to make employees change payment details on accounts, open phishing emails, reset passwords, disable security measures, and a host of other malicious activities, said Maria Bada, a senior researcher at the Cambridge Cybercrime Centre at the University of Cambridge. To do this, criminals are using Lyrebird, an app that allows anyone to clone a voice so that software can speak written sentences in that same voice.

The more data an AI system has, the smarter it gets, and there is a wealth of stolen passwords to learn from. At the beginning of June, researchers from the Stevens Institute of Technology in the US presented a new and improved version of their PassGAN project. First presented in 2017, it was trained on leaked passwords, then tested against another set of passwords leaked from LinkedIn, and was able to crack 27% of them.

PassGAN has since been upgraded with a form of reinforcement learning similar to the one AlphaZero used to learn chess, so it can now adapt automatically during an ongoing attack to keep improving its password guessing. This is another reason data centers are moving away from traditional passwords toward multi-factor authentication systems that do not rely on a password alone.
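The most common second factor in such systems is the time-based one-time password (TOTP) that authenticator apps display. Because the six-digit code is derived from a shared secret and the current time, a stolen or AI-guessed password alone is not enough to log in. The scheme is small enough to sketch with the standard library, shown here at a fixed timestamp from the RFC 6238 test vectors so the result is deterministic:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, period=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32)
    counter = int((timestamp or time.time()) // period)   # 30-second steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Deterministic demo: the RFC 6238 test secret at a fixed instant.
code = totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59)
```

The server holds the same secret, computes the same code for the current time window, and accepts the login only if both the password and the code match, so an attacker needs the device as well as the password.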

Derek Manky, a global security strategist at Fortinet, a California-based multinational that develops and markets cybersecurity products and services such as firewalls, antivirus and intrusion-prevention software, said hackers are also using artificial intelligence and machine learning to automate attacks against enterprise networks. AI and machine learning allow cybercriminals to create malware smart enough to search for vulnerabilities and decide which payload to use against them. This means the malware does not have to connect back to command-and-control servers, and can get away with impunity.

Artificial intelligence and related systems are becoming ubiquitous in the global economy, and the same is happening in the covert criminal realm: the source code, data sets, and methods for developing and maintaining these powerful capabilities are open and accessible to everyone. Wherever there is a financial incentive to use them for malicious purposes, that is where investment and focus will go, and data centers should adopt a zero-trust approach to detecting malicious automation.
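One building block of that zero-trust stance is to treat every client as untrusted and watch its request rate: automated attacks tend to produce bursts no human would. The sliding-window detector below is a minimal sketch of the idea, with thresholds invented for the example rather than taken from any product.

```python
from collections import defaultdict, deque

class BurstDetector:
    """Flag clients whose request rate looks automated rather than human."""

    def __init__(self, max_requests=20, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)   # client_id -> recent timestamps

    def record(self, client_id, timestamp):
        """Log one request; return True if the client exceeds its budget."""
        q = self.history[client_id]
        q.append(timestamp)
        while q and q[0] <= timestamp - self.window:
            q.popleft()                     # drop requests outside the window
        return len(q) > self.max_requests

det = BurstDetector(max_requests=5, window_seconds=10)
flags = [det.record("bot-1", t) for t in range(8)]   # 8 hits in 8 seconds
```

Once flagged, a client would face a challenge, throttling, or a block; real deployments combine rate signals with many other behavioral features.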
