AI is becoming a weapon for cyber attacks


Forecasting cybersecurity threats for 2024, the security research group FortiGuard Labs (USA) said that artificial intelligence (AI) is increasingly being “weaponized” by bad actors for cyber attacks. The technology is being applied at many stages, from defeating security algorithms to creating deepfake videos that imitate a person's behavior and voice to deceive users. In the coming period, hackers will exploit AI in new ways, leaving security systems unable to keep up.

One of the first worrying trends is the use of artificial intelligence to fake personal profiles. After harvesting data from social networking platforms and public websites, hackers use AI to combine the information into highly authentic fake profiles, increasing the likelihood that a fraud attempt succeeds. According to FortiGuard Labs, security teams will face a major challenge in recognizing and handling a wave of “virtual people” in cyberspace.

The AI symbol is placed on the computer motherboard. Photo: Reuters


Password security also becomes harder with AI in play. Current password-cracking methods revolve around predicting and trying many different character strings. Using machine learning tools, AI can analyze frequently used passwords, find common characteristics, and identify guessing patterns with high accuracy, significantly reducing cracking time.
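The pattern-mining idea can be illustrated without any machine learning at all: even a simple frequency count over leaked password lists recovers common structural templates (capital first letter, lowercase word, trailing digits) that let an attacker try likely shapes first. A minimal sketch, using a small invented sample list:

```python
import re
from collections import Counter

# Hypothetical sample of leaked passwords (illustrative only).
leaked = ["Summer2023!", "Dragon99", "Winter2022", "Pa55word", "Hello123", "Qwerty2023"]

def template(pw: str) -> str:
    """Reduce a password to a coarse structural template:
    U = uppercase run, l = lowercase run, d = digit run, s = symbol run."""
    t = re.sub(r"[A-Z]+", "U", pw)
    t = re.sub(r"[a-z]+", "l", t)
    t = re.sub(r"[0-9]+", "d", t)
    t = re.sub(r"[^Uld]+", "s", t)
    return t

counts = Counter(template(p) for p in leaked)
# The most frequent templates guide which candidate shapes to try first.
print(counts.most_common(3))  # → [('Uld', 4), ('Ulds', 1), ('Uldl', 1)]
```

Here four of the six samples collapse to the same “Capital word + digits” template, which is exactly the kind of regularity a trained model exploits to prioritize its guesses.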

AI is also capable of defeating password-protection measures, including lockouts that block access after several incorrect attempts in a short period. By learning the rules of the security system, artificial intelligence can pace its guessing attempts to avoid detection. Some AI models are even trained to solve CAPTCHAs, the challenge used to distinguish bots from human users at login.
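The kind of rule such a model learns to evade can be sketched as a lockout policy with exponential backoff: once an attacker infers the base delay and doubling behavior, it can time its guesses to land just after each unlock. The class below is a minimal, illustrative implementation with invented names and parameters, not a production design:

```python
import time

class LoginThrottle:
    """Exponential backoff after failed logins (illustrative sketch)."""

    def __init__(self, base_delay=1.0, max_failures=5):
        self.base_delay = base_delay
        self.max_failures = max_failures
        self.failures = {}       # username -> consecutive failure count
        self.locked_until = {}   # username -> unlock timestamp

    def allowed(self, user, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.locked_until.get(user, 0.0)

    def record_failure(self, user, now=None):
        now = time.monotonic() if now is None else now
        n = self.failures.get(user, 0) + 1
        self.failures[user] = n
        # Lockout doubles with each failure: 1 s, 2 s, 4 s, 8 s, ...
        self.locked_until[user] = now + self.base_delay * 2 ** (n - 1)
        return n >= self.max_failures  # True -> escalate (e.g. require a CAPTCHA)

    def record_success(self, user):
        self.failures.pop(user, None)
        self.locked_until.pop(user, None)
```

A bot that simply retries every second trips this throttle immediately; a model that has learned the doubling rule waits out each lockout exactly, which is why the article notes that fixed-rule defenses alone are not enough.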

Also according to FortiGuard Labs experts, AI models themselves are exploited by hackers through “AI poisoning attacks”. Starting at the training stage, hackers penetrate the system and corrupt the data source, causing the AI to learn incorrectly or behave in unwanted ways, inflicting serious damage on the model's owner. These flawed AIs pose many risks when deployed in real life, especially in fields such as self-driving cars, healthcare, and security.
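The effect of poisoned training data can be demonstrated on a toy model. The sketch below (invented data, a simple nearest-centroid classifier) relabels the malicious training samples and plants a decoy, which is enough to flip the model's verdict on an obviously malicious input:

```python
# Toy demonstration of data poisoning: flipping training labels
# shifts a nearest-centroid classifier's decision boundary.
# All data here is invented for illustration.

def centroid(points):
    return sum(points) / len(points)

def train(samples):
    """samples: list of (value, label) pairs; returns per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "malicious"), (9.0, "malicious")]
model = train(clean)
print(predict(model, 7.0))  # → malicious

# Poisoning: relabel the malicious samples and plant a low-value decoy.
poisoned = [(x, "benign") if y == "malicious" else (x, y) for x, y in clean]
poisoned += [(1.5, "malicious")]
model2 = train(poisoned)
print(predict(model2, 7.0))  # → benign (the attack succeeded)
```

Real poisoning attacks are subtler, corrupting only a fraction of a large dataset, but the mechanism is the same: the model faithfully learns whatever the training data says, so damaged data yields damaged behavior.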

Conversely, experts say AI can also be applied to fight cyber attacks. In a June report, the research team at Fortinet directed AutoGPT, an AI agent built on the GPT-4 model, to take steps to strengthen network security. The system receives tasks from humans, breaks them into stages, then launches “AI agents” to analyze and make decisions. It even automatically finds and downloads the necessary security tools while playing the role of a cybersecurity officer.

“Although still in its infancy, AutoGPT shows the promise of AI helping to secure computer systems, with the ability to fine-tune processes and find ways to solve problems without human suggestions,” FortiGuard Labs stated.

To counter hackers applying AI to cyber attacks, security systems need to strengthen continuous monitoring, access control, protection of AI training data, application control, behavioral analysis, and user verification and authentication. At the same time, organizations need to narrow the cybersecurity skills gap and increase the sharing of information and incident-handling experience, thereby weakening cybercriminal networks.

Hoang Giang

