Google did not respond to a request for comment.
In 2023, security researchers at Trend Micro got ChatGPT to generate malicious code by prompting it into the role of a security researcher and pentester. ChatGPT would then happily generate PowerShell scripts based on databases of malicious code.
“You can use it to create malware,” Moussouris says. “The easiest way to get around the safeguards put in place by the makers of the AI models is to say you’re competing in a capture-the-flag exercise, and it will happily generate malicious code for you.”
Unsophisticated actors such as script kiddies are a perennial problem in the world of cybersecurity, and AI can raise their profile. “It lowers the barrier to entry to cybercrime,” Hayley Benedict, a cyber intelligence analyst at RANE, tells WIRED.
But the real threat, she says, may come from established hacking groups that use AI to further enhance their already formidable capabilities.
“It’s the hackers who already have the capabilities and already have these operations,” she says. “It’s being able to drastically scale up these cybercriminal operations, and they can create the malicious code a lot faster.”
Moussouris agrees. “The acceleration is what is going to make it extremely difficult to control,” she says.
Hunted Labs’ Smith also says the real threat of AI-generated code lies in the hands of someone who already knows the code inside and out and uses it to scale up an attack. “When you’re working with somebody who has deep experience and you combine that with ‘Hey, I can do things a lot faster that otherwise would have taken me a couple of days or three days, and now it takes me 30 minutes,’ that’s a really interesting and dynamic part of the situation,” he says.
According to Smith, an experienced hacker could design a system that defeats multiple security protections and learns as it goes. Such a piece of malware would rewrite its malicious payload as it learns on the fly. “That would be absolutely crazy and difficult to triage,” he says.
Smith imagines a world in which 20 zero-day events happen all at once. “That makes it a little bit more scary,” he says.
Moussouris says the tools to make that kind of attack a reality exist now. “They are good enough in the hands of a good enough operator,” she says, but AI is not yet good enough for an inexperienced hacker to operate them hands-off.
“We’re not quite there in terms of AI being able to fully take over the function of a human in offensive security,” she says.
The primal fear that chatbot-generated code sparks is that anyone will be able to do it, but the reality is that a sophisticated actor with deep knowledge of existing code is much more frightening. XBOW may be the closest thing to an autonomous “hacker” that exists in the wild, and it is the creation of a team of more than 20 skilled people whose previous work experience includes GitHub, Microsoft, and a half dozen assorted security companies.
It also points to another truth: “The best defense against a bad guy with AI is a good guy with AI,” Benedict says.
For Moussouris, the use of AI by both black hats and white hats is simply the latest evolution of a cybersecurity arms race she has watched unfold over 30 years. “It went from ‘I’m going to do this hack manually or create my own custom exploit,’” she says.