Q: How can artificial intelligence play a role in cyber attacks, and what does an adversary's artificial intelligence look like?
A: Cyber attacks exist along a spectrum of competence. At the lowest end there are so-called script kiddies, threat actors who spray known exploits and malware in the hope of finding a network or device that has not practiced good cyber hygiene. In the middle there are cyber mercenaries, better resourced and organized, who prey on enterprises with ransomware or extortion. And, at the highest level, there are groups, sometimes state-sponsored, that can launch the hardest-to-detect "advanced persistent threats" (APTs).
Think about the specialized, malicious intelligence that attackers marshal: this is adversarial intelligence. Attackers create highly technical tools that let them break into code, they choose the right tool for the target, and their attacks unfold in many steps. They learn something at every stage, integrate it into their situational awareness, and then decide what to do next. Sophisticated APTs can strategically choose their target and develop a slow, low-visibility plan so subtle that its execution sneaks past our defenses. They can even plant misleading evidence pointing to a different hacker!
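The multi-step loop described here (observe, integrate into situational awareness, decide the next move) can be sketched in a few lines. This is an illustrative toy, not the author's system; the tool names, success probabilities, and decision rule are all invented for the sketch.

```python
import random

# Invented tool success probabilities (illustrative only).
TOOLS = {"scan": 0.9, "phish": 0.5, "exploit": 0.4}

def choose_tool(awareness):
    """Decide: pick the next step based on current situational awareness."""
    if awareness["hosts_seen"] == 0:
        return "scan"                      # reconnaissance first
    if awareness["footholds"] == 0:
        return "phish" if awareness["hosts_seen"] < 3 else "exploit"
    return "exploit"                       # deepen access

def run_campaign(steps=5, seed=0):
    rng = random.Random(seed)
    awareness = {"hosts_seen": 0, "footholds": 0}
    log = []
    for _ in range(steps):
        tool = choose_tool(awareness)
        success = rng.random() < TOOLS[tool]
        # Integrate what was learned back into situational awareness.
        if success:
            if tool == "scan":
                awareness["hosts_seen"] += rng.randint(1, 5)
            else:
                awareness["footholds"] += 1
        log.append((tool, success))
    return awareness, log
```

Each iteration mirrors one campaign stage: the agent acts, observes the outcome, updates its state, and lets that state drive the next decision.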
My research goal is to replicate this particular kind of offensive, adversary-oriented intelligence (modeled on the intelligence of human threat actors). I apply artificial intelligence and machine learning to design cyber agents that model the adversarial behavior of human attackers. I also model the learning and adaptation that characterize cyber arms races.
I should also note that cyber defenses are quite complicated. They have evolved in complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing logs, raising the appropriate alerts, and then resolving incidents in incident-response systems. They must stay constantly vigilant to defend a truly gigantic attack surface that is hard to track and highly dynamic. On the other side of the attacker-versus-defender competition, my team and I also bring artificial intelligence to bear on these various defensive fronts.
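The defensive pipeline named here (detectors over processed logs raise alerts that feed incident response) can be illustrated with a toy sketch. This is not the author's system; the log format, the list of suspicious actions, and the quarantine response are all invented.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    reason: str

def process_logs(raw_lines):
    """Parse 'host action' log lines into structured events."""
    return [dict(zip(("host", "action"), line.split())) for line in raw_lines]

def detect(events, suspicious=frozenset({"exfiltrate", "privilege_escalation"})):
    """Detector: flag events whose action is on the suspicious list."""
    return [Alert(e["host"], e["action"]) for e in events if e["action"] in suspicious]

def respond(alerts):
    """Incident response: quarantine any host that triggered an alert."""
    return {a.host: "quarantined" for a in alerts}

logs = ["web01 login", "db02 exfiltrate", "web01 privilege_escalation"]
actions = respond(detect(process_logs(logs)))
```

Here `actions` maps each offending host to the response taken; real pipelines add many stages (enrichment, correlation, analyst triage) between detection and response.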
Another thing stands out about adversarial intelligence: both sides, cat and mouse, can learn from competing with each other! Their skills sharpen as they lock into an arms race. One gets better, and then, to save its skin, the other gets better too. This tit-for-tat improvement goes on and up! We are working on replicating these cyber arms races with our agents.
Q: What are some examples from our daily lives in which artificial intelligence is used for security? How can we apply adversarial intelligence agents to get ahead of threat actors?
A: Machine learning has been used in many ways for cybersecurity. There are all kinds of detectors that filter threats; they are tuned to anomalous behavior and, for example, to recognizable types of malware. There are AI-supported triage systems. Some of the spam-protection tools on your mobile phone are powered by AI!
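One of the simplest detector ideas mentioned here, anomaly detection, can be sketched as a threshold on deviation from a learned baseline. The numbers are illustrative; real detectors are far more elaborate than a z-score test.

```python
import statistics

def anomalies(baseline, observed, k=3.0):
    """Flag observed values more than k standard deviations from the
    baseline mean (a minimal statistical anomaly detector)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mu) > k * sigma]

# E.g. logins per hour: 500 is far outside the normal range.
flagged = anomalies([10, 12, 11, 13, 9, 10], [11, 500, 12])
```

With this baseline (mean about 10.8, standard deviation about 1.5), only the spike of 500 trips the 3-sigma threshold.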
With my team, I design AI-driven cyber attack agents that can do what threat actors do. We engineer artificial intelligence to give our agents computing skills and programming knowledge, so that they can process all kinds of cyber knowledge, plan steps, and make informed decisions over the course of a campaign.
Clever adversarial agents (like our AI attack agents) can be used as sparring partners when testing network defenses. A lot of effort goes into testing a network's resilience to attack, and AI can lend a hand there. Moreover, when we add machine learning to our agents and to our defenses and let them play out the arms race, we can examine, analyze, and learn to predict what countermeasures to deploy when we take defensive action.
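The arms-race dynamic described in this answer can be shown with a minimal sketch using made-up skill numbers: whichever side loses a round adapts, so both skills ratchet upward tit-for-tat.

```python
def arms_race(rounds=10, attacker=1.0, defender=1.0, step=0.5):
    """Toy co-adaptation loop: the losing side improves each round."""
    history = []
    for _ in range(rounds):
        if attacker > defender:      # attack succeeded: defender adapts
            defender += step
        else:                        # attack blocked: attacker adapts
            attacker += step
        history.append((attacker, defender))
    return history

trace = arms_race()
```

Starting both sides at skill 1.0, the improvements alternate, and after ten rounds both have climbed to 3.5: neither side ends up ahead, but both end up far more capable, which is exactly the escalation the arms race produces.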
Q: What new risks keep emerging, and how do attackers adapt to them?
A: There seems to be no end of new software and new system configurations. With each release there are gaps the attacker can exploit. These can be instances of code weaknesses that are already documented, or they may be brand new.
New configurations bring the risk of misconfiguration or open new avenues of attack. We could not have imagined ransomware back when we were dealing with denial-of-service attacks. Now we juggle espionage and ransomware alongside IP [intellectual property] theft. All of our critical infrastructure, including telecommunications, financial, health, municipal, energy, and water networks, is a target.
Fortunately, a lot of effort has been devoted to defending critical infrastructure. We will need to translate that into AI-based products and services that automate some of this effort. And, of course, we will keep designing smarter and smarter adversarial agents, to keep us on our toes and to help us practice defending against cyber attackers.