Etay Maor, the main safety strategist at Cato Networks, says on Wednesday at Himss25.
Photo: Jeff Lagasse/Healthcare Finance News
Las Vegas – artificial intelligence in healthcare is associated with a lot of noise, but there are also many fears – how it can be used to attack and how it can be abused.
Etay Maor, the main safety strategist at Cato Networks, said on Wednesday at Himss25 in Las Vegas, that hospital leaders and clinicists should be aware of the potential risk and traps of artificial intelligence, especially in relation to potential hacking and fraud.
“I think that artificial intelligence is not very close to replacing us completely,” said Maor. “However, those who know how to use AI will replace those who do not know how to use AI.”
One of the main problems, in healthcare or any other industry, is that the bar has been reduced to potential hazard entities. Once someone had to have deep knowledge about coding and hacking to attack computer systems. Then it became possible to buy malware from the actors of the threat in the dim network. And then criminal services became popular in the dim network, and companies began to offer attacks for rent.
The strap has been lowered with each of these stages. But now the bar is at the lowest level, because malicious actors can now have artificial intelligence to do a filthy job for them.
The key to hospital and clinician leaders is the employment of staff with deep knowledge about manipulating the current AI models. Hackers, said Maor, looking for gaps for attack.
One of the common methods they operate is called poisoning with feedback when they are deliberately incorrectly managed by generative AI models such as chatgpt. At the root, it is a basic tactic: when the model generates the answer to the question or request, the actuator simply tells the model that it is bad, or introduces suggestions or corrections to the answer that “confuses” AI. Because the users of these models basically “train” their, a malicious actor can mislead him.
This may be in the form of text or images. Maor shared the history history in which he sent a photo of London to Chatgpt and asked to describe the image. She gave a senseless answer, because a very diminutive text set in the picture, naked to the human eye, but legible by AI, said.
Many healthcare leaders willingly accepted AI because of the perceived benefits – according to Medical economyOne of the most significant of these benefits is better diagnostic speed and accuracy, which can make it easier for suppliers to diagnose and treat diseases. AI can be used to analyze x -rays, MRI scans and other medical images, for example to identify patterns and anomalies that a person could miss.
However, medical economics indicated that there is a potential risk, especially when it comes to security and privacy. One of the biggest threats is the potential of data violation, because gigantic amounts of patients are often the goals for cyber criminals. Other types of unique AI attacks include feedback poisoning and the extraction of the model in which the opponent can extract enough information about the algorithm to create a replacement model.
Meor advised the leaders of hospitals and AI teams to be vigilant and remain one step ahead of technology.
“If you don’t know how to use artificial intelligence, those who do it will benefit,” he said.
He is the editor of Newscare Finance News.
E -Mail: jlagasse@himss.org
Newscare Finance News to Him Media publication.