Many in healthcare have high hopes for artificial intelligence to do everything from easing clinician burnout to accelerating medical research. The challenge with AI in healthcare is not a lack of excitement – there is plenty of that.
Rather, a key challenge for IT leaders is clarity – a clear-eyed view of what AI actually can achieve. Where does AI genuinely deliver ROI? What is safe to deploy now? And how do you manage risk, governance and long-term value without chasing the hype?
Dr. Justin Norden is a Stanford professor and cofounder and CEO of Qualified Health, a vendor of generative AI infrastructure for healthcare. The company aims to give hospitals and health systems the technology, training and support they need to launch generative AI and scale it safely across their organizations.
Emphasis on the word caution
He cautions healthcare organization leaders to work through questions of ROI, safety, risk, governance and long-term value as they approach AI systems and tools. And he puts the emphasis on the word caution.
"We now are almost two and a half years past the release of ChatGPT, and while the hype around generative AI in healthcare is louder than ever, it's time to ask: Where is the real ROI?" Norden said. "Despite all the excitement, even the most widely adopted use case – ambient documentation – has not delivered consistent financial returns across provider groups. Some physicians love it, but adoption remains limited and uneven.
"Meanwhile, dramatic headlines about AI outperforming doctors on diagnostic tasks grab attention, but they miss the point: These clinical use cases are not what will determine AI's near-term impact on healthcare," he continued.
He contended the real AI value today lies in healthcare operations.
"That's where we are starting to see ROI," he said. "AI now can unlock insights from unstructured data – most of what healthcare produces. Tasks like quality reporting, improving revenue cycle workflows and streamlining patient outreach may not seem flashy, but they are essential and time-consuming.
"And finally, AI is able to automate what has been buried in PDFs, faxes and clinical notes," he continued. "These behind-the-scenes improvements may look small, but together they add up to significant, scalable impact."
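To make that concrete, here is a minimal Python sketch of the kind of unstructured-data extraction Norden is describing – pulling quality-reporting fields out of a free-text clinical note. The `call_llm` helper and the field names are hypothetical stand-ins for illustration, not any vendor's actual API or a standard schema.

```python
import json

# Hypothetical helper: stands in for whatever HIPAA-compliant, internally
# approved LLM endpoint an organization uses. Not a real library call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your approved internal LLM endpoint")

# Fields a quality-reporting team might want from a free-text note.
# These names are illustrative only.
EXTRACTION_PROMPT = """Extract the following from the clinical note as JSON:
- smoking_status: "current", "former", "never" or "unknown"
- a1c_value: most recent HbA1c as a number, or null
- follow_up_scheduled: true or false
Note:
{note}
Return only the JSON object."""

def extract_quality_fields(note_text: str) -> dict:
    """Turn one unstructured note into structured, reportable fields."""
    raw = call_llm(EXTRACTION_PROMPT.format(note=note_text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Model output is not guaranteed to be valid JSON; flag for human review.
        return {"error": "unparseable", "raw_output": raw}
```

The same pattern generalizes to faxes and PDFs once their text has been extracted, which is why these unglamorous back-office tasks are where the scalable impact he describes tends to show up.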
Ideas from the people closest to the work
Many people in health IT and healthcare generally are looking for the one "killer app" that will change everything. But the real transformation comes from hundreds – or thousands – of small, practical use cases embedded in everyday work. And the best ideas come not from the top, but from the people closest to the work, Norden said.
"Doctors and nurses already are using AI – just unofficially, on personal devices or through workarounds," he noted. "That tells us two things: There is demand, and there is risk. The path forward is clear – bring AI above the table. Make it secure, HIPAA-compliant and accessible so we can turn this quiet revolution into durable progress across the entire system."
On another front, when it comes to implementing AI safely in healthcare today, Norden said it is critical to start by recognizing what is not safe – because that is where many organizations remain exposed, whether they know it or not. One of the most pressing problems is staff using personal AI accounts to process sensitive patient information – and this is more common than many realize.
"Talk to leaders across the country and you will hear everything from 'We know it's happening, but we look the other way' to 'We'll deal with it if something happens,'" he said. "Some even operate under a quiet 'don't ask, don't tell' policy. But none of these are viable long-term strategies. We saw this before with tracking pixels and Google ads, where privacy violations surfaced and lawsuits followed. The same likely will be true with AI. The legal and reputational risks are too high.
"Another area that requires caution is public-facing AI chat tools," he continued. "While demos can be impressive, these systems are susceptible to jailbreaking. We have seen AI tools manipulated into creating inappropriate, harmful or even dangerous content, often bypassing the systems' intended safeguards entirely. In clinical settings, that can mean anything from misinformation to data leaks to even harmful patient interactions."
Watch out for the open internet
He added that the risk increases exponentially when these models are connected to the open internet.
"Bad actors can plant malicious content online designed to influence AI behavior, creating serious cybersecurity threats," he said. "At best, this leads to headaches. At worst, it could mean data breaches or ransomware attacks that bring entire systems to a halt.
"The safer path forward begins with deploying AI internally, in secure, HIPAA-compliant environments with humans in the loop," he continued. "This ensures data is used safely and that people still sign off on the actions AI systems take. Early AI applications should focus on operational areas – streamlining administrative tasks, improving workflows and reducing friction – areas that offer ROI without introducing clinical risk."
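As an illustration of that human-in-the-loop pattern, the sketch below keeps AI output inert until a named reviewer signs off. The `DraftAction` structure and `approve` step are assumptions made for illustration, not a description of any particular product; the point is simply that nothing reaches a patient or a system of record straight from the model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DraftAction:
    """An AI-generated draft that stays inert until a human approves it."""
    description: str          # e.g. "Draft patient outreach message"
    ai_output: str            # the generated content
    approved_by: str | None = None
    approved_at: datetime | None = None

    @property
    def executable(self) -> bool:
        # Only a recorded human sign-off makes the action eligible to run.
        return self.approved_by is not None

def approve(action: DraftAction, reviewer: str) -> DraftAction:
    """Record the human sign-off before any downstream system acts on it."""
    action.approved_by = reviewer
    action.approved_at = datetime.now(timezone.utc)
    return action

# Usage: the AI proposes, a person disposes.
draft = DraftAction("Draft patient outreach message", ai_output="...")
assert not draft.executable            # cannot act straight from the model
approve(draft, reviewer="j.smith")     # explicit, attributable sign-off
assert draft.executable                # only now may it execute
```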
Norden also offers careful advice on managing risk, governance and long-term value from AI without chasing the hype – something he calls one of the biggest challenges in healthcare today.
"There is a natural hesitancy, and rightly so, considering how much is still unknown," he said. "That's why many health systems are stuck in caution mode – launching pilots, running internal side projects and experimenting without a clear path forward.
"Moving from 'this looks promising' to 'this is safe and scalable' begins with clear leadership and direction – deciding which tools to use and how we should measure success before we start," he continued. "What makes this hard right now is that without clear direction for our workforce, people turn to external public tools and under-the-table use. The workforce should have access to these tools."
Much more than safety
But security is not enough.
"Too often, the safer tools also are clunkier or less helpful, which pushes people back to the public options," he explained. "We need to make internal tools both safe and genuinely more valuable. That means embedding AI into real workflows and enriching it with internal data, so it is not just compliant but indispensable.
"As use grows across the organization, governance must scale with it," he continued. "That includes usage tracking, interaction monitoring and user education – not to police people, but to guide safe, responsible use. If someone tries to use AI for high-risk tasks, such as medication dosing, we need systems that catch and redirect that behavior early."
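A governance layer of the kind he describes might screen prompts before they ever reach the model. The keyword check below is a deliberately simplified stand-in – a production system likely would use a trained classifier – and the terms, logger name and function are illustrative assumptions only.

```python
import logging

logger = logging.getLogger("ai_governance")

# Simplified stand-in for a real policy engine; a production system would
# likely use a trained classifier rather than keywords. Terms are examples.
HIGH_RISK_TERMS = ("dose", "dosing", "dosage", "mg/kg", "titrate")

def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may proceed; log and redirect if high-risk."""
    lowered = prompt.lower()
    if any(term in lowered for term in HIGH_RISK_TERMS):
        # Catch and redirect the behavior early - educate rather than police.
        logger.warning("high-risk AI use by %s: possible dosing question", user)
        print("This tool isn't approved for dosing decisions - "
              "please use the pharmacy decision-support system.")
        return False
    logger.info("prompt from %s passed screening", user)
    return True
```

The log trail is as important as the block itself: it gives governance teams the usage visibility Norden says must scale alongside adoption.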
He added that, ultimately, long-term value comes from building a repeatable, scalable process.
"That means structured pilots, performance thresholds and infrastructure that helps governance teams track and expand what works," he said. "With strong tools, smart policies and clear leadership priorities, we can move past one-off experiments and toward sustainable, system-wide transformation."
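One way to make "performance thresholds" operational is to declare the success criteria before a pilot starts and gate any scale-up on them. The metric names and numbers in this sketch are invented purely for illustration, not benchmarks from Norden or Qualified Health.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PilotThresholds:
    """Success criteria declared before the pilot starts."""
    min_weekly_active_users: int
    min_task_accuracy: float      # fraction of outputs accepted by reviewers
    max_escalation_rate: float    # fraction of interactions needing human rescue

def ready_to_scale(bar: PilotThresholds, weekly_active_users: int,
                   task_accuracy: float, escalation_rate: float) -> bool:
    """A pilot graduates only if it clears every pre-declared threshold."""
    return (weekly_active_users >= bar.min_weekly_active_users
            and task_accuracy >= bar.min_task_accuracy
            and escalation_rate <= bar.max_escalation_rate)

# Example figures are invented for illustration.
bar = PilotThresholds(min_weekly_active_users=50,
                      min_task_accuracy=0.90,
                      max_escalation_rate=0.05)
print(ready_to_scale(bar, weekly_active_users=62,
                     task_accuracy=0.93, escalation_rate=0.03))  # True
```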
Avoiding typical mistakes
How can hospitals and health systems avoid the common mistakes that stall progress? Norden said there are several ways.
"Right now, when we talk with healthcare leaders, we see most AI strategies fall into one of four buckets – waiting for the EHR vendor to come up with something, banning tools like ChatGPT, buying a point solution such as ambient documentation, or trying to build everything themselves," Norden said. "All of these approaches have some logic behind them, but they often miss the bigger picture.
"What you really need is a clear, shared vision across the organization that AI is coming, that it will change how healthcare is delivered, and that we must start preparing for that future now," he continued. "Without that buy-in, teams work in silos, unsure where to focus, and progress stalls."
Another common trap is trying to do too much at once.
"We've all seen systems chasing dozens of pilots with different vendors, spreading their time and resources thin," Norden said. "The result is not enough traction in any one place to make a meaningful impact. What works better is picking a few high-priority areas where AI can make an immediate difference and investing in those teams with real resources and leadership support.
"It's about making fewer, smarter bets and giving those teams the tools, data and clarity they need to succeed," he added. "That focused approach builds momentum and makes it easier to scale what works."
Don’t forget people
And finally, Norden said that you can’t talk about avoiding mistakes without talking about people.
"Most of our workforce already uses AI tools in their personal lives and increasingly is bringing them to work," he noted. "If we ignore that or try to shut it down, we miss a huge opportunity. Instead, we have to lean into it – giving people safe, sanctioned tools to experiment with and teaching them how to use AI effectively and responsibly.
"Education and training can't be one-time events; they must be an ongoing part of how we support our teams," he concluded. "The future of AI in healthcare is not just about the technology – it's about empowering our people to use it well. When leadership brings everyone along, that's when real transformation happens."