Friday, April 11, 2025

AI expert physician warns clinicians and executives: watch out for AI challenges


Dr. Ronald Rodriguez holds a unique title in healthcare. He is a professor of medical education and director of the country's first dual MD/MS degree program in artificial intelligence, at the University of Texas Health Science Center at San Antonio. The five-year dual degree launched in 2023.

Rodriguez, who also holds a doctorate in cell biology, is at the forefront of AI-driven healthcare transformation. He is well aware of all the positive ways artificial intelligence and automation already are benefiting healthcare. But he also sees some aspects of the technology that should give clinicians and IT executives pause.

This is part one of a two-part interview with Rodriguez. Here he points to areas of AI in healthcare that demand great care from professionals – including places where he believes professionals are getting things wrong. Part two, coming soon, will be in video format and will discuss the physician's groundbreaking work in AI healthcare education.

Q: What are some clinicians potentially doing wrong with generative AI tools today, and how can hospital and health system CIOs and other IT and privacy leaders make sure generative AI is being used properly?

A. They are not effectively protecting protected health information. Many commercial large language models accept the prompts and data sent to their servers and use them for further training. In many cases, providers cut and paste aggregated clinical data and ask a large language model to reorganize, summarize and reformat it.

Unfortunately, a patient's PHI often is embedded in lab reports, imaging reports or prior notes in ways that may not be easily noticeable to the provider. Failure to remove PHI is a Tier 2 HIPAA violation, and each offense can potentially incur a separate fine. IT departments are able to detect when PHI is being cut and pasted and can warn users not to do so, and this is often happening.

However, most of these systems currently do not enforce compliance at the individual level. CIOs and technology leaders at hospitals and health systems can develop PHI-removal tools that protect against these violations. Many LLM providers offer settings to prevent data sharing; however, enforcement of those settings is discretionary and is not guaranteed.
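The kind of PHI-removal tool described here can be sketched very simply. The sketch below uses a few regex patterns as a minimal illustration; a production scrubber would need to cover all 18 HIPAA Safe Harbor identifier categories, typically with a trained named-entity model rather than regexes alone, and every pattern and example value here is hypothetical.

```python
import re

# Illustrative patterns for a few HIPAA identifiers. These are assumptions
# for demonstration only - real deployments cover far more identifier types.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace suspected PHI with placeholders and report what was found,
    so the system can warn the user before text leaves the firewall."""
    found = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, found

note = "Pt MRN: 00123456 seen 04/11/2025, callback 210-555-1234."
clean, flags = redact_phi(note)
print(clean)  # placeholders replace the MRN, date and phone number
print(flags)  # which identifier types were caught
```

A tool like this could run as a pre-submission filter: block or warn on any flagged prompt rather than trusting each user to notice embedded PHI.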

Q: You say: "Our current business model for the use of artificial intelligence is an ecosystem in which each prompt generates a cost based on the number of tokens. This incremental cost currently is modeled in a way that makes it more likely to increase healthcare costs than to reduce them." Please explain what you mean with a clear example that shows how the costs grow.

A. Take DAX and Abridge, systems that listen to the patient-provider interaction, transcribe it and summarize it for use in the note. The costs of these systems are based on actual usage.

These systems make physicians' lives much easier, but there is no way to bill the patient for these additional costs through third-party payers. Instead, the only current option for covering these incremental costs is for providers to see more patients. Seeing more patients means third-party payers will see more claims, which ultimately is reflected in higher premiums, lower benefits or both.

Other systems that automate responses to patients with LLMs can give patients immediate feedback on simple questions, but again at an incremental cost. These costs currently are not reimbursed, and so the result is pressure to see more patients.

Consider a hospital system implementing one of these generative AI tools to help physicians with clinical documentation. A single physician may interact with the AI engine many times per patient visit.

Now multiply that by hundreds or thousands of physicians across the health system, working many shifts, and the cumulative cost of AI use adds up quickly. Even if AI improves documentation efficiency, the operational cost of recurring AI queries can offset or even exceed the savings from reduced administrative work.
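That multiplication is easy to make concrete. Every figure in the back-of-envelope sketch below is a hypothetical assumption for illustration, not a vendor quote; actual per-token prices and usage volumes vary widely by contract.

```python
# Hypothetical per-token cost model for a documentation assistant.
# All numbers are illustrative assumptions, not real pricing.
price_per_1k_tokens = 0.01       # dollars, assumed blended prompt+completion rate
tokens_per_query = 2_000         # assumed transcript chunk plus summary
queries_per_visit = 4            # assumed: transcribe, summarize, revise, code
visits_per_doctor_per_day = 20
doctors = 1_000
working_days_per_year = 250

# Pennies per visit...
cost_per_visit = (tokens_per_query / 1_000) * price_per_1k_tokens * queries_per_visit

# ...become a large recurring line item at system scale.
annual_cost = (cost_per_visit * visits_per_doctor_per_day
               * doctors * working_days_per_year)

print(f"Cost per visit: ${cost_per_visit:.2f}")          # $0.08
print(f"Annual cost, whole system: ${annual_cost:,.0f}")  # $400,000
```

Under these assumptions, eight cents per visit grows into $400,000 a year - and unlike a fixed license fee, the figure scales up with every additional query physicians make.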

For now, AI usage models are pay-as-you-go, unlike established software with fixed license fees. The more an organization integrates AI into daily workflows, the higher the financial burden becomes.

Unless hospitals and providers negotiate favorable pricing structures, implement usage oversight or develop in-house AI systems, they may find that AI adoption leads to escalating operational costs rather than the expected savings.

Q: You told me: "Safeguards need to be put in place before we will realize real improvement in our overall medical errors. Overreliance on AI to correct errors can potentially introduce different types of errors." Please elaborate on the problem and, related to it, discuss the safeguards that are needed.

A. LLMs are susceptible to hallucinations in certain situations. While some providers are very good at avoiding these situations – in fact, we teach our students how to avoid them – many are not even aware of the risk. A new source of medical errors can be introduced if these errors are not caught. One way to protect against this is the use of agentic, AI-specific LLM systems.

These systems double-check information, confirm its truthfulness and use sophisticated methods to minimize errors. Such systems, however, are not built into free LLMs such as ChatGPT or Claude AI. They cost more to use and require more investment in infrastructure.

Infrastructure investments will be required to protect privacy, prevent unintentional disclosure of PHI, and guard against predictable LLM hallucinations and the misinformation in scraped internet data used to train foundational LLMs, which introduces bias. Compliance-enforcement policies will also be needed.

Q: How should hospitals and health systems develop appropriate ethics, guidelines and oversight policies?

A. As AI technology develops quickly, major medical organizations must provide guidance documents and boilerplate policies that can help institutions adopt best practices. This can be accomplished at several levels.

Participation in oversight organizations and medical groups, such as the AMA, the AAMC and government oversight committees, can help establish a common ethical framework for data access and the use of AI.

