The renowned Dana-Farber Cancer Institute has built a secure and private exploratory environment to evaluate, test, and deploy large language models for non-clinical applications such as clinical and basic research and operations.
The provider organization overcame governance, ethical, regulatory, and technical challenges and implemented a secure API to enable its developers to embed AI into their software applications. The organization also trained its employees on using the LLM correctly and safely, retraining and upskilling where necessary, and worked to raise adoption.
Renato Umeton is the Director of AI Operations and Data Science Services at Dana-Farber Cancer Institute. He holds a Ph.D. in mathematics and computer science. I spoke with Umeton about his work in artificial intelligence and to gain insight into his case study session on this topic at the HIMSS AI in Healthcare Forum, scheduled for September 5-6 in Boston. The session will focus on mitigating the risks of LLMs in healthcare.
Q. What are the biggest opportunities – and challenges – for large language models in healthcare today?
A. The topic of the session is the private, secure, and HIPAA-compliant implementation of large language models in healthcare, specifically for employees at the Dana-Farber Cancer Institute. The focus of the session is to discuss the challenges and lessons learned from integrating these advanced AI tools into research and operational tasks while explicitly excluding direct clinical care (e.g., treatment, diagnosis, directing, or informing clinical management).
This is critical in today’s healthcare landscape as AI permeates more and more medical software products, and everyone from doctors to patients to staff can benefit from understanding how to safely and effectively leverage this potential.
In the short term, we are pursuing use cases that improve efficiency. In the long term, we hope that better data and AI will lead to improved practices and patient outcomes.
The process of implementing GPT-4 involved significant ethical, legal, regulatory and technical challenges.
By sharing our experiences and the framework we have developed for AI implementation, we aim to provide insight for other healthcare organizations considering similar implementations. This is especially critical as the industry grapples with the dual imperatives of innovation and patient safety, making it crucial to establish solid governance and guidelines for AI use.
Q. What is an example of your work in action in your organization?
A. The core technology discussed in our session is GPT4DFCI, a private, secure, HIPAA-compliant generative AI tool based on GPT-4 models. You can think of GPT-4 as the central layer of this application. The next layers contain AI models that analyze all data flowing into and out of the model to filter out unsafe content, such as malicious language or copyrighted software code.
Beyond that, there is a layer that logs all user interactions with this technology and allows for auditing. Finally, the outermost layer is a simple user interface similar to ChatGPT, with links to training materials and a user ticketing system, as well as a dedicated wiki page where users can learn more.
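The layered design described above can be sketched in code. This is a minimal, hypothetical illustration, not Dana-Farber's actual implementation: the function names, the keyword-based filter, and the in-memory audit log are all assumptions standing in for production-grade components such as the real GPT-4 API, ML-based content classifiers, and durable audit storage.

```python
import datetime

# Placeholder filter rules; a real deployment would use dedicated
# AI models to screen for unsafe or copyrighted content.
BLOCKED_TERMS = {"malicious", "copyrighted"}

# In-memory audit trail; a real deployment would use durable storage.
audit_log = []

def call_model(prompt: str) -> str:
    """Innermost layer: stand-in for the GPT-4 model call."""
    return f"model response to: {prompt}"

def filter_content(text: str) -> str:
    """Middle layer: screen data moving into or out of the model."""
    if any(term in text.lower() for term in BLOCKED_TERMS):
        raise ValueError("content blocked by safety filter")
    return text

def handle_request(user: str, prompt: str) -> str:
    """Outer layer: record the interaction for auditing, then filter
    the prompt, call the model, and filter the response."""
    audit_log.append({
        "user": user,
        "prompt": prompt,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    safe_prompt = filter_content(prompt)
    return filter_content(call_model(safe_prompt))
```

Note that the audit entry is written before filtering, so even blocked requests leave a trace for review.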
This technology is used to support a variety of non-clinical tasks, such as extracting and searching for information in notes, reports and other documents, as well as automating repetitive tasks and streamlining administrative documentation.
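One of the non-clinical tasks named above, extracting structured information from notes and documents, is commonly done by prompting the model for JSON output and parsing the reply. The sketch below is an illustrative assumption about how such a helper might look; the prompt wording and function names are not from the source.

```python
import json

# Hypothetical prompt template asking the model for structured output.
EXTRACTION_PROMPT = (
    "Extract the following fields from the document and reply only "
    "with JSON: {fields}.\n"
    "Document:\n{document}"
)

def build_extraction_prompt(document: str, fields: list[str]) -> str:
    """Assemble an extraction prompt for the listed fields."""
    return EXTRACTION_PROMPT.format(fields=", ".join(fields),
                                    document=document)

def parse_extraction(response: str) -> dict:
    """Parse the model's JSON reply, failing loudly if it is malformed
    so bad output never silently enters downstream records."""
    return json.loads(response)
```

A caller would send the built prompt through the filtered, audited request path and pass the reply to `parse_extraction`.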
Q. What lessons do you hope session participants will learn and be able to apply in their own organizations?
A. First, we hope that participants will understand the importance of establishing a comprehensive AI governance framework for the careful implementation of AI technologies in healthcare. This includes establishing a multidisciplinary governance committee, such as our AI Governance Committee, to oversee implementation, address ethical issues, and ensure compliance with evolving regulations.
By engaging diverse stakeholders, including legal, clinical, research, technical and bioethical experts, as well as patients, organizations can create policies that balance innovation with patient safety and data privacy.
Second, we aim to have participants recognize the value of phased and controlled implementation of AI technologies. Our experience with GPT4DFCI highlights the potential benefits of limiting clinical use of AI to IRB-approved clinical trials and institute-approved pilots.
This approach allows for iterative improvements based on lessons learned from controlled studies and helps identify and resolve potential issues early. When it comes to nonclinical use cases, there is significant value in providing comprehensive training and support for users so they can learn from each other to apply the technology effectively and responsibly.
We believe that by adopting a cautious and phased AI implementation strategy, other organizations can maximize the benefits of AI while minimizing the associated risks.
