Friday, March 13, 2026

Generative AI is this CISO’s ‘really eager intern’


In the first installment of this in-depth interview, Mass General Brigham Chief Information Security Officer David Heaney explained defensive and offensive applications of AI in healthcare. He said that understanding the environment, knowing where the controls are and being great at the basics are even more critical when AI is involved.

Today, Heaney shares best practices that healthcare CISOs and CIOs can implement to secure their use of AI, how his team is using it, how he is helping his team get up to speed on AI security, the human side of AI and cybersecurity, and the types of AI he's using to combat cyberattacks.

Q. What are some best practices that healthcare CISOs and CIOs can put in place to safeguard the use of AI? And how are you and your team using them at Mass General Brigham?

A. It's critical to start with the way this question is framed, which is to understand that the capabilities of AI will drive incredible changes in the way we care for patients, discover new approaches, and much more in our industry.

It's really about how we support that and how we help secure it. As I mentioned in part one, it's really critical to make sure we're doing the basics right. So if there's an AI service that's using our data or running in our environment, we have the same requirements for risk assessment, business associate agreements and any other legal agreements that we would have for non-AI services.

Because at some level we're talking about another application that needs to be controlled just like any other application in the environment, including restrictions on the use of unapproved applications. None of that is to say there aren't AI-specific issues we want to address; there are a few that come to mind. There are certainly additional questions around data usage beyond the standard legal agreements I just mentioned.

For example, do you want your organization's data to be used to train your vendor's AI models down the line? The security of the AI model itself is critical. Organizations need to consider options for ongoing model validation to ensure it keeps producing correct results in all scenarios, and this can be part of the AI governance I mentioned in part one.
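To make the ongoing-validation idea concrete, here is a minimal sketch of a recurring check against a curated set of reference cases. It is purely illustrative; the predict_fn callable, the reference cases and the accuracy threshold are hypothetical placeholders, not a description of any tooling MGB uses.

```python
# Illustrative sketch: periodically re-run a curated set of reference cases
# against a deployed model and alert if agreement drops below a threshold.
# predict_fn, REFERENCE_CASES and THRESHOLD are hypothetical placeholders.
from typing import Callable, List, Tuple

REFERENCE_CASES: List[Tuple[dict, str]] = [
    # (model input, expected label) pairs maintained by the governance team
    ({"feature_a": 0.2, "feature_b": 1.7}, "benign"),
    ({"feature_a": 9.4, "feature_b": 0.1}, "suspicious"),
]

THRESHOLD = 0.95  # minimum acceptable agreement with expected results


def validate_model(predict_fn: Callable[[dict], str]) -> bool:
    """Return True if the model still matches expectations on the reference set."""
    correct = sum(
        1 for inputs, expected in REFERENCE_CASES if predict_fn(inputs) == expected
    )
    accuracy = correct / len(REFERENCE_CASES)
    if accuracy < THRESHOLD:
        print(f"ALERT: model agreement {accuracy:.0%} is below {THRESHOLD:.0%}")
        return False
    return True
```

Adversarial testing, which Heaney turns to next, extends the same idea by adding deliberately malformed or perturbed inputs to the reference set.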

There's also adversarial model testing: if we put in bad input, does that change the way the output comes out? And then one area of the fundamentals whose importance I've actually seen shift a bit in this environment is just how easy it is to adopt so many of these tools.

For example, look at meeting note-taking services like Otter AI or Read AI, and there are so many others. These services are incentivized to make adoption easy and hassle-free, and they do a great job of that.

While the concerns about how these services are used, what data they can access and so on don't change, the combination of how easy they are for our end users to adopt and, honestly, the almost viral nature of this and a few other apps makes it worth focusing on how we implement different apps, especially AI-based ones.

Q. How quickly are you evolving your team in terms of securing AI and securing against AI? What is the human factor here?

A. That's huge. One of the most critical values for my security team is curiosity. I would say that's the one skill that drives everything we do in cybersecurity. That thing where you see something that's a little odd and you say, "I wonder why that happened?" And you start digging.

That's the beginning of pretty much every improvement we make in the industry. So a huge part of the answer is having curious team members who are excited about this, want to learn about it on their own, and just go out and play with some of these tools.

I try to lead by example in this area by sharing how I have used different tools to make my work easier. But nothing can replace this curiosity. At MGB, in our digital team, we try to dedicate one day a month to learning and provide access to different training services with relevant content in this area. But the challenge is that technology is changing faster than training can keep up.

So there's really no substitute for just getting out there and having fun with the technology. But also, perhaps with a bit of irony, one of my favorite applications of generative AI is learning. One of the things I do is use a prompt that says something like, "Create a table of contents for a book titled X," where X is any topic I want to learn about. I usually also include a little bit of information about the author and the purpose of the book.

This creates a great outline of how to study the topic. And then you can ask your AI friend, “Hey, can you expand on chapter one? And what does that mean?” Or potentially go to other sources or other forums to find relevant content there.
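For readers who want to try this learning pattern programmatically rather than in a chat window, here is a minimal sketch assuming the OpenAI Python client; the model name, topic and prompt wording are placeholder choices, not the exact prompt Heaney uses.

```python
# Sketch of the "table of contents" learning prompt, assuming the OpenAI
# Python client. Model name, topic and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "zero trust architecture for hospital networks"
prompt = (
    f"Create a table of contents for a book titled '{topic}'. "
    "Assume the author is a healthcare security practitioner and the purpose "
    "of the book is to teach the fundamentals to a newcomer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Follow-up questions such as "expand on chapter one" can then be sent in the same conversation by appending to the messages list.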

Q. What are some of the types of AI that you use, without giving away any secrets, to combat cyberattacks? Could you explain more about how these types of AI work and why you like them?

A. Our overall digital strategy at MGB is really focused on leveraging platforms from our technology providers. To pick up a little bit on the vendor question from part one, we're focused on working with those companies to develop the most valuable capabilities, many of which will be AI-driven.

And just to give you a picture of what that looks like, at least in broad strokes, without giving away the golden goose, so to speak: our endpoint protection tools use different AI algorithms to identify potentially malicious behavior, and then all of those endpoints send logs to a central collection point where a combination of rules-based and AI-based analysis looks for broader trends.
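As a generic illustration of that rules-plus-AI pattern over centralized logs, and not a description of MGB's actual pipeline, the sketch below flags endpoint events with a simple hard-threshold rule and with an unsupervised anomaly detector; the field names, thresholds and toy events are invented for the example.

```python
# Illustrative sketch of combining a rules-based check with an unsupervised
# anomaly detector over centralized endpoint logs. Field names, thresholds
# and the toy events are invented for the example.
from sklearn.ensemble import IsolationForest

events = [
    # one row per endpoint event: [failed_logins, bytes_out_mb, new_processes]
    [0, 12.0, 3],
    [1, 9.5, 2],
    [0, 11.2, 4],
    [14, 420.0, 37],   # looks unusual relative to the rest
]

# Rule-based pass: anything over a hard threshold is flagged immediately.
rule_hits = [i for i, (logins, _, _) in enumerate(events) if logins >= 10]

# AI-based pass: an isolation forest scores events that deviate from the norm.
model = IsolationForest(contamination=0.1, random_state=0).fit(events)
anomaly_hits = [i for i, label in enumerate(model.predict(events)) if label == -1]

print("rule-based flags:", rule_hits)
print("anomaly flags:   ", anomaly_hits)
```

In a real deployment, both kinds of flags would typically feed an analyst queue for review rather than drive action automatically.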

That analysis looks not just at one system but across the entire environment: are there trends that might indicate increased risk? We also have our identity governance suite, the tool used to grant and remove access in the environment. That suite has different built-in capabilities to identify potential risk, to review combinations of access that might already be in place, and even to review access requests as they come in to prevent risky access from being granted in the first place.
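The access-combination review he describes can be pictured as a check against a list of disallowed entitlement pairs, sometimes called toxic combinations. The sketch below is a generic illustration of that idea, not the identity governance product MGB uses, and the entitlement names are made up.

```python
# Generic illustration of flagging risky ("toxic") combinations of access
# during a review or when a new request arrives. Entitlement names are made up.
TOXIC_COMBINATIONS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"prescribe_medication", "dispense_medication"}),
}


def risky_combinations(current_access: set, requested: str) -> list:
    """Return any toxic combinations that granting `requested` would create."""
    proposed = current_access | {requested}
    return [combo for combo in TOXIC_COMBINATIONS if combo <= proposed]


# Example: a user who can already approve payments requests vendor creation.
flags = risky_combinations({"approve_payment", "view_reports"}, "create_vendor")
if flags:
    print("Request flagged for review:", [sorted(c) for c in flags])
```

Running the same check over existing assignments covers the "already in place" review, while running it on each incoming request blocks the risky grant up front.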

So that's the world of the platforms themselves and the technologies built into them. But beyond that, going back to how we can use generative AI in some of these areas, we're using it to accelerate all sorts of tasks we used to do manually.

The team has saved, I can't give you an exact number, but I will say a lot of time using generative AI to write custom scripts for triage, for forensics, for systems remediation. It's not perfect. AI gets us, I don't know, 80% of the way there, but our analysts finalize the script and do it a lot faster than if they were creating it from scratch.

Likewise, we use some of these AI tools to create queries that feed into our other tools. Our junior analysts are much faster because we give them access to these tools to help them use the various other technologies we have deployed more effectively.

Our senior analysts are simply more productive. They already know how to do many of these things, but it’s always better to start from 80% than from zero.
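As a rough picture of that "start from 80%" workflow applied to query generation, here is a sketch that again assumes the OpenAI Python client; the query language, prompt and review step are illustrative rather than a description of MGB's process.

```python
# Sketch: ask a model to draft a detection query from a plain-English request,
# then hand the draft to an analyst for review before it is ever run.
# The query language (Splunk SPL) and the prompt are illustrative only.
from openai import OpenAI

client = OpenAI()
request = "accounts with more than 10 failed logins in the last hour"

draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Write a Splunk SPL query that finds {request}. "
                   "Return only the query, with no explanation.",
    }],
).choices[0].message.content

print("Draft query for analyst review (not auto-run):")
print(draft)
```

The analyst review at the end is the point: the model produces the starting draft, and a person finishes and verifies it.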

Basically, I describe it as my really eager intern. I can ask it to do anything and it will come back with something between a really good starting point and a potentially great, complete answer. But I certainly wouldn't go and use that answer without doing my own checking and finishing it first.
