Sunday, January 26, 2025

AI evaluation efforts are uneven in American hospitals


While about two-thirds of American hospitals use AI-assisted predictive models, only 44% of hospitals assess those models for bias, raising concerns about equity in patient care.

Those are the findings of a recent study conducted by the University of Minnesota School of Public Health and published in Health Affairs, which analyzed data from 2,425 hospitals across the country.

The study highlights disparities in AI adoption, noting that hospitals with greater financial resources and technical expertise are better positioned to develop and evaluate AI tools than under-resourced facilities.

The report also found that hospitals primarily use AI tools to predict health trajectories for inpatients, identify high-risk outpatients and improve scheduling.

UMN School of Public Health assistant professor Paige Nong explained that one of the key questions driving her research is how hospitals without extensive financial resources or technical expertise can ensure that the AI tools they adopt are suited to the specific needs of their patient populations.

“We don't want these hospitals stuck between two bad options: using AI without the necessary evaluation and oversight, or not using it at all, even though it could help with some serious organizational challenges,” she said.

She said that using the information provided in the predictive model labels described by the Assistant Secretary for Technology Policy in the HTI-1 rule is one step organizations can take.

These labels give hospitals key information so that even if they cannot build custom models for their patient populations, they can be informed consumers of available tools.

“Even if this information is not readily accessible, they can and should ask their vendors for it,” Nong said.

She acknowledged that there is plenty of room for improvement when it comes to bias.

“First, conducting the kind of local evaluation we discuss in the article is a valuable step to ensure that AI tools work well for all patients,” she said. “Second, looking at the predictors that drive the output is helpful.”

If organizations see that predictors may be biased, such as income or religious identity, they can avoid those tools.
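For illustration only, here is a minimal sketch of those two checks in Python; the feature names, subgroup labels, and data are hypothetical, and it assumes a scikit-learn-style model rather than any specific tool the study examined:

```python
# Hypothetical sketch: screen a model's input features for sensitive
# attributes, then compare discrimination (AUC) across patient subgroups
# as a rough local evaluation. All names and data below are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Attributes an organization might choose to treat as sensitive.
SENSITIVE_ATTRIBUTES = {"income", "religion", "race", "zip_code"}

def flag_sensitive_predictors(feature_names):
    """Return any input features that appear on the sensitive list."""
    return [name for name in feature_names if name.lower() in SENSITIVE_ATTRIBUTES]

def auc_by_subgroup(model, X, y, groups):
    """Compute AUC separately for each patient subgroup."""
    scores = model.predict_proba(X)[:, 1]
    return {
        g: roc_auc_score(y[groups == g], scores[groups == g])
        for g in np.unique(groups)
    }

# Synthetic stand-in for a hospital's local records.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
groups = rng.choice(["group_a", "group_b"], size=500)

model = LogisticRegression().fit(X, y)
print(flag_sensitive_predictors(["age", "income", "lab_value"]))  # ['income']
print(auc_by_subgroup(model, X, y, groups))  # per-subgroup AUC values
```

A large gap between subgroup AUCs, or a sensitive attribute among the predictors, would be a signal to question the tool before deploying it.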

She added that it is important to think carefully about what the tool's output means for patients.

“If the model predicts missed appointments, for example, how can decision-making around that tool be fair and ethical, rather than reinforcing bias?” she said.

Nong said she is excited to see how healthcare workers can bridge the digital divide between well-funded and under-resourced hospitals when it comes to the ability to adopt and evaluate AI.

“On the policy side, we describe various examples of valuable collaborations and partnerships in the article, such as regional extension centers, AHRQ's patient safety organizations and others,” she said.

She noted that the Health AI Partnership is one group trying to provide this kind of technical support.

“On the practice side, health IT professionals can engage with their communities and professional associations or networks to identify the needs of under-resourced care organizations and provide important information and support,” Nong said.
