Friday, March 13, 2026

Texas Attorney General Settles Lawsuit With Clinical GenAI Company

Texas Attorney General Ken Paxton announced a settlement with AI developer Pieces Technologies, resolving allegations that the company’s generative AI tools put patient safety at risk by overstating their accuracy.

WHY IS THIS IMPORTANT

The Irving, Texas-based company uses generative AI to summarize real-time electronic health record data about patients’ conditions and treatments. Its software is used in at least four Texas hospitals, according to the settlement.

According to the company, its products’ “severe hallucination rate” is less than one in 100,000, per the settlement agreement.

While Pieces denies any wrongdoing or liability and says it did not violate the Texas Deceptive Trade Practices-Consumer Protection Act, the settlement requires the company to “clearly and conspicuously disclose” the meaning or definition of the metric and describe how it was calculated, or to “engage an independent, third-party auditor to evaluate, measure, or justify the performance or characteristics of its products and services.”

Pieces agreed to abide by the terms of the settlement for five years, but said in an emailed statement Friday that the attorney general’s announcement misrepresented the assurance of voluntary compliance it entered into.

“Pieces strongly supports the need for additional oversight and regulation of clinical generative AI,” the company said, adding that it signed the agreement “as an opportunity to promote these conversations in good faith.”

BIGGER TREND

As AI, and generative AI in particular, becomes more widely used in hospitals and health systems, challenges around model accuracy and transparency become more acute, especially as these tools enter the clinical environment.

A new study by the University of Massachusetts Amherst and Mendel, an AI company focused on hallucination detection, found different types of hallucinations in AI-generated medical record summaries.

The researchers asked two large language models, OpenAI’s GPT-4o and Meta’s Llama-3, to generate medical summaries from 50 detailed medical notes. GPT-4o produced 21 summaries with incorrect information and 50 with overly generalized information, while Llama-3 had 19 errors and 47 generalizations.

As AI tools that generate summaries from electronic health records and other medical data become more common, their reliability remains an open question.

“I think with generative AI, it’s not transparent, it’s not consistent, and it’s not yet robust,” Dr. John Halamka, president of the Mayo Clinic Platform, said last year. “So we have to be a little careful about the use cases that we choose.”

To better evaluate AI, the Mayo Clinic Platform has developed a risk classification system to qualify algorithms before they are used externally.

Dr. Sonya Makhni, the platform’s medical director and a senior associate consultant in the Department of Internal Medicine at Mayo Clinic Hospital, explained that when considering the safe use of AI, healthcare organizations “should consider how the AI solution could impact clinical outcomes and what the potential risks are if the algorithm is incorrect or biased, or if the actions taken based on the algorithm are incorrect or biased.”

She said it is “the responsibility of both solution developers and end users” to account for risk in an AI solution as well as possible.

IN THE DOCUMENT

“AI companies offering products used in high-risk environments have an obligation to the public and their customers to be transparent about risks, limitations, and appropriate use,” Texas Attorney General Ken Paxton said in a statement about the Pieces Technologies settlement.

“Hospitals and other healthcare providers need to consider whether AI-based products are right for them and train their staff accordingly,” he added.
