Thursday, December 26, 2024

A beginner’s guide to ‘instafraud’

A recent investigation by The Wall Street Journal revealed that insurers collected a staggering $50 billion from Medicare for diseases that no doctor actually treated.

Perhaps one of the most disturbing aspects of this fraud explosion is the emergence of what industry insiders call “instafraud” – a practice in which artificial intelligence, particularly large language models, is used to generate false or exaggerated medical records.

This AI-assisted fabrication can instantly generate significantly larger payments per patient per year by creating or modifying diagnoses that were never made by a healthcare provider.

We interviewed Medicomp CEO David Lareau to discuss the double-edged sword of artificial intelligence technologies, which have enormous potential to transform the industry – but can also be used by bad actors to create documentation to support coded diagnoses.

We talked to Lareau about instafraud, the role that large language models play, how to combat instafraud, and what he would tell colleagues and senior executives in hospitals and health systems who are not as confident in AI because of things such as instafraud.

Q. Please describe in detail what instafraud is, how it works and who exactly benefits from it.

A. Our medical director, Dr. Jay Anders, introduced me to the concept of instafraud in reference to falsely inflating a patient’s risk adjustment scores, sometimes by using large language models to create visit records containing diagnoses for conditions that the patient does not actually have, but for which the LLM can generate credible notes that are not true.

After a crash course in prompt engineering, Dr. Anders learned how easy it is to send a list of diagnoses to an LLM and receive a full note that purports to confirm the diagnoses, without any evidence or workup from the provider. Our concern is that it will be too easy and too lucrative for providers and insurance companies to resist using it to generate additional revenue.

We have experience with unscrupulous individuals and companies using technology to “game the system.” Our first such encounter occurred when the Evaluation and Management (E&M) guidelines were introduced in 1997 and potential users asked, “Can you tell me which one or two additional data elements I would need to enter to reach the next level of service? Then I can just add them to the note and that will increase the payments.”

Recently, people have been asking how they can use AI to “suspect” additional diagnoses to obtain higher RAF scores, regardless of whether the patient has the disease. This approach is much more common than using AI to verify that documentation is complete and correct for each diagnosis.

It is not only through the use of artificial intelligence that companies and providers are committing fraud, but also by implementing policies designed to “find” potential diagnoses that a patient does not have and include them in the records. For example, if a home health care provider asks a patient whether they ever feel like not getting out of bed in the morning and the answer is “yes,” they may record a diagnosis of depression, which qualifies for a higher RAF score.

Who doesn’t sometimes feel like not getting out of bed? However, without a proper evaluation of the other symptoms that indicate depression, a diagnosis of depression is potentially false.
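
To make the financial incentive behind such an added diagnosis concrete, here is a minimal Python sketch of a CMS-HCC-style calculation. The base rate, demographic factor, condition names and coefficients are illustrative assumptions, not actual CMS values.

```python
# Illustrative sketch (not real CMS coefficients): how one added condition category
# can raise a Medicare Advantage capitated payment under a CMS-HCC-style risk model.

BASE_MONTHLY_RATE = 1_000.00  # hypothetical plan base rate per member per month

# Hypothetical risk-adjustment coefficients keyed by condition category
HCC_COEFFICIENTS = {
    "diabetes_without_complications": 0.105,
    "major_depressive_disorder": 0.299,  # the kind of "suspected" diagnosis described above
}

def raf_score(demographic_factor: float, coded_conditions: list[str]) -> float:
    """Sum the demographic factor and the coefficient of every coded condition."""
    return demographic_factor + sum(HCC_COEFFICIENTS[c] for c in coded_conditions)

def annual_payment(raf: float) -> float:
    """Annual capitated payment scales linearly with the RAF score."""
    return BASE_MONTHLY_RATE * 12 * raf

demographic = 0.45  # hypothetical age/sex factor

without_depression = annual_payment(raf_score(demographic, ["diabetes_without_complications"]))
with_depression = annual_payment(raf_score(demographic, ["diabetes_without_complications",
                                                         "major_depressive_disorder"]))

print(f"Payment without the added diagnosis: ${without_depression:,.2f}")
print(f"Payment with the added diagnosis:    ${with_depression:,.2f}")
print(f"Extra revenue from one unsupported code: ${with_depression - without_depression:,.2f}")
```

Under these assumed numbers, a single unsupported depression code adds roughly $3,600 per patient per year – which is how programmatic “suspecting” scales into large sums across a plan’s membership.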

Q. What role do large language models play? How do bad actors obtain LLMs and the data needed to support the work the LLMs do?

A. LLMs have become a major part of the instafraud phenomenon in the Medicare Advantage system. These sophisticated artificial intelligence models are being used to generate false or exaggerated medical records at an alarming scale and rate.

LLMs excel at processing and modifying huge amounts of patient data, creating compelling, yet fabricated, medical narratives that can be difficult to distinguish from real records. This capability allows for the near-instant generation of false diagnoses that can result in up to $10,000 more per patient per year in incorrect payments.

To be clear, the people using LLMs and data to commit instafraud are not typical “criminals.”

In fact, the main perpetrators of this technology-enabled fraud are insurance companies, which can exploit their existing access to extensive patient data as part of their normal operations. They can use commercially available artificial intelligence systems, which are becoming increasingly accessible, or potentially develop their own systems tailored for this purpose.

This raises serious concerns about the misuse of patient data and the ethical implications of deploying AI in healthcare settings.

Q. How can instafraud be fought? And who is responsible for leading the fight?

A. Responsibility for combating fraud is distributed among various stakeholders. Regulators and policymakers must implement stronger oversight and penalties to deter fraudulent behavior. Healthcare providers play a key role in validating diagnoses and challenging false documentation. Technology developers have a responsibility to create ethical AI systems with appropriate safeguards built in.

Insurance companies must commit to using AI responsibly and transparently, prioritizing patient care over profit. Auditors and researchers also play an essential role in detecting and reporting fraudulent practices, providing a critical line of defense against instafraud.

Ultimately, CMS is responsible for administering the Medicare Advantage program and must be more proactive in both detecting fraud and holding companies and individuals accountable for fraud committed by their organizations.

Tools are available to review charts and codes for fraud, but without serious consequences for those overseeing and committing fraud, enforcement efforts will be insufficient and financial penalties will continue to be seen as a normal cost of doing business.
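
As a rough illustration of what such chart-review tooling might do, here is a hedged Python sketch that flags coded diagnoses a visit note never substantiates. The diagnosis labels and evidence terms are hypothetical, and a production tool would rely on clinical terminology services and NLP rather than simple keyword matching.

```python
# Minimal sketch of automated chart review: flag coded diagnoses that the
# visit note never substantiates. Evidence terms below are illustrative only.

# Hypothetical map from a coded diagnosis to terms that would count as supporting evidence
SUPPORTING_EVIDENCE = {
    "major_depressive_disorder": ["phq-9", "depressed mood", "anhedonia", "sertraline"],
    "diabetes_without_complications": ["hba1c", "metformin", "glucose"],
}

def flag_unsupported_codes(note_text: str, coded_diagnoses: list[str]) -> list[str]:
    """Return the coded diagnoses with no supporting evidence anywhere in the note."""
    text = note_text.lower()
    flagged = []
    for dx in coded_diagnoses:
        evidence = SUPPORTING_EVIDENCE.get(dx, [])
        if not any(term in text for term in evidence):
            flagged.append(dx)
    return flagged

note = "Patient seen for routine follow-up. HbA1c 6.9, continue metformin. No other complaints."
codes = ["diabetes_without_complications", "major_depressive_disorder"]

print(flag_unsupported_codes(note, codes))
# -> ['major_depressive_disorder']  (coded, but nothing in the note backs it up)
```

A flagged code is not proof of fraud, but it marks a chart for exactly the kind of human audit and enforcement follow-up described above.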

The first step was to create a whistleblower program for people reporting insurance fraud. But unless there are very serious personal consequences – including possible prison time – the costs of Medicare Advantage fraud will continue to rise.

As an example of how this could be achieved, consider the Sarbanes-Oxley Act of 2002, which requires CEOs and CFOs to certify their organizations’ financial statements. These executives can face severe penalties if they certify that the company’s accounts are true when they are not – including prison terms of up to five years, hefty financial penalties and other disciplinary actions such as civil and criminal proceedings. This increased the risk for those who would mislead investors and the public.

A similar requirement for those administering Medicare reimbursement policies and procedures at health care facilities, coupled with whistleblower programs, could provide a more proactive approach to preventing intentional fraud, rather than merely trying to detect it after the fact.

Q. What would you tell your colleagues and senior management in hospitals and health systems who are not so sure about artificial intelligence, given that it is a double-edged sword?

A. To colleagues and senior executives concerned about the dual nature of AI in healthcare, there are a few key points to highlight. Artificial intelligence should be seen as a tool to augment, not replace, human expertise. The concept of “Dr. LLM” is not only wrong, but potentially dangerous, because it ignores irreplaceable aspects of human healthcare such as empathy, intuition and complex decision-making.

A balanced approach is needed that leverages both the computational power of artificial intelligence and the nuanced judgment of healthcare professionals. This involves implementing technology-based guardrails combined with human collaboration to reduce errors and build trust in AI systems. The focus should be on using AI to improve care delivery, not just to maximize billing or streamline administrative processes.

Healthcare organizations should implement technologies that enable the efficient, effective and trusted clinical use of LLMs, but always in a way that collaborates with clinicians rather than trying to replace them. When implementing AI in healthcare settings, it is critical to recognize the need for robust validation and trust-building measures. This includes transparent processes, regular audits and clear communication about the use of AI in patient care.
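
One way to picture such a technology-based guardrail is the following hedged Python sketch, in which AI-drafted note text is held in a pending state and cannot enter the record until a named clinician attests to it, with every step written to an audit log. All class, field and identifier names here are hypothetical.

```python
# Hypothetical guardrail sketch: AI-drafted documentation is blocked from the
# chart until a named clinician attests to it, and every step is audit-logged.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDraftNote:
    patient_id: str
    draft_text: str                      # text produced by the LLM, not yet part of the record
    attested_by: str | None = None       # clinician who takes responsibility for the content
    audit_log: list[str] = field(default_factory=list)

    def _log(self, event: str) -> None:
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

    def attest(self, clinician_id: str) -> None:
        """Record that a clinician reviewed the AI draft and stands behind it."""
        self.attested_by = clinician_id
        self._log(f"attested_by={clinician_id}")

    def finalize(self) -> str:
        """Refuse to commit unattested AI-generated text to the patient record."""
        if self.attested_by is None:
            self._log("finalize_blocked: no clinician attestation")
            raise PermissionError("AI-generated note requires clinician attestation")
        self._log("finalized")
        return self.draft_text

note = AIDraftNote("patient-001", "Follow-up visit. Hypertension stable on lisinopril.")
# note.finalize()            # would raise PermissionError: no attestation yet
note.attest("dr_anders")
chart_entry = note.finalize()  # allowed only after a clinician signs off
```

The point of the design is that the AI draft never reaches the chart on its own authority; a human remains accountable, and the audit trail makes that accountability reviewable later.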

Ultimately, AI should be seen as a powerful tool to enhance human decision-making, not as a replacement. By adopting this perspective, healthcare organizations can leverage the benefits of AI while mitigating its risks, leading to better patient outcomes and a more efficient, ethical healthcare system.
