AI study finds 50% of patient notes duplicated

A new health informatics study found a high prevalence of duplicated text in clinical care documentation and concluded that systemic hazards of this kind require systemic interventions to fix.

WHY IT MATTERS
Earlier this year, a team led by researchers from the University of Pennsylvania Perelman School of Medicine in Philadelphia used artificial intelligence to analyze all inpatient and outpatient notes written within the UPenn Health System from January 1, 2015, through December 31, 2020.

The investigators quantified text duplicated from a different author versus text copied by the same author, to determine how much duplication is present in the electronic health record and why.

Of 104,456,653 total notes for more than 1.96 million unique patients, 50.1% of the words were duplicated from prior notes written about the same patient (32,991,489,889 of roughly 65.9 billion total words).

The researchers also found that the rate of duplication increased year-over-year, from 33% for notes written in 2015 to 54.2% for notes written in 2020.

Further, of the total duplicated text found in the study, 54.1% was copied from the same author and 45.9% from a different author. And the more notes a record contained, the more duplicative it was, approaching 60%.

“Duplicate text casts doubt on the veracity of all information in the medical record, making it difficult to find and verify information in day-to-day clinical work,” the Penn Medicine researchers said in the abstract of “Prevalence and Sources of Duplicate Information in the Electronic Medical Record,” published in JAMA Network Open on September 26.

For the cross-sectional analysis, they worked with CareAlign (formerly TrekIT Health, Inc.), a Philadelphia start-up with a clinical workflow platform that connects with any EHR, and Jamaica Plain, Massachusetts-based River Records, an automated data processing firm.

Launched and operated by four doctors in residency, River Records aims to address the “historically unquestioned concepts in clinical documentation” through natural language processing and deep learning. Its website also indicates that its AI model condenses data collection and processing into a few steps and that its software delivers a user-friendly interface for interacting with the findings.

The AI analysis used a sliding window of 10 adjacent words to identify spans of copied text. If anything, the study underestimates duplication, because the analysis could not capture summarized or paraphrased duplicate information, according to the abstract.
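The study's exact algorithm is not published in the article, but the sliding-window idea it describes can be sketched as follows: a word counts as duplicated if it falls inside any 10-word window that also appears verbatim in an earlier note about the same patient. The function names and the scoring here are hypothetical.

```python
WINDOW = 10

def shingles(words, n=WINDOW):
    """Return the set of every n-word window in a list of words."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def duplicated_fraction(prior_note, new_note, n=WINDOW):
    """Fraction of words in new_note covered by an n-word window
    that also occurs verbatim in prior_note."""
    prior = shingles(prior_note.split(), n)
    words = new_note.split()
    covered = [False] * len(words)
    for i in range(len(words) - n + 1):
        if tuple(words[i:i + n]) in prior:
            # Mark every word inside the matching window as duplicated.
            for j in range(i, i + n):
                covered[j] = True
    return sum(covered) / len(words) if words else 0.0
```

Counting words (rather than matching windows) is what lets a per-record percentage like the study's 50.1% fall out directly; overlapping windows do not double-count a word.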

The researchers say the note paradigm for documentation should be further examined by weighing scatter: patient information can be spread across hundreds of separate notes.

As part of the analysis, the researchers also quantified scatter based on the number of words per note and plotted it against duplication values. They found that progress and assessment notes had relatively low scatter and high duplication, while operative notes had low scatter and low duplication.

“For instance, telephone encounter notes have, on average, 42 words of novel text per note; this would mean that a clinician reading the record would have to view approximately 10 separate notes to get 500 words of novel text, an extremely scattered set of notes requiring many clicks to navigate,” they explained.
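The back-of-envelope calculation in that quote is straightforward to reproduce; the 42-words figure comes from the quote itself, while exact ceiling division gives 12 notes, which the researchers round to roughly 10.

```python
import math

def notes_needed(novel_words_per_note, target_novel_words=500):
    """Number of notes a clinician must read to accumulate a
    target amount of novel (non-duplicated) text."""
    return math.ceil(target_novel_words / novel_words_per_note)

# Telephone encounter notes average ~42 novel words each (per the study):
print(notes_needed(42))  # prints 12
```

The inverse relationship is the point: as novel words per note shrink, the number of notes (and clicks) needed to assemble a clinical picture grows.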

Overall, the difficulty of finding clinical information spread across numerous locations leads to wasted time retrieving data or, worse, missed information because clinicians lack the time to adequately search the EHR, the researchers noted.

But rather than simply editing old documents, clinical note authors must continuously create copies of old notes and add to them, a behavior adapted to the design of modern EHRs.

Any unilateral restriction on copying and pasting in EHRs would contribute to scatter, according to the researchers. Thus, the study concludes that the time-based and author-based organization of modern EHRs drives the prevalence of duplication in the EHR note paradigm.

THE LARGER TREND
Duplicate records can contain incomplete or outdated information, affecting the quality of care when clinicians make decisions without current data such as recent lab results or new medications.

Sanford Health, a system based in Sioux Falls, South Dakota, with more than 46 hospitals, tackled “note bloat” by creating a standardized note form in its Epic electronic health record system. The form encourages providers to document everything they need to – and nothing they don’t, according to Dr. Roxana Lupu, CMO of Sanford Health.

It started from the basic principle that a note is a form of communication, not a review tool for the note’s author, she explained to Healthcare IT News last year.

“It was important to keep in mind that providers were not only writing the note for themselves but for others. We wanted the assessment and plan to be the most prominent part of the note, as that was the reason for reviewing notes,” she said.

Using natural language processing and deep learning to extract clinical insights from free-text notes may help improve data accessibility, patient care and population health, and even alleviate physician burnout.

AI can play a role in resolving duplicate patient records and improving real-time interoperability, said Gregg Church, president of 4medica. Many provider organizations he has worked with have patient record duplication rates as high as 30%, he said.

Pointing to the rapid increase in clinical lab volume during the COVID-19 pandemic as a use case, Church told Healthcare IT News earlier this year that paper requisitions – which accounted for as much as 50% of orders at some labs – could produce three or four records for a single individual, resulting in clinical risk and inefficient billing.

But by applying machine learning prediction capabilities to recognize and resolve duplicate records, data can be normalized and reconciled while systems stay online.
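4medica's matching model is proprietary, but the general technique it describes, normalizing demographic fields and scoring candidate pairs for similarity before merging, can be sketched minimally. The field names, weights, and threshold below are illustrative assumptions, not 4medica's actual implementation.

```python
from difflib import SequenceMatcher

def normalize(rec):
    """Normalize demographic fields before comparison."""
    return {
        "name": " ".join(rec["name"].lower().split()),
        "dob": rec["dob"].replace("/", "-"),
    }

def match_score(a, b):
    """Crude weighted similarity score between two patient records."""
    a, b = normalize(a), normalize(b)
    name_sim = SequenceMatcher(None, a["name"], b["name"]).ratio()
    dob_match = 1.0 if a["dob"] == b["dob"] else 0.0
    return 0.6 * name_sim + 0.4 * dob_match

def merge_duplicates(records, threshold=0.9):
    """Greedy merge: keep only the first record of each matched cluster."""
    kept = []
    for rec in records:
        if not any(match_score(rec, k) >= threshold for k in kept):
            kept.append(rec)
    return kept
```

A production system would use trained probabilistic or ML-based matching over many more fields, but the normalize-score-merge pipeline is the same shape.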

He said the Idaho Health Data Exchange now uses such an AI model to merge data without human intervention, which reduced its patient record duplication rate from 30% to 1%.

“You can’t run a successful business if you don’t know if you have actually got the right information, the right clinical information going back on the right individual,” he said. 

ON THE RECORD
“The note paradigm for documentation should be further examined as a major cause of duplication and scatter, and alternative paradigms should be evaluated,” the researchers concluded.

Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a HIMSS publication.
