Stanford Study Questions Benefits of EHR Applications

 
 
By Brian T. Horowitz  |  Posted 2011-01-31
A study published in the "Archives of Internal Medicine" by Stanford researchers is stirring debate over the benefits of electronic health record applications.

Researchers at Stanford University in California have released a report saying that electronic health records may not improve patient care, even if they include a feature called clinical decision support. CDS is a software function that provides alerts or reminders to doctors on how to care for patients. 

Dr. Randall S. Stafford, associate professor of medicine at the Stanford Prevention Research Center in California, and Max J. Romano, a former Stanford undergraduate and now a Johns Hopkins medical student, conducted the study. The results were published online on Jan. 24 by "Archives of Internal Medicine," an American Medical Association journal.

According to the study, electronic records improved care only when providing diet consultations, and CDS use improved care only in helping physicians avoid unnecessary electrocardiograms during routine examinations.

Researchers compiled data from more than 250,000 patient visits to health care facilities between 2005 and 2007. Of an estimated 1.1 billion annual U.S. patient visits, EHRs were used in 30 percent, and CDS software supported 57 percent of the visits in which EHRs were used, according to the Stanford report.
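To put those percentages in absolute terms, the back-of-the-envelope arithmetic below works out the implied annual visit counts. It is a rough sketch using only the figures reported above; the rounding is ours.

```python
# Rough estimate of annual visit counts implied by the study's reported figures.
# All inputs come from the article; the results are approximate illustrations only.

total_visits = 1.1e9        # estimated annual U.S. patient visits
ehr_share = 0.30            # share of visits in which an EHR was used
cds_share_of_ehr = 0.57     # share of EHR visits that also had CDS support

ehr_visits = total_visits * ehr_share          # roughly 330 million visits with an EHR
cds_visits = ehr_visits * cds_share_of_ehr     # roughly 188 million visits with EHR plus CDS

print(f"EHR visits per year: ~{ehr_visits / 1e6:.0f} million")
print(f"EHR + CDS visits per year: ~{cds_visits / 1e6:.0f} million")
```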

"Across a wide range of quality indicators, there was no consistent association between having those electronic tools available and providing better quality of care," Stafford told Reuters.  

"We need to be more realistic about what to expect from electronic health records," Stafford added. "I believe this study suggests that it is naive to believe that the simple presence of an electronic health record or even these systems with more advanced functionality will by themselves change the quality of care," he said. 

The study may have some holes, however, according to industry experts, particularly because its data is several years old.

"Regardless of the study's validity, the strong reaction to the study is due in part to the fact that since the study concluded nearly four years ago, the technology driving clinical decision support has grown significantly more sophisticated, with the ability to deliver highly personalized alerts specific to a patient's unique medical history to the point of care," Rich Noffsinger, CEO of Anvita Health, wrote in an e-mail to eWEEK. 

Anvita's Insight CDS engine culls clinical data from EHRs and lab results. Insight also delivers subsecond alerts for adverse events such as drug reactions.
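For readers unfamiliar with how such alerts work, clinical decision support of this kind generally amounts to rule checks run against a patient's record at the point of care. The sketch below is a deliberately simplified, hypothetical illustration of one such check; the drug pairs, data, and function names are invented for this example and do not reflect Anvita's actual engine or any real product's logic.

```python
# Hypothetical, simplified illustration of a rule-based CDS drug-interaction check.
# The interaction table and patient data are invented examples for illustration only.

# Known risky drug combinations (illustrative, not clinical guidance).
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "Increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "Risk of hyperkalemia",
}

def drug_interaction_alerts(current_meds, new_drug):
    """Return alert messages for any known interaction between the drug
    being prescribed and the patient's current medications."""
    alerts = []
    for med in current_meds:
        reason = INTERACTIONS.get(frozenset({med, new_drug}))
        if reason:
            alerts.append(f"ALERT: {new_drug} + {med}: {reason}")
    return alerts

# Example: a patient already on warfarin is prescribed aspirin.
print(drug_interaction_alerts(["warfarin", "metformin"], "aspirin"))
```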

Meanwhile, two National Institutes of Health researchers wrote that the EHR and CDS applications examined in the Stanford study were "immature" and that the survey relied on incomplete patient data.

"The results that Romano and Stafford found were dismal," Dr. Clement McDonald and Dr. Swapna Abhyankar wrote in another "Archives of Internal Medicine" piece. 

"The investigators observed no consistent difference in guideline adherence among providers who used paper medical records compared with those who used either an EHR alone or an EHR with CDS." 

Although the results of the study have some validity with regard to past EHR and CDS products, they don't show the potential of future EHR implementations, according to Shahid Shah, CEO of IT consulting firm Netspective Communications and author of the Healthcare IT Guy blog.

"The numbers they've come up with are reasonable interpretations of the raw numbers available in the NAMCS [National Ambulatory Medical Care Survey]," Shah wrote in an e-mail to eWEEK.

"However, the important thing to realize is that the study results aren't tied to specific EHR implementations and didn't discuss the usability of EHR systems-even with slight improvements in CDS usability and algorithms you can see decent outcomes," he said. "So the study is valid in the data analysis, but the data analyzed doesn't have enough attributes nor nuance to be able to use it for predictions about future systems."

Brian T. Horowitz is a freelance technology and health writer as well as a copy editor. Brian has worked on the tech beat since 1996 and covered health care IT and rugged mobile computing for eWEEK since 2010. He has contributed to more than 20 publications, including Computer Shopper, Fast Company, FOXNews.com, More, NYSE Magazine, Parents, ScientificAmerican.com, USA Weekend and Womansday.com, as well as other consumer and trade publications. Brian holds a B.A. from Hofstra University in New York.

Follow him on Twitter: @bthorowitz
