Doctors Studying Ways to Get Virtual Assistants Into the Exam Room

Nuance Communications has integrated its Dragon Medical Virtual Assistant with Epic Systems’ electronic health records. The work with natural language processing is showing promise in helping ease some burdens of working with EHR systems.


Dr. Yaa Kumah-Crystal, MD, MPH, is an Assistant Professor of Biomedical Informatics at the Vanderbilt University Medical Center in Nashville, Tenn., where she sees patients in pediatric endocrinology. She spends most of her time doing research in medical informatics and health IT, on “how to optimize and make the EHR [Electronic Health Record] less of a burden on end users and figuring out how to best leverage the technology and make patient care better.” In that role she is working with voice technology vendor Nuance Communications to integrate the Dragon Medical Virtual Assistant with Epic Systems’ EHR. The work with natural language processing (NLP) is showing promise in helping ease some burdens of working with EHR systems, but the work has a way to go before the system will start to make a real impact on patient care. Dr. Kumah-Crystal provides an update on the work in these interview excerpts.

eWEEK: Where are you with the work right now?

Dr. Kumah-Crystal: We’re developing a prototype for providers to help them prepare for seeing patients. We are capable of using the product in the exam rooms, but right now it’s mostly used in provider work rooms for preparation before seeing the patient. Working with Epic has been useful because they can help us scale some of the work we do to more users. But the way we’re building out our voice assistant tool is to be EHR agnostic wherever we can, using FHIR [Fast Healthcare Interoperability Resources] API calls to do our fulfillment rather than relying on the Epic database or our own from the previous EHR.
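The EHR-agnostic approach she describes can be sketched in a few lines. Because FHIR defines a standard REST interface, a fulfillment back end only needs to swap the base URL to target a different EHR. The base URLs and patient ID below are hypothetical placeholders, not real endpoints; this is a minimal illustration of the pattern, not Vanderbilt's implementation.

```python
# Sketch of EHR-agnostic fulfillment: the voice back end issues standard
# FHIR REST calls, so only the base URL changes between EHR vendors.

def fhir_url(base: str, resource: str, **params: str) -> str:
    """Build a FHIR search URL for a given resource type and parameters."""
    query = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{base}/{resource}" + (f"?{query}" if query else "")

# Hypothetical FHIR endpoints for two different EHR systems:
epic_base = "https://epic.example.org/fhir/R4"
legacy_base = "https://legacy.example.org/fhir/R4"

# The same call shape works against either EHR's FHIR endpoint:
print(fhir_url(epic_base, "Observation", patient="123", category="laboratory"))
# https://epic.example.org/fhir/R4/Observation?category=laboratory&patient=123
```

A real client would also handle OAuth tokens and pagination, but the point stands: fulfillment logic written against the FHIR spec does not need to know which vendor's database sits behind it.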

eWEEK: How would you assess where you think most organizations are with respect to EHRs?

Dr. Kumah-Crystal: I think we’re better than where we were but still have a ways to go. The problem was that EHRs just translated the paper process into the application and made electronic versions of the forms we were filling out. We didn’t take advantage of what the EHR could do. [Now] there’s a lot more structured information, and standards that didn’t exist before allow us to take advantage of it, and now we are looking beyond that to focus on the usability factors that were lost in meaningful use implementations.

eWEEK: That usability factor was about addressing the fact that doctors were looking at a computer screen rather than the patient?

Dr. Kumah-Crystal: Yes. I don’t blame my doctor when this happens, because that’s the way the computer is designed. Every minute you are looking at the screen is time you are not looking at the patient. But it’s a requirement that the communication you have and the orders you are placing be done through this interface, and that certainly contributes to provider burnout, and to patients themselves feeling like they’re not the most central part of the conversation. Scribes have been shown to improve the workflow and free up the provider to have more engagement, but that’s a costly option and difficult to scale. If there’s a way to virtualize that, or to leverage some of the NLP tools we have and get them to the place where they could do the processing a scribe would do, that would be the ideal state we are working toward.

eWEEK: Nuance has been in health care for a long time. What’s the difference between standard dictation tools already in use and what new voice technologies can do?

Dr. Kumah-Crystal: The new language models, once you program in a handful of ways to say something, can recognize the intent behind what you’re saying. You don’t have to worry about phrasing it the way the computer requires; you can speak naturally. Before, [voice] was robotic. From typing into a computer via command lines, then graphical user interfaces, and macros: it’s all leading up to a place where you can act human. Instead of having to make the computer understand you, you can speak normally and the computer is better at understanding you.
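The idea of programming in "a handful of ways to say it" can be illustrated with a toy example. This is not Nuance's actual system, which uses trained language models; it is a deliberately naive bag-of-words matcher, and the intent names and example utterances are made up. It shows only the shape of the problem: many phrasings mapping to one intent.

```python
# Toy illustration of intent matching: several natural phrasings are
# registered per intent, and an incoming utterance is scored against them.
INTENT_EXAMPLES = {
    "get_patient_summary": [
        "tell me about this patient",
        "who am i seeing next",
        "give me a summary of the patient",
    ],
    "get_lab_results": [
        "what were the latest labs",
        "show me recent test results",
    ],
}

def match_intent(utterance: str):
    """Score by word overlap; a real NLU model generalizes far beyond this."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            score = len(words & set(example.split()))
            if score > best_score:
                best, best_score = intent, score
    return best

print(match_intent("please tell me about the patient"))  # get_patient_summary
```

The difference she describes is that a modern model recognizes intent even for phrasings never registered at all, which is what lets clinicians speak naturally rather than memorizing commands.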

eWEEK: How are you working to implement Dragon into a provider’s office or exam room?

Dr. Kumah-Crystal: The part we are focused on is the voice assistant component, being able to give commands and ask for summaries. There’s another whole space they’re working on that includes taking dialog and making summaries out of that. That’s not in the scope of what we are doing right now, but we are going to be integrating that in the future. Our main focus now is how to bring out the relevant information in the EHR, summarize it in a useful way, and present it to the providers so they can make decisions about patients. An example of that would be a doctor asking, “tell me about this patient.” There are lots of different elements that can be pulled from the patient’s record: name, age, gender, last visit information, diagnosis, recent test results. Think about it in terms of orienting the doctor toward the patient who is coming in, and helping the doctor figure out how to better serve this patient.
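A "tell me about this patient" fulfillment of the kind described above amounts to pulling structured fields from a record and composing a short spoken-style sentence. The sketch below reads a minimal FHIR-shaped Patient resource; the resource is hand-written sample data, and the summary format is an assumption for illustration, not the project's actual output.

```python
# Sketch: compose a spoken summary from structured FHIR Patient fields.
# The resource dict mimics the standard FHIR Patient JSON shape.

def summarize_patient(resource: dict) -> str:
    """Build a short spoken-style summary from a FHIR Patient resource."""
    name = resource.get("name", [{}])[0]
    full_name = " ".join(name.get("given", []) + [name.get("family", "")]).strip()
    gender = resource.get("gender", "unknown gender")
    born = resource.get("birthDate", "an unknown date")
    return f"{full_name}, {gender}, born {born}"

patient = {
    "resourceType": "Patient",
    "name": [{"given": ["Jane"], "family": "Doe"}],
    "gender": "female",
    "birthDate": "2010-04-02",
}

print(summarize_patient(patient))  # Jane Doe, female, born 2010-04-02
```

Because only structured fields are read, this is the "easy" end of the spectrum she describes later in the interview; diagnoses and visit narratives buried in free text need much heavier NLP.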

eWEEK: Does this also work the other way around? Where a doctor can add something to the record, or schedule them for another visit?

Dr. Kumah-Crystal: Eventually. That’s not in the scope of what we are doing now. What’s important with voice technology is the safety aspect. Before we can begin to “write back” into the EHR, we want to make sure that the interactions doctors are having with it make sense and are understood well enough that people are comfortable with the technology. One of the things we are going to try is writing a note or reminder. When it comes to writing prescriptions, that’s a little more impactful. This year we will begin doing some studies about decision support and the best way to deliver it. What’s the best way for a computer to tell the doctor [that the drug just prescribed is contraindicated]? No one really knows, because we haven’t experienced those modalities before, so we have to study that, which will be a crucial part of voice adoption.

eWEEK: What is Nuance’s role here?

Dr. Kumah-Crystal: Nuance does the natural language processing and they give us access to their interface where we get to assign the language models for different intents and do fulfillment on our end. We have been able to get deep into the Nuance software itself to figure out the best ways to satisfy the different intents, and share that information with the wider community doing this type of research. We are trying to work on the utility of saying something out loud. What purpose does that serve? When would it be appropriate to have certain values spoken out loud [if the patient is in the room]? There are many different design considerations that we are trying to learn from. Our goal is to learn as much as we can and share that with the voice community so this product and this concept can get good fast.

eWEEK: What is your expectation for when this might start appearing in actual patient rooms?

Dr. Kumah-Crystal: In actual patient rooms, not long. The technology is all there. If we can throw more developers at it, we can get something reasonably good at summarizing labs and appointments in the next six months. But in terms of what is really useful, I think there’s a correlation between how useful a piece of information is and how hard it is to extract from the charts. There’s a whole other level of NLP processing, data extrapolation and summarizing that we’re not doing yet; we’re just pulling structured data right now. When we get to that point it will be something everyone wants to use, but right now it seems like it’s going to be really useful for very specific use cases, and for untying the provider from having to stare at the screen to hunt down a piece of information. In terms of the clinical encounter, instead of a clinical workstation there would be a shared screen between the doctor and the patient where the commands would display, so we can both see them. There are a lot of things in terms of workflow and dynamics that will have to be modified to take advantage of these tools.

Scot Petersen is a technology analyst at Ziff Brothers Investments, a private investment firm. He has an extensive background in the technology field. Prior to joining Ziff Brothers, Scot was the editorial director, Business Applications & Architecture, at TechTarget. Before that, he was the director, Editorial Operations, at Ziff Davis Enterprise. While at Ziff Davis Media, he was a writer and editor at eWEEK. No investment advice is offered in his blog. All duties are disclaimed. Scot works for a private investment firm, which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.
