As ambient technologies improve, additional use cases for voice will emerge – which leaves us with the question of how patients and physicians are responding to voice-enabled tools in their healthcare encounters.
Originally published on Healthcare IT News
For a while now, we have been watching how voice-recognition-based artificial intelligence tools can improve physician productivity, reduce burnout, and enhance the quality of the patient experience.
In addition, health systems have looked at voice-enabled transcription to capture reimbursable conditions documented during diagnosis, while ensuring that the diagnosis doesn’t miss any critical health indicators.
It is common knowledge that the most significant burden for many caregivers is documenting and annotating clinical encounters in electronic health record systems; voice recognition is one of many tools that can alleviate that problem and reduce clinician workloads today.
Voice-enabled tools fall into the broad category of conversational AI, along with chatbots and other productivity and automation tools. However, the maturity of these tools, especially in a clinical context, lags well behind the promise of the technology.
Users of the leading voice-recognition tools acknowledge that the technology delivers better caregiver productivity. However, they also point out that ambient artificial intelligence – the underlying premise of software that can make sense of a conversation and provide clinical decision support in real time – is still very nascent.
According to Stephanie Lahr, CIO and CMIO of Monument Health, voice recognition in a clinical context is complex, and doctor-patient encounters are hard to capture in voice recognition software.
Dr. Lahr points out that even with leading technology providers for voice-based tools, a “person behind the curtain” often interprets the conversation and separates the clinical terminology from the overall conversation.
BJ Moore, CIO of Providence Health and a user of voice-recognition tools, asks: how does an AI tool pluck out the necessary components from a doctor-patient encounter and add them to the EHR while ignoring the rest of the chitchat in the room?
Big tech and voice recognition in healthcare
Big tech firms and startups alike are keen to expand voice recognition capabilities, considering the significant potential for voice-enabled tools to improve productivity and transform patient experiences.
Amazon, Google, and Apple have all invested in consumer-facing voice applications. Microsoft, whose Cortana platform has not made much of an impact in the marketplace, went a step further and acquired voice-technology software developer Nuance for nearly $20 billion in 2021. The move signaled that Microsoft was doubling down on its commitment to healthcare.
Amazon, the only other big tech firm with a voice-based offering for healthcare, has deployed Alexa services in several healthcare organizations. However, Alexa uses voice in a non-clinical (or quasi-clinical, depending on how you see it) context. Amazon’s recent announcements point to using voice enablement to help residents of senior living communities and patients in hospitals stay connected, informed, and entertained, much as consumers use Alexa today for general information.
While these solutions do not directly support clinicians and caregivers with diagnosis and treatment, they still have an essential role in care delivery. For example, voice assistants can help patients with routine, non-medical needs such as medication reminders – almost like having a healthcare attendant at home, but with a voice assistant instead.
This brings us to Oracle, now a major new player in healthcare tech with its planned acquisition of Cerner. The announcement made multiple mentions of voice-recognition software as a significant driver of productivity and reduced workloads for clinicians in the future.
While Oracle is not the first name that comes to mind when hospitals and health systems think of voice-recognition technology, its intent to bring voice recognition to the Cerner platform to address clinical workloads indicates the perceived opportunity for voice technologies in healthcare. (Interestingly, Cerner currently collaborates with Nuance for its voice-enablement capabilities.)
Ambient clinical computing is still in the early stages
Ambient computing using voice and other conversational interfaces is an exciting area, and several startups are getting into the field.
However, progress towards more intelligent uses of voice recognition for clinical decision support has been slow. As mentioned earlier, separating clinical terminology from the rest of the conversation is a non-trivial challenge, which means voice-recognition technology fits some specialties better than others.
Regardless of the pace of adoption, most providers report reduced clinician burnout among those using the technology. Speech recognition software can transcribe encounters three times faster than a human typing into a clinical system – and saving even a few minutes of documentation per encounter can add up to a couple of hours a day for a typical caregiver who sees twenty to thirty patients a day.
We can only hope that adoption rates will rise as the technology improves and requires less and less human involvement in reviewing notes. The entry of big tech into the voice space will hopefully result in significant new investments that advance AI tools and intelligent automation of tasks such as coding and quality abstraction from encounter notes.
A vital consideration today is that voice-enabled automation and similar technologies can help providers cope with high demand and low staffing levels across the board in healthcare – a strain compounded by the “Great Resignation.”
Allowing clinicians to focus their most demanding in-person work on the highest-acuity and most complex patients also means using technology to assist patients who don’t have high-acuity needs – something we have seen work very effectively with telehealth and virtual consults in primary care and specialties such as behavioral health. As ambient technologies improve, additional use cases for voice will emerge.
That leaves us with the question of how patients are responding to voice-enabled tools in their healthcare encounters. Early indications are that most patients accept ambient technologies because they provide an opportunity to regain the intimacy of the patient-provider relationship, which was lost to the burdensome requirements of documentation in the EHR.
However, questions around data privacy and patient education about ambient technologies suggest that voice-enabled applications will need to tread carefully.
At a broad level, voice-recognition technology’s true potential lies in going beyond documentation to become an intelligent decision support tool – listening effectively for clinical indicators and proactively supporting clinical decisions.
The level of integration between emerging technology tools and core clinical platforms such as EHRs is a significant factor in increasing adoption rates. Today’s fundamental challenge for voice recognition in ambient computing is the same as for AI applications in general in the healthcare context.
As with all new technologies, voice-enabled solutions will stand a better chance of broad adoption by addressing important and urgent problems in care delivery, thereby building support from clinician owners and champions inside the organization.
There are many promising technologies emerging today that can impact healthcare. However, it is critical for clinicians and digital health leaders to recognize that no matter how good the tech, success can be elusive without organizational alignment and demonstrated performance.
New technologies also involve business process changes, in addition to integration with core clinical platforms such as EHRs, and require effective change management approaches. Success requires alignment between the technology supplier and the healthcare organization’s internal stakeholders, along with an end-to-end view of the problem being solved.
Often this means paying close attention to stated and unstated needs. When all these elements come together, the digital transformation of the healthcare sector can make giant leaps forward.