Despite fears that remote health assessment and diagnostic tools will increase medical malpractice liability — particularly in cases of suicidal ideation — the adoption of a new generation of remote assessment and treatment tools during the pandemic is not only bridging the mental health care gap in formerly underserved communities across the U.S. — it's also lowering clinicians' exposure to civil liability.
Malpractice liability in the U.S. hinges on the concept of “standards of care.” Essentially, we expect clinicians to provide the basics of competent healthcare. This includes an accurate diagnosis and treatment plan, responding to changing information, monitoring drug interactions, and providing professional-level care to every patient.
But the legal system doesn’t expect clinicians to be able to predict the future or prevent every negative outcome, including suicide.
Modern assessment tools and treatment plans are getting better at predicting risk, using standardized instruments like the PHQ-9 along with artificial intelligence to identify suicidal thoughts across patient populations. But they still can't predict with certainty whether a particular patient will attempt suicide.
This is why malpractice lawsuits over suicidal ideation are so rare.
Yet despite that fact, many clinicians still fear the increased legal exposure that implementing telehealth assessments might bring, especially given the amount of patient data these tools make readily available. While that fear might seem logical, the opposite is actually true.
Gathering more information about your patients — even remotely — will only reduce your risk of liability, thanks to a legal concept known as "foreseeability."
According to Innovations in Clinical Neuroscience, foreseeability becomes an important factor in “claims based on patient suicide, when recurring problems with care may increase the probability of a lawsuit being filed following suicide.” This might include inadequate upfront suicide risk assessment or insufficient documentation.
In this context, liability is only assumed when a clinician fails to perform a risk assessment or take the appropriate actions after that assessment. They’re not (typically) held liable for simply being made aware of depression or thoughts of self-harm from a patient, but for neglecting to properly assess, document, and treat these signs — often for weeks, months, or even years.
The important point is that suicide malpractice suits are often less about the quality of assessments and treatment and more about the infrequency or complete absence of assessments and reasonable adjustments to care over long periods of time. This is incredibly relevant when we consider that up to 45% of individuals who died by suicide had contact with their primary care provider within one month of their death.
Remote screening apps, intelligent assessments, data-driven insights, customized automatic alerts, and regular self-reporting tools can help doctors with limited bandwidth track and assess risk across larger patient populations.
For many people, remote behavioral health care is one of the only feasible options for receiving timely, personalized support. The financial and geographical barriers to traditional in-person care are simply too high.
Worse, geography isn’t the only barrier to care. Stigmas still surround mental health treatment, limiting access to care for millions.
Likewise, clinicians' scarce time and limited resources, especially in underserved communities, simply cannot scale to meet this swelling demand using traditional in-person methods. The likely result is sub-standard, potentially negligent care that exposes practitioners to the risk of malpractice lawsuits, especially in cases of suicide.
The solution to this legal “exposure” is not gathering less information about your patients. It’s gathering more and using it to further inform treatment plans.
One concern hospitals and care facilities have with remote monitoring is that it could present an opportunity to detect a potential crisis in near real-time but without the ability to respond. “It’s two o’clock in the morning and a patient reveals thoughts of self-harm, now what?”
Consider an analogy: when a patient calls an office and leaves a voicemail, they're notified that messages are not monitored in real time and that they should call 911 or go to the emergency department in an urgent crisis. If that same patient reports thoughts of self-harm on the clinic's voicemail, they will not receive follow-up until those messages are checked per organizational policy. Yet no one demands that clinics unplug their phone lines out of fear of liability. It's clear that having the policy in place and following through to check the voicemails is what protects the practice.
As with voicemail, patients are notified that providers are not monitoring remote tools in real time. However, the technology allows us to nudge patients toward appropriate crisis resources immediately and to provide additional information when they need it most, such as addressing common myths about calling crisis resource lines. Patients benefit from this immediate support; meanwhile, clinics and practices can reach out within the next 24-48 hours to conduct a wellness check, provide a referral, or activate an emergency response.
Much like with voicemail, if this policy is followed, the practice is protected — and a life may have been saved.
It may seem intuitive to avoid telehealth or remote care in order to limit liability and malpractice exposure. But remote behavioral health assessment tools, real-time data-driven insights, and adaptive treatment plans are precisely what protect clinicians from civil liability in instances of patient suicide.
Ignoring patients or their symptoms doesn’t lead to better care, and it certainly won’t limit liability. We live in an era where technology can identify risk factors and help clinicians intervene in a timely manner. It’s time to embrace this reality and change the course for so many struggling patients.
Photo: Wacharaphong, Getty Images