Background
Advance care planning (ACP) is an integral component of patient care. ACP emphasizes a patient’s values, goals, and preferences regarding their medical treatment.1 It has been shown to increase patient quality of life, decrease aggressive care at the end of life, and increase hospice utilization.2–6 Despite evidence supporting ACP for goal-concordant care, ACP is underutilized and often not initiated, especially in hospitalized patients.7,8 Further, documentation of patient wishes related to ACP in the electronic health record (EHR) is inconsistent, sparse, and difficult to find.9 This may be due to the limited time and resources available to front-line physicians to initiate ACP and document these discussions.10,11
Another barrier to appropriate initiation of ACP is prognostication, because providers often overestimate longevity and are too optimistic about patient prognoses.12,13 These inaccurate predictions can lead to delayed or missed end-of-life conversations. To improve prognostication, machine learning (ML) models can support the identification of patients who may benefit from ACP and nudge providers to complete and document ACP conversations.14–16 Identification of these patients by ML models can also assist with the appropriate timing and initiation of ACP. ML mortality risk prediction models have been integrated into outpatient cancer clinics and palliative care consultation services to improve ACP documentation; however, limited data exist on ACP conversations by hospitalist providers.17,18
At our institution, we found that less than 5% of patients in 2017-18 had a documented ACP conversation in the EHR within the last six months of life. To improve ACP documentation, we developed a quality improvement (QI) intervention using an ML mortality risk prediction model. Based on the model’s output, general medicine inpatient providers were notified to consider ACP when patients at elevated risk of mortality were admitted.19 In this paper, we describe the intervention and evaluate documentation of ACP conversations in high-risk patients.
Methods
The primary aim of this study was to evaluate whether a notification based on an ML mortality model increases ACP documentation. Secondary aims sought to evaluate whether documentation of ACP was associated with differences in patient outcomes, including length of stay (LOS), 30-day readmission, intensive care unit (ICU) admission, change in code status (a new Do Not Attempt Resuscitation [DNAR] order), and discharge to hospice.
Study Design and Setting
This was a pre-post QI study at Duke University Hospital (DUH), a tertiary academic medical center, conducted from January 2019 through April 2021. Patients were included if they were at least 18 years of age at admission, were admitted to the general medicine service at DUH, and were deemed to have an elevated risk of mortality within 30 days of admission by a validated risk measurement tool developed by the Duke Institute for Health Innovation.19
The study included an intervention pilot phase from November 2019 through February 2020 that was initiated on general medicine hospitalist teams. In March 2020, the study was expanded to general medicine teaching service lines that included residents. The ML model identified patients appropriate for ACP by predicting their risk of mortality. Patients were classified into categories of “low,” “medium,” “high,” or “critical” risk of mortality at the time of admission (Supplemental Table 1).20
Once a patient was identified as medium or higher risk, the patient was screened for the exclusion criteria listed below by the QI team administrator reviewing the model dashboard. Providers were then notified via text page and a templated email that they were caring for a patient who might benefit from ACP. After March 2020, the population was refined to general medicine patients with a 30-day mortality risk classification of “medium,” “high,” or “critical” and a 6-month mortality risk classification of “high” or “critical,” and the intervention continued until April 2021.
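As an illustrative sketch, this refined risk-category rule could be expressed in Stata roughly as shown below; the variable names (risk30, risk180, acp_eligible) are hypothetical placeholders rather than actual model fields, and the manual exclusion screening performed by the QI team administrator is not captured in code.

```stata
* Flag encounters meeting the refined risk-category criteria (sketch;
* variable names are placeholders, and human screening for exclusions still follows)
gen byte acp_eligible = inlist(risk30, "medium", "high", "critical") & ///
    inlist(risk180, "high", "critical")
```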
In addition to the notification, the intervention directed providers to use a dedicated ACP note template.21 The template included prompts to assess patient illness understanding; explore patient goals, fears, and worries; address and document code status; and identify a surrogate decision maker (Appendix 1). The template was updated in January 2021 based on direct provider feedback, and these additions are noted in the appendix. Hospitalists received optional ACP education during a faculty meeting, and educational materials were included in each notification email. Ancillary teams, including the pharmacy, case management, and documentation and coding teams, were also included in the notification to help improve the care of these high-risk patients.
Providers were eligible for notification if they were internal medicine residents, advanced practice providers, or attending physicians at DUH. Patients were excluded if they were admitted to the ICU within 24 hours of admission, admitted under observation status, or admitted to a non-general medicine service. In the post-intervention period, patients were also excluded if they had a recently completed ACP note, had an established comfort care directive, were discharged or planned for discharge prior to screening by the QI team administrator, or could not be screened due to technological issues such as server downtime or model maintenance.
A questionnaire with closed- and open-ended questions was sent in August 2020 to hospital medicine providers to obtain feedback regarding the ACP notification intervention. Providers were asked to rate (strongly agree, agree, disagree, strongly disagree) the ACP notification’s impact on patient care, the accuracy of identified patients, and their satisfaction with the notification. A reminder to complete the survey was sent twice at two-week intervals.
The Duke University Institutional Review Board (IRB) approved and determined this study exempt (IRB Pro00104527).
Study Outcomes and Measures
The primary outcome was documentation of ACP conversations with at-risk patients. Providers were asked to document their conversation using the dedicated note template, which was then queried in the EHR to ascertain whether the primary outcome occurred (binary yes/no).
Secondary outcomes included LOS, measured in days from inpatient admission to discharge alive. Change in code status was a binary indicator that code status changed from more intervention to less intervention during the hospital stay, defined as a new DNAR order placed between admission and discharge. ICU transfer was a binary indicator that a patient was transferred to or received ICU level of care during their stay. Because hospice referral was contingent on being discharged alive from the hospital, the hospice referral outcome was defined as a 3-category nominal variable with levels of inpatient death, discharge without hospice referral, and discharge to hospice. Readmissions within 30 days of initial discharge were defined following the Centers for Medicare & Medicaid Services Hospital-Wide All-Cause readmission definition for unplanned readmissions to our health system and were quantified for analysis as the length of time (in days) from hospital discharge alive to unplanned readmission to the same facility. Patient demographic information and clinical details were extracted from the EHR.
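For illustration only, these outcome definitions could be operationalized roughly as in the sketch below; every field name shown (admit_dt, discharge_dt, dnar_at_admit, dnar_order_dt, died_inpatient, discharged_to_hospice, readmit_dt) is a hypothetical placeholder rather than an actual EHR field, and the readmission window logic is simplified.

```stata
* Sketch of secondary outcome construction (all field names are hypothetical)
gen los_days = discharge_dt - admit_dt                         // LOS in days (Stata date variables)
gen byte new_dnar = dnar_at_admit == 0 & !missing(dnar_order_dt) & ///
    dnar_order_dt > admit_dt & dnar_order_dt <= discharge_dt   // new DNAR order during the stay
gen byte hospice_cat = cond(died_inpatient == 1, 1, ///
    cond(discharged_to_hospice == 1, 2, 0))                    // 0 = no referral, 1 = inpatient death, 2 = hospice
gen days_to_readmit = readmit_dt - discharge_dt                // for the 30-day readmission analysis
```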
Cohorts
A historical control cohort was created by identifying patients discharged by general internal medicine hospitalist providers at DUH from January 1, 2019 through October 31, 2019 using the same machine learning model. Patients were identified using the same mortality risk threshold classifications of “medium,” “high,” or “critical” at 30 days and “high” or “critical” at 6 months (Supplemental Table 1).20
We analyzed the primary outcome during the pre- and post-intervention periods and in 5 distinct time periods: pre-intervention (1/1/2019–11/17/2019), pilot phase (11/18/2019–2/14/2020), start of notifications to teaching teams (3/26/2020–6/25/2020), pause in teaching team notifications for new resident education (6/26/2020–7/21/2020), and resumption of teaching team notifications (7/21/2020–4/30/2021) (Supplemental Figure 1).
Statistical Analysis
The primary analysis examined the probability of ACP documentation before and after intervention implementation, using the 5 distinct time periods described above. Probability of ACP documentation was modeled using modified Poisson regression to approximate risk ratios (RRs), adjusting for patient- and faculty-level confounders, using generalized estimating equations (GEE) with an exchangeable working correlation structure and sandwich standard errors clustered at the provider level.22 Estimated RRs measure the change in probability of receiving ACP from the pre-period to each follow-up period. Risk differences were derived using the delta method. An unadjusted intraclass correlation coefficient (ICC) for provider was calculated using the one-way ANOVA method.23
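A minimal sketch of this specification in Stata, assuming hypothetical variable names (acp_documented, period, provider_id) and illustrative covariates rather than the study’s actual fields, might look like the following.

```stata
* Modified Poisson GEE for ACP documentation (sketch; variable names are placeholders)
xtset provider_id
xtgee acp_documented i.period c.age i.sex i.race i.insurance, ///
    family(poisson) link(log) corr(exchangeable) vce(robust) eform

* Adjusted risk differences for each period versus pre-intervention (delta-method SEs)
margins r.period

* Unadjusted provider-level intraclass correlation via one-way ANOVA
loneway acp_documented provider_id
```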
Secondary analyses examined associations between ACP documentation and patient outcomes. Propensity score overlap weighting was used to balance patient characteristics between those who did and did not receive ACP, with mixed-effects logistic regression used to predict receipt of ACP and random intercepts at the provider level.24 Standardized differences were computed to compare characteristics of patients before and after weighting, with values smaller than ±0.10 considered acceptable.25
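A sketch of this weighting step, again under the assumption of placeholder variable names and illustrative covariates, is shown below; the balance check is left as a comment because the manuscript does not specify how standardized differences were computed.

```stata
* Propensity model for receipt of ACP with a provider random intercept (sketch)
melogit acp_documented c.age i.sex i.race i.insurance c.mortality_risk || provider_id:
predict ps_acp, mu                        // predicted probability of ACP documentation

* Overlap weights: 1 - PS for patients with ACP, PS for patients without
gen overlap_wt = cond(acp_documented == 1, 1 - ps_acp, ps_acp)

* Standardized differences before and after weighting would then be computed,
* with values within +/-0.10 considered acceptable balance
```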
In this high-risk population, death may be considered a semi-competing risk with some outcomes; therefore, LOS and readmissions were analyzed using survival analysis methods. For LOS, accelerated failure time (AFT) models with a lognormal time distribution were used, with days to discharge alive as the outcome and treatment arm as the sole covariate.26 AFT model associations were expressed as event time ratios (ETRs), with ratios <1 indicating shorter LOS, >1 indicating longer LOS, and =1 indicating no association of ACP with LOS. Rates of 30-day readmission by ACP status were analyzed using Cox proportional hazards models with days to readmission as the outcome and death as a censoring event, with estimates expressed as hazard ratios (HRs).27 HRs >1 indicate a higher readmission rate, HRs <1 indicate a lower readmission rate, and HRs =1 indicate no association between ACP and 30-day readmission. Because the readmission analysis excluded some patients due to denominator eligibility criteria, and eligibility for the readmission sample could plausibly have been affected by ACP, separate propensity score models were fit for patients eligible for the 30-day readmission analysis (Supplemental Table 2).
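Under the same placeholder naming assumptions (los_days, discharged_alive, days_to_readmit, readmit_30, and a separate overlap weight for the readmission sample), these two models could be sketched as follows; weights are supplied through stset, as Stata’s survival commands require.

```stata
* LOS among patients discharged alive: lognormal AFT model with overlap weights (sketch)
stset los_days [pweight = overlap_wt], failure(discharged_alive == 1)
streg i.acp_documented, distribution(lognormal) tr     // tr reports event time ratios

* 30-day readmission: Cox model, with death treated as a censoring event (sketch)
stset days_to_readmit [pweight = overlap_wt_readmit], failure(readmit_30 == 1)
stcox i.acp_documented                                 // reports hazard ratios
```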
Companion regression models were included with time to death as the outcome and discharge alive or 30-day readmission events as censoring events in the LOS and 30-day readmission analyses, respectively. These analyses provide context to the LOS and 30-day readmission estimates by assessing whether the relationship between ACP and LOS or 30-day readmission may have been influenced by increased or decreased mortality in patients with ACP.
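For example, the companion model for the LOS analysis could reverse the event and censoring definitions, again using hypothetical variable names.

```stata
* Companion analysis: time to inpatient death, censoring patients discharged alive (sketch)
stset los_days [pweight = overlap_wt], failure(died_inpatient == 1)
streg i.acp_documented, distribution(lognormal) tr
```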
Probability of ICU transfer was analyzed using log binomial models to estimate RRs. Odds of hospice referral were modeled using multinomial logistic regression with outcome levels of discharge without referral, inpatient death, and discharge with referral.
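These two models could likewise be sketched with placeholder names (icu_transfer, and hospice_cat coded 0 = discharge without referral, 1 = inpatient death, 2 = discharge to hospice), as below.

```stata
* ICU transfer: overlap-weighted log-binomial model for risk ratios (sketch)
glm icu_transfer i.acp_documented [pweight = overlap_wt], ///
    family(binomial) link(log) eform vce(robust)

* Hospice referral: overlap-weighted multinomial logistic regression (sketch)
mlogit hospice_cat i.acp_documented [pweight = overlap_wt], baseoutcome(0) rrr
```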
For secondary outcomes, overlap-weighted regressions were considered primary, with unweighted regressions reported for context. Percentile-based confidence intervals (CIs) were calculated to account for repeated hospitalizations and uncertainty in propensity score estimation, using 1,000 bootstrap resamples at the patient level.
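One way to implement this is a patient-level cluster bootstrap that re-estimates the propensity score and overlap weights within each replicate, sketched below for a single secondary outcome (a new DNAR order) purely for illustration; all variable names remain hypothetical.

```stata
* Patient-level percentile bootstrap, re-estimating overlap weights in each replicate (sketch)
capture program drop acp_ow_boot
program define acp_ow_boot, rclass
    capture drop ps_b
    capture drop wt_b
    melogit acp_documented c.age i.sex i.race || provider_id:
    predict ps_b, mu
    gen wt_b = cond(acp_documented == 1, 1 - ps_b, ps_b)
    glm new_dnar i.acp_documented [pweight = wt_b], family(poisson) link(log)
    return scalar log_rr = _b[1.acp_documented]
end

bootstrap log_rr = r(log_rr), reps(1000) cluster(patient_id) idcluster(bs_patient): acp_ow_boot
estat bootstrap, percentile        // percentile-based 95% CI for the log risk ratio
```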
Analyses used Stata Software version 17.28
Results
A total of 739 hospitalizations (663 unique patients) with elevated risk of death within 30 days were extracted from the EHR; of these hospitalizations, 479 met the criteria for inclusion in the analytic sample (197 encounters pre-intervention and 282 encounters post-intervention) (Figure 1). Mean age was 75.1 years, 53.2% of patients were male, and patients were predominantly White (58.2%) or Black (36.1%). A majority (59.7%) had Medicare insurance, and patient sociodemographics were comparable between the pre- and post-intervention periods (Table 1). There were similar proportions of patients with medium risk scores in the pre- and post-intervention periods, with all “critical” risk patients seen in the post-intervention period.
From the pre- to post-intervention periods, there was an increase in the proportion of patients receiving ACP. Adjusting for potential confounders, the proportion of higher-risk patients with documented ACP rose from 6.0% (95% CI: 2.0%–10.0%) at baseline to 56.5% (95% CI: 41.9%–71.1%) in the final months of the intervention (adjusted risk difference = +50.5%, 95% CI: 36.5%–64.6%) (Figure 2). Results remained similar when encounters occurring during periods when resident teaching teams did not receive notifications were excluded from the analysis. Calculation of the ICC revealed evidence of modest clustering of the probability of receiving ACP by provider (ICC = 0.105, 95% CI: 0.023–0.186).
Comparisons of standardized differences prior to weighting indicated substantial imbalance in characteristics between those who received ACP and those who did not; after weighting, all standardized differences were reduced to less than ±0.10, with the exception of provider age in the readmission sample (Supplemental Material).
Patients with documented ACP were more than twice as likely to have a new DNAR order placed during their admission (29.0% vs. 10.8%; RR = 2.69, 95% CI: 1.64–4.27). Additionally, patients with documented ACP had approximately twice the odds of hospice referral at discharge (22.2% vs. 12.6%; odds ratio [OR] = 2.16, 95% CI: 1.16–4.01).
Weighted mean LOS among patients discharged alive was 7.6 days (SD = 6.8) for patients without ACP and 9.7 days (SD = 9.6) for those with ACP; patients with ACP documentation had a 29% longer LOS (ETR = 1.29, 95% CI: 1.10–1.53) compared with those without ACP. A higher weighted proportion of patients with ACP died as inpatients than those without ACP (12.4% vs. 7.2%; RR = 1.71, 95% CI: 0.94–3.55), though time to inpatient death was comparable between those with and without ACP (ETR = 0.96, 95% CI: 0.57–1.51).
Proportions of patients with 30-day readmission were similar between those with ACP documentation (17.2%) and those without (18.5%), and confidence intervals for all readmission-related outcomes included the null (Supplemental Table 5).
Provider Survey
Twenty providers (20/41, 49%) responded to the survey. Results are presented as agree (strongly agree or agree) or disagree (strongly disagree or disagree). A majority of respondents agreed that the notifications improved care delivery (15/20, 75%), were valuable to clinical care (15/20, 75%), and accurately identified patients for ACP (19/20, 95%). Providers also agreed that the notification process was not difficult to navigate (17/20, 85%), was not burdensome to their daily work (13/20, 65%), was a system they would like to continue (14/20, 70%), and was something they would recommend for other specialties (18/20, 90%).
Discussion
This single-center QI study evaluated the use of an ML mortality risk prediction model to improve ACP conversations and documentation for inpatient general medicine patients. It demonstrated that an ML model coupled with provider email and page notifications can increase the documentation of ACP conversations.
Similar to other studies that used ML mortality risk prediction models to identify high-risk patient populations for ACP conversations in other clinical services, this project demonstrated an increase in documentation of these conversations in the EHR.17,18,29,30 Our study further demonstrates the feasibility of engaging front-line providers to complete ACP conversations and documentation.31 This is an important finding, as patients often present with acute decompensation of chronic illness or are diagnosed with a life-limiting illness during an inpatient encounter.
The substantial and sustained increase in ACP documentation in our study demonstrates how clinicians adapted ACP into their clinical workflow. Clinicians integrated the notification and ACP documentation template such that more than 50% of identified patients had ACP documented during their hospital encounter. This may reflect the manageable volume of notifications: clinicians received an average of 4.3 notifications over the analyzed intervention period. With education, reminders, and notifications, front-line clinicians were able to integrate ACP documentation into their clinical workflow.32 While the model did not capture all patients who would die within six months, it successfully identified 238 (49%) patients who died within this timeframe. These notifications allowed clinicians to prioritize time to focus on ACP conversations for these high-risk patients.
Completing ACP conversations also affected code status, with an increased proportion of patients changing from full code to DNAR prior to discharge. While a patient’s code status may not fully communicate their goals or values, it dictates resuscitation management in the event of cardiopulmonary arrest and illustrates one aspect of the patient’s end-of-life goals.33 We also found that patients with documented ACP were more likely to be discharged with hospice services.
Completion of ACP documentation was associated with longer LOS. This could be due to biases that we could not control for in the analysis, in which providers selected patients for ACP whom they perceived to be sicker and, thus, more likely to have a longer LOS. This selection bias may be supported by the finding that patients with ACP documentation had higher inpatient and 30-day mortality in the readmission analysis.
Additionally, while our current study focuses on implementing an already developed ML model and evaluating its impact, we acknowledge the necessity of comprehensive evaluations to promote fairness across different demographic and socioeconomic groups. This can be done through methodologies such as transfer learning and leveraging social determinants of health data.34 Though beyond the scope of this study, we highlight this as a critical direction for future research to ensure equitable benefits from ML in healthcare.
Limitations
There are several limitations to this project. First, the pre-post, non-randomized design at a single institution may limit the generalizability of the results. Future studies should incorporate randomized controlled trial designs to better evaluate clinician response to the notification. This study used an ML model developed retrospectively on a patient cohort at a single health system, which may limit generalizability. However, the mechanism to notify hospital medicine clinicians of patients at high risk of death may be implemented in other health systems. Second, due to the pragmatic nature of this study, the measurement of ACP conversations was limited to the ascertainment of the presence of documentation, which may underestimate actual ACP conversations and does not account for the quality of these conversations. Future research should measure the quality of conversations and documentation to better evaluate the content of these conversations.21 Finally, though propensity score and regression adjustment models may have reduced confounding bias, our ability to make statements about causality rests on the assumption that we have included all relevant confounders, which is inherently untestable. Thus, relationships may represent associations rather than causal effects.
Conclusion
Frontline provider notifications using an ML mortality model to identify high-risk patients resulted in a sustained increase in completion of ACP documentation in the inpatient setting. This suggests that targeted provider notifications can result in increased ACP for patients with serious illness.
Author Contribution
All authors have reviewed the final manuscript prior to submission. All authors have contributed significantly to the manuscript, per the ICMJE criteria for authorship:
- Substantial contributions to the conception or design of the work, or the acquisition, analysis, or interpretation of data for the work; AND
- Drafting the work or revising it critically for important intellectual content; AND
- Final approval of the version to be published; AND
- Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Funding/Disclosures
Dr. Ma’s effort was supported by the University of Michigan HEAL K12NS130673. Alyssa Platt’s effort was supported by Grant Number UL1TR002553 from the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) and the NIH Roadmap for Medical Research. The statements presented in this article are solely the responsibility of the author(s) and do not necessarily reflect the position, views, or policy of the sponsors.
Acknowledgement
We would like to acknowledge Chuan Hong, PhD, for advice and support on the statistical analysis.
Corresponding Author
Jonathan Walter MD
Department of Medicine, Division of General Internal Medicine,
Duke University School of Medicine, 40 Medicine Circle, DUMC Box 3534, Durham, NC 27710
Phone: 270-210-2494
Email: jonathan.walter@duke.edu