Harnessing Artificial Intelligence to Reduce Disparities in Maternal and Infant Health
Our nation is facing a maternal health crisis, and it’s only getting worse.
All U.S. women – and particularly Black women – are at risk of dying or experiencing catastrophic health outcomes during or after giving birth. More tragic still, we know that over 80% of these deaths and nearly 90% of pregnancy-related health conditions can be prevented by making reasonable changes to our healthcare systems and how we care for patients.
The use of artificial intelligence and machine learning to improve public health is a relatively new field that offers real promise to make change. A successful system could set in motion a sea change—for scientific research, healthcare providers, and most of all, mothers, their children, and the generations to come.
We invite you to partner with us in this new initiative from Safe Babies Safe Moms (SBSM), which will directly impact birthing individuals and families in our nation’s capital and beyond. We welcome financial investment and partners with experience in this area as we work to develop a revolutionary approach to improving maternal and infant health and reducing racial disparities.
Check out our episode on Bright Spots in Healthcare "Safe Babies Safe Moms: Rethinking Equitable Access and Maternity Care" to learn more about how we're working to better support mothers and babies.
Commitment to Action: Creating a safety surveillance system for maternal health
At MedStar Health, we believe that everyone should have access to quality health and mental health care, regardless of race or neighborhood. Safe Babies Safe Moms (SBSM) is a community-driven collaborative launched in 2020 with a vision to reduce maternal and infant mortality, premature births, and low birthweight and to reduce racial disparities in maternal and infant health outcomes – one of the most urgent health challenges facing the United States.
A spotlight on Safe Babies Safe Moms
SBSM is improving maternal health and closing the disparity gap for birth outcomes. In fact, patients who receive prenatal care through SBSM are less likely to experience severe maternal morbidity, low birthweight, or preterm birth.
In 2023, we launched a Clinton Global Initiative (CGI) Commitment to Action project to use artificial intelligence and machine learning to help reduce disparities in maternal and infant health. This project builds on our research, which uncovered sobering insights about disparities in maternal care:
- There are racial gaps in delivering evidence-based care to address major risk factors for adverse outcomes, such as hypertension and anemia.
- Clinicians used more negative and stigmatizing language in notes for Black patients.
Over the next three years, we will use advanced techniques in artificial intelligence and machine learning to build a maternal and infant health safety surveillance system to help ensure clinicians deliver evidence-based care when risk factors are present that could lead to adverse outcomes for birthing individuals and their babies. Our goal is to help care teams deliver tailored care for patients before emergencies happen and help health systems better understand and address structural racism and racial disparities in maternal care.
Year 1
We have developed our first two pilot use cases – anemia in pregnancy and negative tone and sentiment in clinical notes.
- We have developed AI tools that identify birthing individuals with an anemia-in-pregnancy diagnosis and critically low hemoglobin levels. The next step is to connect this to evidence-based care delivery by creating an alert system that generates a weekly report, which will be sent to a clinic representative. Once we complete a proof of concept, the goal is to expand this algorithm to other clinical conditions like hypertension.
- We have implemented machine learning models to analyze free-text clinical notes from care providers, identifying biased tone and sentiment in the EHR. The next step is to generate a detailed report weekly for hospital leadership to review.
- Using our machine learning model, we identified biased language embedded in the EHR. Terms like "non-compliant" and "refused" can negatively impact a provider’s perception before patient interaction. We are reviewing these standardized language options and assessing what changes are necessary.
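To make the anemia use case concrete, the flagging step behind the weekly report can be sketched as a simple rule over recent hemoglobin results. The thresholds and record fields below are illustrative assumptions, not SBSM's actual clinical criteria or data model, and the real system involves AI tooling connected to the EHR rather than a standalone script.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative thresholds (assumptions, not SBSM's actual criteria):
# anemia in pregnancy is commonly defined around Hb < 11 g/dL, and a
# much lower value is treated here as "critically low".
ANEMIA_HB_G_DL = 11.0
CRITICAL_HB_G_DL = 7.0

@dataclass
class PrenatalRecord:
    """A hypothetical, minimal prenatal lab record."""
    patient_id: str
    hemoglobin_g_dl: float
    measured_on: date

def weekly_anemia_report(records):
    """Flag records meeting the anemia or critical thresholds for the weekly report."""
    report = []
    for r in records:
        if r.hemoglobin_g_dl < CRITICAL_HB_G_DL:
            report.append((r.patient_id, "critical"))
        elif r.hemoglobin_g_dl < ANEMIA_HB_G_DL:
            report.append((r.patient_id, "anemia"))
    return report

records = [
    PrenatalRecord("A1", 12.1, date(2024, 1, 8)),
    PrenatalRecord("B2", 10.2, date(2024, 1, 9)),
    PrenatalRecord("C3", 6.5, date(2024, 1, 9)),
]
print(weekly_anemia_report(records))  # [('B2', 'anemia'), ('C3', 'critical')]
```

In practice, a report like this would feed the alert pipeline described above, with the flagged list routed to a clinic representative each week.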
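The language-review step can likewise be illustrated with a toy example. The project itself uses machine learning models to assess tone and sentiment; the sketch below is only a simple keyword scan over a clinical note, using the two terms the text cites ("non-compliant" and "refused") as an assumed watch list.

```python
import re

# Example terms cited in the text as stigmatizing; an assumed watch list,
# not the project's actual model or vocabulary.
FLAGGED_TERMS = ["non-compliant", "refused"]

def flag_stigmatizing_language(note: str):
    """Return the watch-list terms that appear in a free-text clinical note."""
    lowered = note.lower()
    found = []
    for term in FLAGGED_TERMS:
        # Match whole terms only, so e.g. "refusedly" would not count.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append(term)
    return found

note = "Patient refused screening; documented as non-compliant with plan."
print(flag_stigmatizing_language(note))  # ['non-compliant', 'refused']
```

A production system would go well beyond keyword matching, but even this kind of scan shows how flagged notes could be rolled up into the weekly report for hospital leadership described above.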
Year 2
We plan to pilot our use cases from end to end, which includes implementing our interventions in the clinical setting as well as AI model testing, monitoring, and training.
Year 3
We plan to monitor and evaluate our pilot from year 2. The main goal for year 3 is to develop and disseminate our implementation guide.
Contact Us
To learn more about our work to improve maternal and infant health in our nation’s capital, visit MedStarHealth.org/SafeBabiesSafeMoms.
For more information on how to support this work or to inquire about potential collaboration opportunities, please submit the following form and someone from our team will be in touch: