The Power of AI in Health: Lessons from Duke’s Summit on Health Innovation

November 8, 2024

Globant sponsored the Duke Summit on AI for Health Innovation, organized by Duke AI Health and the Duke Pratt School of Engineering, on October 9. The three days of the summit brought together Duke Health clinicians, Duke University faculty, engineers, and innovators, as well as industry leaders from organizations such as Johnson & Johnson, LabCorp, IBM, NVIDIA, Mark III Systems, and Globant to discuss the potential of AI to shape the future of healthcare, as well as the risks and challenges associated with it. I had the honor of sharing Globant’s experience designing and building AI solutions for healthcare and life sciences during a panel discussion on industry experiences in health AI.

Four themes emerged from the many presentations, panel discussions, and break-out sessions. Let’s take a closer look:

  1. Practical AI Applications in Healthcare

Panelists and speakers emphasized the importance of clinical usefulness, not just the technical validity, of AI. The showcased examples of actionable, useful AI applications in healthcare included:

  • Sepsis Watch: A model developed at Duke that predicts the onset of sepsis hours before it becomes critical, helping clinicians take preventive measures.
  • Automated EKG Reading: AI models that are good enough in an emergency department context to determine whether a person is having a heart attack (these tools are not, however, useful for identifying different types of arrhythmia or other subtleties in the EKG).
  • SDOH Data Extraction: The use of large language models (LLMs) such as GPT-4 or Llama 3.1 to extract social determinants of health (SDOH) data from unstructured clinical notes, supporting transplant evaluations and aiding in prior authorizations for surgery (a minimal sketch of this pattern follows this list).
  • Data Extraction for Clinical Documentation: LLMs are applied to extract clinical variables from unstructured physician notes to streamline data entry, reduce errors, and enable data sharing across institutions. They are also used to create auto-completion systems personalized for individual patients, helping clinicians document more efficiently without adding to their workload.
  • Resource Utilization Optimization: AI models that predict how long an operating room will be needed, how long a given patient will stay in the hospital, which supplies should be refilled or replaced and when, and so on.
  • Wearables and Digital Biomarkers: Leveraging AI to establish the relationship between health outcomes and data collected continuously with wearable devices.
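
To make the LLM-based extraction pattern above more concrete, here is a minimal sketch in Python, assuming the openai SDK and a hypothetical set of SDOH fields; the prompt, model choice, field list, and sample note are illustrative placeholders, not the pipeline presented at the summit.

```python
"""Minimal sketch: extracting social determinants of health (SDOH)
from an unstructured clinical note with an LLM. All names below are
illustrative assumptions."""
import json
from openai import OpenAI  # assumes the openai>=1.x Python SDK and an API key in the environment

client = OpenAI()

# Hypothetical field set; a real project would define these with clinicians.
SDOH_FIELDS = ["housing_status", "food_insecurity", "transportation_access",
               "social_support", "employment_status"]

def extract_sdoh(note_text: str) -> dict:
    """Ask the model to return the SDOH fields it can find as JSON; unknown fields stay null."""
    prompt = (
        "Extract the following social determinants of health from the clinical note. "
        f"Return a JSON object with exactly these keys: {', '.join(SDOH_FIELDS)}. "
        "Use null when the note does not mention a field.\n\n"
        f"Clinical note:\n{note_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",   # or a locally hosted Llama 3.1 behind an OpenAI-compatible endpoint
        temperature=0,   # deterministic output suits extraction tasks
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

# Illustrative, fabricated note (not real patient data)
note = "Patient lives alone, relies on a neighbor for rides to dialysis, and skips meals at month's end."
print(extract_sdoh(note))
```

A structured JSON output like this is what makes the extraction actionable downstream, for example when pre-filling a transplant evaluation form or a prior-authorization request.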

 

  2. Building Trust in AI

Trust was identified as a key barrier to AI adoption in healthcare. Throughout the summit, speakers, panelists, and participants identified factors that can help build trust among clinicians and patients:

  • Model Evaluation and Monitoring: Trust can be built by ensuring rigorous evaluation and post-deployment monitoring of AI tools. This allows clinicians, end users, and patients to see that AI models consistently perform as expected, both during training and in real-world applications (a minimal monitoring sketch follows this list).
  • Transparency and Bias Mitigation: Making AI models more transparent, especially in how they handle training data and decision-making, helps build confidence. While bias cannot be fully eliminated, open-source platforms for developing and testing models can help ensure fairness and accuracy across different patient populations. Bias can also be mitigated through the workflows in which AI models are deployed.
  • Clear Use Cases: AI must solve tangible problems and fit within existing workflows. The most successful AI tools have been those focused on extracting actionable information, such as SDOH data from clinical records, while the least useful have been in areas like risk prediction, which often lack actionable steps attached to them.
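
As an illustration of what post-deployment monitoring can look like in practice, here is a minimal sketch assuming a Python stack with pandas and scikit-learn and a table of scored predictions with hypothetical column names; the alert threshold is an assumption for the example, not a summit recommendation.

```python
"""Minimal sketch: recompute a discrimination metric (AUROC) on each month of
production data and flag months that fall below an agreed threshold."""
import pandas as pd
from sklearn.metrics import roc_auc_score

ALERT_THRESHOLD = 0.80  # hypothetical minimum acceptable AUROC

def monthly_auroc(scored: pd.DataFrame) -> pd.Series:
    """Expects columns: 'timestamp' (datetime), 'risk_score' (model output), 'outcome' (0/1)."""
    by_month = scored.set_index("timestamp").groupby(pd.Grouper(freq="MS"))
    return by_month.apply(
        lambda m: roc_auc_score(m["outcome"], m["risk_score"])
        if m["outcome"].nunique() == 2 else float("nan")  # skip months with only one outcome class
    )

def flag_degradation(auroc_by_month: pd.Series) -> pd.Series:
    """Return only the months whose AUROC fell below the agreed threshold."""
    return auroc_by_month[auroc_by_month < ALERT_THRESHOLD]
```

Publishing a trend like this to the clinicians who use the tool is one concrete way to show that the model keeps performing as expected after deployment.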

 

  3. Cross-Functional Collaboration and Stakeholder Involvement

Interdisciplinary collaboration was highlighted as a requirement for designing and building AI tools that work:

  • Bridging Disciplines: AI in healthcare can be successfully implemented when experts from medicine, engineering, and other fields collaborate. Collaboration by design (structuring teams and organizations so that collaboration is built in) was suggested as a way to bridge disciplines.
  • Humility and Curiosity: Successful collaboration requires the willingness to ask naive questions and learn from each other. Establishing a common language across disciplines is essential to foster understanding and problem-solving.
  • Stakeholder Involvement: Involving end users and the stakeholders who affect or are affected by AI tools, such as clinicians, patients, and caregivers, early in the development process ensures that AI tools are built to address real-world needs. Solutions that align with actual workflows and challenges are more likely to succeed.

 

  4. Challenges in AI Implementation and Adoption

Several challenges were discussed regarding the implementation and adoption of AI in healthcare:

  • Data Quality and Bias: The quality of the data used to train AI models is crucial. Implicit bias in the data and dataset shift (when the distribution of the data used to train a model differs from the distribution the model encounters in the real world) can severely affect model performance. Ongoing evaluation is necessary to detect and address these issues (a minimal shift-detection sketch follows this list).
  • Integrating AI into Workflows: To be widely adopted, AI tools must seamlessly integrate into existing clinical workflows. If workflows are unstable or chaotic, adding AI can exacerbate existing problems rather than solve them.
  • Scalability: Scalability emerged as a significant challenge in implementing AI in healthcare, particularly due to the need to process and manage massive amounts of data, break down existing data silos, and champion collaboration across disciplines.
  • Trust and Actionability: Even if a model works well technically, it must also lead to clear, actionable outcomes. Tools that fail to provide useful, actionable insights are less likely to be adopted by healthcare providers.
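
One common, simple way to watch for the dataset shift described above is the Population Stability Index (PSI), which compares how a feature was distributed in the training data with how it is distributed in recent production data. The sketch below is a generic illustration assuming a continuous feature and NumPy; the bin count and the rule-of-thumb alert level are conventions, not requirements from the summit.

```python
"""Minimal sketch: Population Stability Index (PSI) for one continuous feature,
comparing training-time and production-time distributions."""
import numpy as np

def population_stability_index(train_values: np.ndarray,
                               prod_values: np.ndarray,
                               bins: int = 10) -> float:
    """PSI = sum over bins of (prod% - train%) * ln(prod% / train%)."""
    # Bin edges come from the training distribution's quantiles.
    edges = np.quantile(train_values, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch values outside the training range
    train_pct = np.histogram(train_values, bins=edges)[0] / len(train_values)
    prod_pct = np.histogram(prod_values, bins=edges)[0] / len(prod_values)
    train_pct = np.clip(train_pct, 1e-6, None)     # avoid division by zero / log(0)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - train_pct) * np.log(prod_pct / train_pct)))

# A PSI above roughly 0.2 is a common rule-of-thumb signal that the feature has
# shifted enough to warrant re-evaluating the model.
```

Running a check like this on each important feature, alongside the outcome-level monitoring shown earlier, helps catch shift before it quietly erodes model performance.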

 

The three days of presentations and discussions among cross-disciplinary groups of experts revealed that while AI offers exciting potential, it needs to be purpose-built, action-driven, and trust-centered to genuinely improve healthcare outcomes. Despite the identified challenges, the overall sentiment was optimistic.

AI is here and will shape the future of healthcare. Its role is not about replacing humans, but about eliminating defects, errors, and waste while augmenting human cognition to enable new insights and discoveries that will drive personalized treatments and better patient outcomes.

Download Globant’s latest report to uncover how the convergence of data analytics, AI, and user-centric innovations is reshaping traditional health management approaches and creating exceptional patient experiences.

