From: redpointai

The application of Artificial Intelligence (AI) in healthcare, particularly in virtual care settings and the development of AI doctors, presents unique challenges despite its immense potential for efficiency and cost reduction [00:04:37]. While AI can significantly impact healthcare by transforming informal language into formal data (and vice-versa) [00:01:40], its full integration into clinical practice faces several hurdles.

Limitations of AI in Clinical Use Cases

Large Language Models (LLMs) are incredibly adept at translating between formal and informal language, which is crucial in healthcare given the mix of highly formalized codes (ICD-10, CPT) and human conversations (patient-provider dialogue, medical record notes) [00:02:02]. However, implementing AI in healthcare systems for clinical applications currently faces several limitations:

  • Subtle Contextual Knowledge

    • Human-written medical summaries often contain subtle contextual knowledge that LLMs cannot possess [00:07:17]. This includes a provider remembering previous conversations with a patient that aren’t explicitly documented [00:07:28].
    • Geographical or environmental factors, such as local transportation options or daily weather, are critical contextual pieces known to human care agents but not readily available to LLMs unless specifically provided [00:09:06].
    • To improve LLM performance in these “wide-open” clinical scenarios, it’s essential to enhance the LLM’s “horizon of knowledge” by feeding it more comprehensive and relevant data [00:07:53].
  • Unclean Inputs and Outputs

    • Unlike administrative tasks with clean, structured inputs and outputs (e.g., claims processing) [00:06:14], clinical use cases often involve less standardized data [00:06:43]. This “unfair playing field” makes it challenging for LLMs to perform accurately [00:07:41].
  • False Positives in Medical Extraction

    • LLMs can struggle with medical concepts whose everyday meaning differs from their formal medical definition [00:33:01]. For example, "post-traumatic injury" is a common source of false positives when an LLM decides on care authorization: the model's training data reflects the broader, colloquial sense of the term rather than the specific, regulated definition used in utilization management [00:32:00]. This issue can be mitigated by having the LLM generate its own self-consistency questionnaires and by breaking complex questions down into sub-prompts [00:33:38].

Specific Challenges for AI Doctors

While the idea of an AI doctor is promising given the algorithmic nature of medical knowledge [00:57:05], several practical issues remain:

  • Safety and Direct Patient Interaction

    • Safety is one of the primary challenges in deploying clinical AI [00:57:42]. It is currently very difficult to let an LLM talk directly to an end user because of the risk of hallucinations or bias [00:04:49], [00:57:44]. Current solutions keep a "human in the loop": a doctor or care guide reviews AI-generated summaries and explanations before they reach the patient [00:50:57].
  • Necessity of Physical Interaction

    • Although an estimated two-thirds of medical claims could be handled in a virtual setting [00:58:18], certain procedures and examinations require in-person interaction (e.g., a foot exam for a diabetic patient) [00:58:31].
    • This requirement for physical interaction leads to “leakage” from virtual care [00:59:31]. As long as lab tests and hands-on physical assessments cannot be conducted virtually, AI cannot entirely replace human physicians or fundamentally transform the healthcare system [00:59:36].
  • Business Model Incentives

    • A significant systemic challenge is the lack of incentive for large health systems to shift towards lower-cost virtual care channels [00:59:50]. Such a shift could lead to pressure from insurance companies and the government to reduce reimbursement costs, potentially impacting capacity [01:00:06].
    • While insurers are well-positioned to deploy automated virtual primary care, they often lack sufficient member engagement to do so effectively [01:00:17].

Current State of AI in Clinical Contexts

Currently, clinical chatbots are considered “overhyped” compared to their actual capabilities and immediate applicability [01:00:46]. However, advancements in voice outputs are seen as “underhyped” and offer significant potential for progress, especially in non-clinical contexts [01:00:56].

Despite these challenges and advancements in AI technology, the goal remains to harness AI for its transformative potential in healthcare, recognizing that administrative use cases currently yield faster, more tangible results than complex clinical applications [00:05:08].