Saturday, August 30, 2025

Lecture module: The current role of AI in diagnostics and decision support, and the tech-singularity-driven merger of System 1 and System 2 cognition

Learning objectives:

1) Clarifying terminology: diagnostics (the process) vs diagnosis (the product)

2) Decision making doesn't end at diagnosis: resolving therapeutic uncertainty (the final product)

3) Introduction to a real-world, 24x7, AI-LLM-user-driven diagnostic and clinical decision support workflow, with its past and current background

4) The AI- and human-driven user vs developer perspectives in clinical decision making


Introduction:

Diagnostics (what the process is):

Conversational transcripts from an interaction between a developer and a user (also to understand the differences in their perspectives)

[30/08, 22:06]hu1: Our wearable device captures a comprehensive set of physiological parameters, including:

Cardiac
4 Lead ECG
Pulse Rate
HR / HRV
Stroke Volume
Cardiac Output
Cardiac Index
Pulse Pressure
Arterial Mean Pressure (MAP)
SVR
NiBP (under validation)

Respiratory
Pleth-SpO₂
Respiratory Rate
Thoracic Impedance

Other
Body Temperature
Accelerometer (patient position)

All data is collected with *medical-grade accuracy*, seamlessly streamed to a bedside monitor (tablet) and securely transmitted to the cloud for *AI & algorithmic computation*. This enables near real-time alerts with *Early Warning Score (EWS)*, *Sepsis prediction*, and *Heart Failure (HF)* detection.

We are actively advancing toward *CDSCO* regulatory clearance
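The *Early Warning Score* mentioned above can be made concrete with a minimal sketch. This is not the vendor's actual algorithm; it assumes simplified NEWS2-style scoring bands and omits the consciousness-level and supplemental-oxygen components:

```python
def score_component(value, bands):
    """Return the points for the first (low, high, points) band that matches.

    A value falling outside every listed band gets the maximum
    single-parameter score of 3, as in NEWS2-style charts.
    """
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3

def early_warning_score(resp_rate, spo2, temp_c, sys_bp, heart_rate):
    """Aggregate a simplified EWS from five of the streamed vital signs."""
    return (
        score_component(resp_rate, [(12, 20, 0), (9, 11, 1), (21, 24, 2)])
        + score_component(spo2, [(96, 100, 0), (94, 95, 1), (92, 93, 2)])
        + score_component(temp_c, [(36.1, 38.0, 0), (35.1, 36.0, 1),
                                   (38.1, 39.0, 1), (39.1, 42.0, 2)])
        + score_component(sys_bp, [(111, 219, 0), (101, 110, 1), (91, 100, 2)])
        + score_component(heart_rate, [(51, 90, 0), (41, 50, 1),
                                       (91, 110, 1), (111, 130, 2)])
    )
```

A normal set of vitals scores 0, while a deteriorating patient accumulates points across several parameters; a rising aggregate score is what drives the near real-time alerts described above.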


[31/08, 12:36]hu2: Medical grade accuracy is good but one needs to keep the broader process of diagnostics in mind along with the initial product (diagnosis) and final product (therapeutic outcome).

These data points are currently given so much weightage in diagnostics that the meaning of the term "diagnostics" has narrowed to something limited to lab- or imaging-based parameters, and this is something that needs to be debated.

Diagnostics is a collage of multimodal data points tied into an ontology:

It begins with patient history, i.e. life-events data points that can be discrete or continuous.

Trying to understand patient requirements from life events generated data can be analogous to prompting an AI LLM!


The other milestones in a diagnostic workflow process:

Clinical examination

Physician- or machine-driven objective vs patient-driven subjective data points

Labs

X-ray

MRI

How much weightage can one give to each in the decision making workflow?


Diagnostics (what it isn't)!

Throwing a few lab reports (currently masquerading as objective data but actually suboptimal data) at a human posing as a physician agent and expecting him or her to create magic with suboptimal data!

Your diagnostics and decision support will be as good as the data you share! Beware of GIGO!

The role of laboratory vs clinical encounter 

Methods: 

Clinical problem solving workflows 

Medical cognition and procedural workflows toward diagnostic and therapeutic outcomes 

Layers of diagnosis:

Anatomical

Etiologic 

Past published discussion on critical realist heutagogy (CRH) and AI-driven workflows:

Narketpally syn as a case-based blended learning ecosystem (CBBLE) and its PaJR (pajr.in) platform, with a detailed description of CRH and our medical cognition workflow



CBBLE as an early progenitor of the current Narketpally syn: 


Current regular methodological uploads archived:

2024:


2025:


2020:


Results:

Recent real patient outcomes:

An individual patient problem necessitating diagnostic and clinical decision support (human as well as AI driven): https://pajrcasereporter.blogspot.com/2025/08/56m-traumatic-pain-in-left-great-toe-1.html?m=1

More individual AI-driven real-patient diagnostic and decision-making processes are regularly updated here: https://pajrcasereporter.blogspot.com/?m=1 and the past 5000 records are here: https://medicinedepartment.blogspot.com/2022/02/?m=1

Discussion: 

Decision support toward diagnosis and therapy:

The pandemic of over-testing and overtreatment resulting from misinformation around diagnostics:

The process of medical cognition (metacognition: thinking about thinking, to integrate medical education with practice):

System 1 cognition: the default fast-thinking mode and its evolution in the dyadic doctor-patient relationship


to

System 2, the slow-thinking mode, with the added advantage of collective sharing (the blessing of dimensionality reduction) and the disadvantage of missing data (the curse of dimensionality reduction)


AI and System 2: beginning with primordial asynchronous intelligence up to current-day automation, unleashing an era of pre-tech-singularity Orwellian data capture and sharing that will eventually merge with System 1 cognition with further 









Feedback conversational learning loops:

[01/09, 06:17]hu1: Nice!
Have you presented an innovation or analysed issues with deployment and usage of any digital health tool?


[01/09, 08:08]hu2: Both perhaps.

I began by clarifying current misperceptions around the labels diagnostics vs diagnosis and then proceeded to describe our team-based learning workflow (not sure if it can be called an innovation, but it's a variation from traditional dyadic workflows), as described by us a decade earlier here: https://pmc.ncbi.nlm.nih.gov/articles/PMC4587042/ and recently here: https://pubmed.ncbi.nlm.nih.gov/40674544/, where we use critical realist heutagogy driven by AI LLM users as well as non-AI users in our CBBLE (case-based blended learning ecosystem).

We discussed the current issues with its usage from both the AI users' and the non-AI users' perspectives.


[01/09, 09:20] hu1: Thank you

Do they all need facebook accounts and internet access?


[01/09, 09:30] hu2 : Not Facebook currently but WhatsApp.

Previously I was a laptop user as far as mobile computing went, and our ecosystem was on Facebook, which had the advantage of individual threaded conversations in groups but also had its own privacy issues. When I found that the majority of my patients were WhatsApp users, I ditched the laptop (in those days WhatsApp wasn't easily accessible from a laptop) to become a full-time mobile phone user and expand our patient user base.

[01/09, 09:39] hu3: Fantastic 👍

[01/09, 10:43] hu4: Thank you for sharing it. Very encouraging approach.


[01/09, 10:51] hu4: *My critique developed with Gemini Pro:*

The current lecture module on AI in diagnostics requires a complete pedagogical overhaul to become an effective educational tool. To fix this, the module must be redesigned into a highly granular, interactive learning experience grounded in specific, actionable frameworks.

*Phase 1: Foundational Redesign and Reorientation*

The first step is to transform the module's core content into a coherent, easily digestible narrative. This involves moving beyond the raw conversational transcripts and reorganizing the material into a clear, progressive structure.

*A. Clearer Core Concepts through Analogies:*

The module needs to start with crystal-clear definitions. Complex ideas should be introduced with simple, relatable analogies.

* *Diagnostics vs. Diagnosis:* Explain this with a cooking analogy. *Diagnostics* is the entire *process of baking a cake*—gathering ingredients, mixing, setting the oven temperature, and monitoring the bake. *Diagnosis* is the *final product*—the finished cake. The ultimate goal, or *therapeutic uncertainty*, is whether the cake will be delicious and meet the needs of the person eating it. This analogy makes the distinction concrete and memorable.

* *Multimodal Data:* Break down the concept of multimodal data into distinct, understandable categories with real-world examples.

    * *Level 1: The Raw Data:* This is the most basic layer, comprising objective metrics like lab results, ECG readings, and imaging scans.

    * *Level 2: The User-Driven Data:* This is the patient's narrative, their subjective symptoms, and their personal history, which the module rightly calls "life events data." This layer is crucial for a complete picture.

    * *Level 3: The Clinician-Driven Data:* This includes the physical examination findings, the physician's observations, and their professional judgment.
    The module must explicitly show how all three layers are essential for a complete diagnosis.
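The three layers above could be held together in a single record, echoing the earlier idea of diagnostics as a collage of multimodal data points tied into an ontology. This is an illustrative sketch only; the class and field names are assumptions, not the module's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class DiagnosticRecord:
    """One patient's multimodal diagnostic picture, split into three layers."""
    raw_data: dict = field(default_factory=dict)           # Level 1: labs, ECG, imaging
    life_events: list = field(default_factory=list)        # Level 2: patient narrative
    clinical_findings: dict = field(default_factory=dict)  # Level 3: exam, judgment

    def completeness(self) -> float:
        """Fraction of the three layers that hold any data at all.

        A workup built on one layer alone gives only a partial,
        distorted view of the patient's reality.
        """
        layers = (self.raw_data, self.life_events, self.clinical_findings)
        return sum(bool(layer) for layer in layers) / len(layers)
```

A record populated only with lab values would report a completeness of one third, making the "all three layers are essential" point measurable rather than rhetorical.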

*B. Redefining Learning Outcomes with Dual Taxonomies:*

The current objectives are too broad. They need to be rephrased as actionable, measurable goals using two complementary frameworks: *Revised Digital Bloom’s Taxonomy* and *Fink’s Taxonomy of Significant Learning.*

* *Revised Bloom's:* This provides the *cognitive scaffolding* for the learning journey.
    * *Remembering/Understanding:* "Students will be able to *define* the difference between diagnostics and diagnosis using the provided analogy."
    * *Applying/Analyzing:* "Given a provided patient case, students will *analyze* and *classify* the data points into objective, subjective, and life events categories."
    * *Evaluating/Creating:* "Students will *evaluate* a hypothetical AI diagnostic report and *critique* its potential for bias, proposing a more holistic diagnostic workflow."
* *Fink's Taxonomy:* This adds the *affective and human dimensions* crucial for medical education.
    * *Application & Integration:* "Students will *integrate* their understanding of AI and cognitive bias to *develop* a strategy for a collaborative human-AI diagnostic process."
    * *Human Dimension & Caring:* "Through a role-playing exercise, students will *demonstrate empathy* and a *commitment to ethical care* when discussing a sensitive diagnosis with a virtual patient."

*Phase 2: Building Interactive, Heutagogical Components*

This phase focuses on making the module's core philosophy—*critical realist heutagogy (CRH)*—a tangible reality. The goal is to move beyond simply referencing a methodology to actively implementing it.

*A. Turning Abstract Concepts into Concrete Experiences:*
Abstract ideas like cognitive bias and systemic distortion must be illustrated with concrete, engaging examples.

* *The Mercator Projection Analogy:* Don't just mention it; use it as a central teaching tool.
    * *Explain the Distortion:* Acknowledge that the Mercator map is useful for a specific purpose (navigation) but grossly *distorts reality* by making landmasses near the poles appear much larger than they are. Use the "flattened orange peel" analogy to explain this visual distortion. 
    * *Draw the Clinical Parallel:* Explain that a diagnostic model based on a limited data set—for example, only lab values—is like a Mercator map. It provides a useful, but *distorted view of the patient's reality.* This limited view can lead to a diagnosis that over-emphasizes one metric while missing the patient's full story.
    * *The Solution:* The comprehensive, multimodal approach to diagnostics is the *3D globe*—a complete, accurate representation of the patient's health.
* *Cognitive Bias Simulation:* Create a granular, interactive exercise. Present a virtual patient with a clear "red herring" symptom that would trigger a common cognitive bias, like *anchoring bias* (over-relying on the first piece of information). An AI chatbot can act as a Socratic challenger, prompting the student with questions like, "What other possibilities could explain this symptom? Have you considered the patient's social history?"
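The Socratic-challenger behaviour described above can be sketched as a few trigger rules. The prompts and trigger conditions here are illustrative assumptions, not a real chatbot implementation:

```python
# Canned Socratic prompts keyed by the cognitive bias they are meant to surface.
# Both the prompts and the triggering rules below are hypothetical examples.
SOCRATIC_PROMPTS = {
    "anchoring": "What other possibilities could explain this symptom? "
                 "Have you considered the patient's social history?",
    "premature_closure": "What finding, if present, would rule your diagnosis out?",
}

def challenge(hypotheses):
    """Reply to a student's differential with questions, never with answers."""
    prompts = []
    if len(hypotheses) == 1:  # a single early hypothesis suggests anchoring
        prompts.append(SOCRATIC_PROMPTS["anchoring"])
        prompts.append(SOCRATIC_PROMPTS["premature_closure"])
    return prompts or ["Rank your differentials and justify the ordering."]
```

The design choice is that the tool never grades or corrects the diagnosis itself; it only widens the student's search, which is what distinguishes a Socratic challenger from an answer engine.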

*B. Implementing the PaJR/CBBLE Workflow:*
The module must explicitly guide students through the *PaJR (Patient Journey Record)* workflow, which is the practical application of CRH.

* *Step 1: The Initial Case Prompt.* Students are presented with a de-identified case in a *PaJR format*. They must act as the "user," deciding which data to input into an AI system.
* *Step 2: Collaborative Analysis.* Students use a platform (simulated or real) to share their initial diagnostic hypotheses with peers. This peer-to-peer feedback is a cornerstone of heutagogy, where learning is self-determined and collaborative.
* *Step 3: AI as a Metacognitive Partner.* The AI system does not give a direct diagnosis. Instead, it provides a "metacognition report" on the student's decision-making process. The report could highlight:
    * *Missed data points:* "You did not consider the patient's socioeconomic status, which is a key social determinant of their health."
    * *Potential biases:* "Your rapid diagnosis suggests premature closure. What steps could you take to avoid this bias?"
    This feedback loop teaches students *how to learn* by helping them think about their own thinking.

*Phase 3: Final Touches for Implementation and Assessment*

The last phase ensures the module is fully functional and ready for deployment.

* *Portfolio-Based Assessment:* Move away from a single final exam. A *digital portfolio* allows students to demonstrate their learning journey, aligning with the heutagogical philosophy. The portfolio could include:
    * A completed PaJR case study.
    * A reflection on their experience with the virtual patient and the AI chatbot's feedback.
    * A critique of a recent article on AI bias in healthcare, using the module's new frameworks.
* *Sufficient Context for Educators:* The module should include a detailed "Teacher's Guide" that explains the pedagogical basis, the CRH/PaJR workflow, and how to facilitate the collaborative learning process. This ensures that the module isn't just a document for students but a comprehensive tool for educators.
