Saturday, November 1, 2025

Visual 10: Explainability, trust and layers of clinical decision making in the pre-AI and current AI-LLM eras

EBM layer: This is the layer our clinical decision making lab is largely engaged in, although the other two layers are no less important.

We have already shared some material on those layers in our previous demos, particularly in the two video links shared above.

Human layer: This is the most important layer, where clinical decision making actually happens at multiple human stakeholder levels.

Below are recent examples of the limits of scientific explainability and its effect on human trust.


This was human-forwarded through WhatsApp and possibly AI-generated. So should we call it human-generated with AI in the loop, or AI-generated with a human in the loop?

Well, as mentioned before here (https://medicinedepartment.blogspot.com/2025/11/visual-4-what-is-intelligence-gim.html?m=0), all human intelligence is AI!


How much trust building can one achieve through human clinical decision making with AI in the loop?

Human mistrust due to persistent uncertainty from scientifically limited explainability?


Images of subclinical hypothyroidism patient data:

Full human trust in spite of persistent uncertainty from scientifically limited explainability:

Can AI act as a guardrail against human mistrust due to lack of communication and explainability?

All the real-patient demonstrations above take a closer look at individual patient events producing their unique event-data trajectories. These are perhaps simpler, in terms of explainability and interpretability, than what might have happened had we tried to interconnect many more individuals with common attributes and predict individual trajectories from past similar individual trajectories, as sketched below.
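To make that contrast concrete, here is a minimal, hypothetical sketch (not our lab's workflow) of what predicting an individual trajectory from past similar individuals could look like: nearest-neighbour matching over shared attributes, using scikit-learn's NearestNeighbors. The feature names (age, TSH, free T4), the values, and the outcome labels are all invented for illustration.

```python
# Hypothetical sketch: predicting an individual's trajectory from the
# trajectories of the most similar past individuals. All attributes,
# values and outcome labels below are invented for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row: [age in years, TSH in mIU/L, free T4 in ng/dL] (synthetic;
# in practice the features would be standardised first, skipped here)
past_patients = np.array([
    [45, 8.2, 0.90],
    [60, 5.1, 1.30],
    [52, 9.0, 1.00],
    [38, 4.2, 1.40],
])
# The observed trajectory of each past patient (hypothetical labels)
past_trajectories = [
    "progressed to overt hypothyroidism",
    "remained subclinical",
    "progressed to overt hypothyroidism",
    "normalised without treatment",
]

new_patient = np.array([[50, 8.5, 0.95]])

# Find the two most similar past patients by Euclidean distance
nn = NearestNeighbors(n_neighbors=2).fit(past_patients)
distances, indices = nn.kneighbors(new_patient)

# The "prediction" borrows the neighbours' trajectories; explaining why
# these particular neighbours should be decisive is harder than reading
# a single patient's own event-data trajectory.
for d, i in zip(distances[0], indices[0]):
    print(f"distance {d:.2f}: {past_trajectories[i]}")
```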

Using the language of what we have labeled the "machine layer":

"While random forests often achieve higher accuracy than a single decision tree, they sacrifice the intrinsic interpretability of decision trees. Decision trees are among a fairly small family of machine learning models that are easily interpretable along with linear modelsrule-based models, and attention-based models. This interpretability is one of the main advantages of decision trees. It allows developers to confirm that the model has learned realistic information from the data and allows end-users to have trust and confidence in the decisions made by the model.[39][3] For example, following the path that a decision tree takes to make its decision is quite trivial, but following the paths of tens or hundreds of trees is much harder."


Summary of current clinical decision making workflow:


So What? SWOT 


Strengths: Human-centred management, creativity

Weaknesses: User interface: asynchronous, academic flatlands

Opportunities: Prelude to the symphony of Singularity

Threats: TLDR, DPDP
