Sunday, October 26, 2025

AI in Decision Making for User-Driven, Human-Centered Healthcare: Explainability and Trust

"Never doubt that a small group of thoughtful, committed individuals can change the world. In fact, it's the only thing that ever has." — attributed to Margaret Mead



A counterview to the above, in the link below, illustrates the broader meaning of cision and de-cision (expanded further down): https://www.ohiocitizen.org/about_that_margaret_mead_quotation


Introduction to our team and human-centered clinical decision making lab:

Our team consists of full-time educators and practitioners of clinical decision making, with a special interest in developing creative skills in learners and online users while tackling clinical decision making issues. We utilise current AI tools and descriptive models that not only create a case-based medical informatics ecosystem but also promote explainability and trust among humans.

 

Our team is currently working on:

Among other things, such as daily clinical decision making with our patients, we are writing a book on the same topic, titled "Troubleshooting Humans in the Age of AI: A Playbook for Multidisciplinary, Participatory Medical Cognition." We are looking for potential chapter authors and elective trainees who will enjoy this journey of clinical decision making to troubleshoot humans in our blended learning ecosystem in the coming months.



Case-based medical informatics descriptive database:





Session learning goals:

Short term: a brief, hands-on, interactive exposure to descriptive models of clinical decision making in the pre-AI and current AI eras, and the challenges posed to explainability and trust in blended learning ecosystems.

More here: 




Long term: collaborate to potentially develop a special group of human-centered healthcare managers trained in AI-driven clinical decision making that takes into account an individual patient's internal medicine (anatomy, physiology, pathology, organ systems) and external (organisational, public health, environmental exposure and community medicine) drivers to optimise their overall healthcare outcomes, using tools such as creative critical realist heutagogy.

More here: 


Narketpally syn (critical realist heutagogy): https://pubmed.ncbi.nlm.nih.gov/40674544/



Session learning objectives:

1) Demonstrate a hands-on approach to AI-driven clinical decision making, utilising past cases as well as cases from the floor shared by the audience

2) Discuss briefly the evolution of clinical decision making pre- and post-AI

3) Discuss issues around bridging the inferential gap between multiple stakeholders in clinical decision making, such as patients, their relatives, health professionals and policy makers, through AI-driven explainability

4) Discuss how to gain and maintain trust between multiple stakeholders through judicious optimisation of intelligence-driven explainability.





1) Hands-on demonstration:








Image above: the current AI-driven clinical decision making workflow, as well as the user interface and medical cognition engine.

Rhetoric: there is an unfortunate dimensionality reduction, leading to what is also known as academic flatlands, that hides multiple hidden layers, which can pose issues with explainability at the machine level.

All this is a product of evolution, right from the primordial asynchronous intelligence-driven communication of cavemen, aka primordial AI (as discussed further ahead)!
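To make the pictured workflow a little more concrete, here is a minimal code sketch of an AI-assisted, human-in-the-loop clinical decision making loop. Everything in it is illustrative: the `CaseRecord` fields and the `ask_llm` placeholder are assumptions made for this sketch, not our actual user interface or cognition engine.

```python
# Minimal, illustrative sketch only: a human-in-the-loop, AI-assisted
# clinical decision making workflow. `ask_llm` is a hypothetical
# placeholder for whatever LLM client is actually used.

from dataclasses import dataclass


@dataclass
class CaseRecord:
    """A de-identified, case-based record in the informatics ecosystem."""
    narrative: str             # the patient's de-identified history
    ai_summary: str = ""       # machine layer output
    clinician_notes: str = ""  # human layer output
    decision: str = ""         # recorded only after human review


def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call (the machine cognition engine)."""
    raise NotImplementedError("Plug in an LLM client of your choice here.")


def machine_layer(case: CaseRecord) -> CaseRecord:
    """Machine layer: draft an explainable summary, never a final decision."""
    case.ai_summary = ask_llm(
        "Summarise this de-identified case and list the differentials "
        "with the evidence for each:\n" + case.narrative
    )
    return case


def human_layer(case: CaseRecord, clinician_notes: str, decision: str) -> CaseRecord:
    """Human layer: the clinician and the patient keep the final say."""
    case.clinician_notes = clinician_notes
    case.decision = decision  # only a human closes the loop
    return case
```

The point of the sketch is simply that the machine layer proposes while the human layer disposes, which is where the explainability and trust questions discussed below arise.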


A layered approach to clinical decision making: 

"We are all apprentices in a craft where no one ever becomes a master." — Ernest Hemingway, The Wild Years

Human, Scientific and Machine layers 

Anatomy of cognitive layers:





RUAAEC (Remember, Understand, Apply, Analyse, Evaluate, Create)


More here on the Bloom game of learning cognition: https://sites.pitt.edu/~super1/lecture/lec54091/001.htm
Bloom's taxonomy image copyright as well as an enlightening write-up: https://www.niallmcnulty.com/2019/12/introduction-to-blooms-




AI and human looped clinical decision making:


The human layer and UX interface

  • "Sometimes the smallest things take the most room in your heart." —
  • Winnie the Pooh

1) AI LLMs as simply machine translators in our multilingual workflow (see the sketch below)

2) The human layer of explainability, distinct from the analytical and machine layers
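As a concrete illustration of point 1 above, here is a hedged sketch of using an LLM purely as a machine translator inside the multilingual consent workflow described next. The `ask_llm` function is again only an assumed placeholder, and the output is always meant to be checked by a human (the manual backup mentioned below).

```python
# Illustrative sketch: an LLM used purely as a machine translator
# in the multilingual consent workflow. `ask_llm` is an assumed
# placeholder, not a real library call.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; plug in any client you trust."""
    raise NotImplementedError


def translate_consent_form(consent_text_en: str, target_language: str) -> str:
    """Single-click translation of the informed patient consent form.

    The translated text is always shown to a human reviewer (for
    example the patient's advocate) before it is used.
    """
    prompt = (
        f"Translate the following informed patient consent form into {target_language}, "
        "keeping the meaning exact and the tone plain:\n\n" + consent_text_en
    )
    return ask_llm(prompt)
```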



Samples: 






Consent form: machine translation adds a feature to our informed patient consent form that allows single-click translation into any global language!


Let me know if the Konkani seems right!

In case it's not, we have a manual backup here, used routinely for the majority of our patients:


The above is one layer of explainability, raising awareness about patient rights, including the right to privacy.






Above is a layer of manual human-to-human translation, as well as the intermittent problems of an otherwise complex patient with comorbidities (to be discussed again in the next layer of AI-driven analysis).






Again, this patient does have comorbidities related to his metabolic syndrome, such as heart failure, but intermittent, simple human requirements for explainability surface in his daily sharing through his advocate, such as the one here that manifests in his sleep, and Meta AI helps not just to translate it but also to explain it well.






This patient is an example of how simple human explainability, backed by scientific evidence, can provide a new lease of life to a patient with myocardial infarction, who travelled the long distance to our college just for that explainability to strengthen his prior trust in us!

Past published work on a similar patient:

Translation in action:








Audience cases 

2) Evolution of clinical decision making pre and post AI

"Picture of brain ANN" 

What is intelligence?

Rhetoric:

There is nothing called artificial or real intelligence.

What exists is just "intelligence", embedded in all manifest beings on this Earth, right from apparently inanimate plants to animate humans, including the embedded artificial systems that humans currently appear to be developing because of their own intelligence deficiency.

This human intelligence deficiency (requirement) is primarily one of memory, because of which humans have been cursed with their penchant for dimensionality reduction, resulting in asynchronous intelligence, aka primordial AI.

This is the reason modern humans have become more and more academic, as academia is largely about communicating and learning through a two-dimensional interface, be it early cave paintings or current-day x-y axis mobile screens!

More here:


What is cognition?



What is decision?

Picture "Cutting a vegetable with a knife"

Cision, the process of cutting, originated from the Proto-Indo-European *skh1-ie, *skh1-io, which is related to the Proto-Indo-European *kh₂eyd- (to cut, hew), whence also Latin caedō (to cut, strike), aka खिद् (khid) in Sanskrit.

It looks like Europeans added an s to the beginning of khid in Sanskrit, aka caed in Latin, and then subsequently removed the d when they used cutting as a metaphor for science!

The word science is derived from the Proto-Indo-European *skh1-ie, *skh1-io, meaning "to incise": https://en.m.wikipedia.org/wiki/Science

So imagine some of the cutting instruments you know of and check out their names: 


Verbs: incise, size up, cut to size 



Needless to say, not everything ending with cise means cutting; the words "exercise" and "exorcise", for instance, apparently have no connection to cutting: https://www.etymonline.com/word/exorcise



Image with CC licence: https://commons.m.wikimedia.org/wiki/File:Sickle_and_throwing_knife_at_Manchester_Museum.jpg#mw-jump-to-license

And the image of the sickle, and thereby of science, is contained in an important writing tool for science! The question mark is a very important instrument of scientific scepticism:



Creative Commons licence: https://en.m.wikipedia.org/wiki/Question_mark#/media/File%3AQuestion_opening-closing.svg


To reach a de-cision is to stop cutting and stop questioning further! As in de-addiction or de-escalation, which mean to stop addiction or stop escalation!

In other words, going with the cutting-edge pictorial cooking analogy above, one simply moves to the next phase of cooking once the cutting of its ingredients is over.



Decision etymology:

https://www.reddit.com/r/etymology/comments/2eujw6/where_did_the_root_cision_come_from_decision/


What was clinical decision making like in the pre-AI-LLM era?

Video demo of our patient-centered clinical decision making lab:


https://youtu.be/ZKoljY2UBHI?si=UYUfpTD7JGOgoQhA


https://youtu.be/xvE5b8Xk3vM?si=dqDlPQgA_EP2L7zT

Video demo of a single patient's decision making: 


https://youtu.be/csF8VQbOYRo?si=mlbHXIyD5A-29uqf


Explainability, trust and layers of clinical decision making in the pre- and current-AI-LLM eras:

EBM (evidence-based medicine) layer: this is the layer our clinical decision making lab is largely engaged in, although the other two layers are no less important.

We have already shared something around those in our previous demos, particularly in the two video links shared above.

Human layer: this is the most important layer, where clinical decision making actually happens at multiple human stakeholder levels:

Here's a recent example where we quickly begin with explainability, both human and AI, as well as the limits of scientific explainability and its effect on human trust.



Amazon "Help me Decide"!


👆 Is quantitative AI-driven clinical decision making already here?

Machine layer
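Putting the three layers together, the sketch below shows one simple way a single case's explainability notes could be organised per layer. The field names and example notes are purely illustrative assumptions, not our actual schema.

```python
# Illustrative sketch: organising a case's explainability notes
# across the human, EBM and machine layers named above.
# Field names and example notes are assumptions, not a real schema.

from dataclasses import dataclass, field
from typing import List


@dataclass
class LayeredExplanation:
    human_layer: List[str] = field(default_factory=list)    # stakeholder conversations, translations
    ebm_layer: List[str] = field(default_factory=list)      # appraised evidence behind each suggestion
    machine_layer: List[str] = field(default_factory=list)  # prompts, model outputs, known limits


def add_note(record: LayeredExplanation, layer: str, note: str) -> None:
    """Append an explainability note to the named layer."""
    getattr(record, layer).append(note)


# Example: logging what was (or was not) explainable at each layer.
case = LayeredExplanation()
add_note(case, "human_layer", "Consent form discussed with the patient's advocate in their own language.")
add_note(case, "ebm_layer", "Suggestion traced back to the appraised evidence shared with the patient.")
add_note(case, "machine_layer", "LLM summary reviewed; its hidden layers remain uninspectable.")
```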

