Wednesday, November 5, 2025

Simplified textual content for the flipped classroom session on "AI in Decision Making for User-Driven, Human-Centered Healthcare: Explainability and Trust".

Introduction


Artificial intelligence is changing how decisions are made in healthcare. The challenge is no longer whether machines can process data, but how humans and machines can think together in ways that remain explainable, ethical, and trustworthy. Decision-making in healthcare has always been about more than logic; it is a synthesis of evidence, experience, empathy, and context. As AI systems become embedded in clinical practice, this synthesis is being tested - and redefined.

Evolution of clinical decision making 

pre and post AI





Dual process theory of cognition

Decision-making can be understood as a process that ends in resolution. A simple analogy captures this well: when cutting vegetables, the act continues until a decision is made to stop - that stopping point marks the end of questioning. In the same way, clinical decision-making involves continuous sense-making until a point of closure is reached. What AI alters is not the decision itself, but the way we arrive at it - the flow of cognition that precedes closure.

Human, Scientific and Machine layers:


Anatomy of cognitive layers:






To understand this transformation, decision-making can be visualized as composed of three interconnected layers: the human layer, the scientific layer, and the machine layer.

The human layer represents the experiential and relational aspects of decision-making. It includes intuition, empathy, ethical judgment, and the trust shared between clinicians and patients. This layer gives meaning to data. It is also where uncertainty is tolerated - something machines are poor at. Human decision-making values narrative, context, and the ability to reason under ambiguity.

The scientific or analytical layer provides the structured reasoning behind decisions. It draws upon data, evidence-based methods, probabilities, and statistical reasoning. Here, concepts like sensitivity, specificity, and risk-benefit trade-offs guide judgment. This layer ensures that decisions are not only humane but also verifiable and reproducible.
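The interplay of sensitivity, specificity and prevalence described above can be made concrete with a small worked sketch. The numbers below are hypothetical, chosen only to illustrate how Bayes' theorem turns test characteristics into a post-test probability:

```python
# Illustrative sketch: how sensitivity, specificity and prevalence
# combine into a post-test probability via Bayes' theorem.
# All numbers are hypothetical, for demonstration only.

def post_test_probability(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease given a positive test result."""
    true_pos = prevalence * sensitivity              # P(disease) * P(test+ | disease)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(no disease) * P(test+ | no disease)
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    # A test with 90% sensitivity and 95% specificity,
    # applied where disease prevalence is 2%.
    p = post_test_probability(prevalence=0.02, sensitivity=0.90, specificity=0.95)
    print(f"Post-test probability after a positive result: {p:.1%}")
```

Even a seemingly accurate test yields a surprisingly modest post-test probability at low prevalence, which is exactly the kind of counter-intuitive result this layer exists to make verifiable.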

The machine layer introduces computation. Algorithms, models, and large language systems extend the analytical capacity of the human mind. They identify patterns invisible to human perception and process information at scales previously unattainable. However, they do not “decide” in the human sense. They calculate possibilities. The decision still belongs to the clinician who interprets and contextualizes those outputs.

The critical junction where these layers meet is explainability. Explainability refers to the capacity to trace and understand how a decision - or a recommendation - emerged. It is the bridge between human reasoning and machine logic. When explainability is absent, trust collapses. Clinicians cannot defend an opaque result, patients cannot consent to what they cannot comprehend, and systems lose legitimacy. In healthcare, explainability is therefore not a technical feature; it is a moral and professional necessity.

Role of AI driven infographics in explainability:



In practice, AI-supported decisions already appear in clinical domains such as diabetes management, heart failure monitoring, and sepsis prediction. In each, machine learning models process large datasets to detect early warning signs or guide therapeutic choices. Yet the clinician’s judgment remains indispensable: deciding when to trust the signal, when to override it, and how to explain the outcome to a patient. The true advancement is not automation but augmentation — expanding the clinician’s capacity to reason with data.
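As a minimal sketch of the "trust the signal or override it" idea above, consider a rule-based early-warning check whose output carries its reasons. The vital-sign thresholds here are hypothetical illustrations, not a validated clinical score; the point is only that a flag accompanied by named reasons is one a clinician can inspect, defend, or override:

```python
# Minimal sketch of an explainable early-warning check.
# Thresholds are hypothetical, NOT a validated clinical score.

def early_warning(vitals: dict) -> tuple[bool, list[str]]:
    """Return (flag, reasons): flag fires when >= 2 criteria are met."""
    reasons = []
    if vitals.get("heart_rate", 0) > 110:
        reasons.append("tachycardia (HR > 110)")
    if vitals.get("resp_rate", 0) > 24:
        reasons.append("tachypnoea (RR > 24)")
    if vitals.get("systolic_bp", 200) < 90:
        reasons.append("hypotension (SBP < 90)")
    if vitals.get("temp_c", 37.0) > 38.3:
        reasons.append("fever (> 38.3 C)")
    return len(reasons) >= 2, reasons

flag, why = early_warning({"heart_rate": 118, "resp_rate": 28,
                           "systolic_bp": 102, "temp_c": 38.6})
print(flag, why)
```

Real sepsis models are of course statistical rather than rule-based, but the design principle is the same: the recommendation must expose enough of its reasoning to be contestable.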

Explainability and Trust:


Can AI act as a guardrail against human mistrust arising from lack of communication and explainability?


Trust is the other cornerstone. Trust in AI systems depends on their transparency, accountability, and consistency. A trustworthy AI is not one that is always right, but one that can show how it reached its conclusion. In human terms, this is equivalent to reasoning aloud - something machines must learn to approximate if they are to share responsibility in clinical settings.

Data privacy and regulation reinforce this ethical dimension. The Digital Personal Data Protection (DPDP) Act in India is one example of how societies are formalizing expectations of consent, ownership, and accountability in the age of AI. Healthcare decisions increasingly depend on data, and therefore on the governance of that data. Legal compliance and ethical stewardship become part of the decision-making architecture itself.

Visual communication plays an essential role in this new ecosystem. Infographics and decision maps help translate complexity into forms that can be understood by humans. They make algorithmic processes transparent, highlight the limits of data, and invite scrutiny. In this sense, visuals are not cosmetic; they are instruments of trust.

Machine layers:

The machine algorithm will see you now?



There are useful lessons in comparison with consumer technology. Recommendation systems - such as those used in e-commerce platforms that ask “Help Me Decide” - demonstrate how interfaces can guide decision processes through transparency and user control. While the stakes are higher in healthcare, the underlying design principle remains: decision support must empower the user, not replace their agency.

The strengths of human decision-making lie in creativity, contextual reasoning, and ethical judgment. The weaknesses lie in variability, bias, and fragmentation. AI offers opportunities for consistency, scale, and data-driven insight - but also introduces risks of overfitting, loss of accountability, and erosion of empathy. Balancing these forces requires systems designed for cooperation, not control.

Practical applications are already testing this balance. Large language models can assist in tasks like checking whether patient consent forms align with data protection norms, or de-identifying case timelines for research publication. Such uses show how AI can make routine decisions more reliable while keeping the clinician in charge of interpretation and responsibility.
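A deliberately simple sketch of the de-identification task mentioned above: rule-based scrubbing of a case timeline before sharing. The patterns and placeholder tags are illustrative assumptions; real de-identification requires validated tools and human review, with the clinician retaining final responsibility:

```python
# Hedged sketch: rule-based de-identification of a case timeline.
# Patterns and tags are illustrative assumptions only; real
# de-identification needs validated tools plus human review.
import re

PATTERNS = [
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),   # e.g. 05/11/2025
    (r"\b\d{10}\b", "[PHONE]"),                    # 10-digit phone numbers
    (r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]"),   # email addresses
    (r"\bMRN[:\s]*\d+\b", "[RECORD-ID]"),          # hypothetical record-number format
]

def deidentify(text: str) -> str:
    """Replace each matched identifier with its placeholder tag."""
    for pattern, tag in PATTERNS:
        text = re.sub(pattern, tag, text)
    return text

note = "Admitted 05/11/2025, MRN: 48213, contact 9876543210."
print(deidentify(note))  # Admitted [DATE], [RECORD-ID], contact [PHONE].
```

The same keep-the-human-in-charge logic applies when an LLM, rather than a regex list, does the scrubbing: the output is a draft for the clinician to verify, not a finished decision.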

In the end, the goal is not to replace human decision-making but to preserve its essence in a computational era. AI should function as a collaborator - amplifying reasoning, not substituting for it. The future of decision-making in healthcare depends on the integration of human judgment, scientific evidence, and machine support into a transparent, explainable process.

A decision that cannot be explained cannot be trusted.
And a healthcare system that cannot be trusted cannot remain humane.




Tuesday, November 4, 2025

Is your PG's AI usage that of a Centaur or a Cyborg: Journal club "NEJM review"

Article in NEJM that shows how PG residents can be classified into Centaurs and Cyborgs in terms of their AI usage!




CC licence: https://en.wikipedia.org/wiki/Liminal_being#/media/File%3APalais_Bourbon%2C_Malerei_in_der_Kuppel_der_Poesie%2C_Szene-_Erziehung_des_Achill_(Eug%C3%A8ne_Delacroix).jpg



Unfortunately, the article is closed access, and I hope they don't mind my sharing their Figure 4.




https://www.nejm.org/doi/full/10.1056/NEJMra2503232?trk=feed-detail_main-feed-card_feed-article-content


Here's another article that "offers an in-depth analysis of a groundbreaking study, "Navigating the Jagged Technological Frontier," conducted by leading academic and corporate institutions. They explore key findings, reveal real-world implications, and ponder ethical considerations, ultimately aiming to shed light on AI's role in the future of work including 
  • The Concepts of 'Centaur' and 'Cyborg' Behavior
  • Ethical and Social Implications

Methodology: How Was the Study Conducted?

The research engaged 758 consultants from Boston Consulting Group and employed a comprehensive stratified random assignment method. Participants were assessed on various tasks ranging from creativity to analytical thinking, with or without AI support and the article linked below unpacks these fascinating results."

Quoted from: https://www.linkedin.com/pulse/decoding-jagged-frontier-ai-centaurs-cyborgs-future-work-dan-martines#:~:text=Unlike%20the%20Centaur%20model%2C%20where,complex%20challenges%20in%20real%2Dtime.

The name Centaur for humans who are supposedly superior in intelligence may have stemmed from the fact that centaurs are deemed to be liminal beings, "combining two distinct states of simultaneous existence within one physical body. This unique perspective may provide the liminal being with wisdom and the ability to instruct, making them suitable mentors."

https://en.wikipedia.org/wiki/Liminal_being



The term cyborg applies to a living organism that has restored function or enhanced abilities due to the integration of some artificial component or technology that relies on feedback. https://en.wikipedia.org/wiki/Cyborg

CC licence: https://en.wikipedia.org/wiki/Cyborg#/media/File%3ASerengeti_Cyborg%2C_by_Fanuel_Leul.jpg

Saturday, November 1, 2025

Visual 2: Session learning short and long term goals and objectives

Session learning Goals:

Creativity

Human centred management 




Short term: brief hands-on, interactive exposure to descriptive models of clinical decision making in the pre-AI and current AI eras, and the challenges posed toward explainability and trust in blended learning ecosystems.

More here: 




Long term: collaborate to potentially develop a special group of human-centered healthcare managers trained in AI-driven clinical decision making that takes into account an individual patient's internal medicine (anatomy, physiology, pathology, organ systems) and external (organisational, public health, environmental exposure and community medicine) drivers to optimise their overall healthcare outcomes, using tools such as creative critical realist heutagogy.

More here: 


Narketpally syndrome (critical realist heutagogy): https://pubmed.ncbi.nlm.nih.gov/40674544/





Session learning objectives:

1) Demonstrate a hands-on approach to AI-driven clinical decision making, utilising past cases as well as cases from the floor shared by the audience

2) Discuss briefly the evolution of clinical decision making pre- and post-AI

3) Discuss how to bridge the inferential gap between multiple stakeholders in clinical decision making, such as patients, their relatives, health professionals and policy makers, through AI-driven explainability

4) Discuss how to gain and maintain trust between multiple stakeholders through judicious optimization of intelligence-driven explainability.




Visual 1 flipped session content for GIM, November 7

"Never doubt that a small group of thoughtful, committed individuals can change the world. In fact, it's the only thing that ever has." (attributed to Margaret Mead)



A counterview to the above, in the link below, also illustrates the broader meaning of cision and de-cision (expanded further down): https://www.ohiocitizen.org/about_that_margaret_mead_quotation


Introduction to our team and human centered clinical decision making lab:

Our team consists of full-time educators and practitioners of clinical decision making with a special interest in developing creative skills in learners and online users while tackling clinical decision making issues, utilising current AI tools and descriptive models that not only create a case-based medical informatics ecosystem but also promote explainability and trust among humans.

 



Why is a global clinical decision making team called "Narketpally syndrome?"


generated from a rhetorical editorial: https://pubmed.ncbi.nlm.nih.gov/40674544/

Why Narketpally?

Well, let's just say because of a high perceived incidence of clinical complexity in comorbidities due to CHO molecular toxins such as C6H12O6 and C2H5OH and a certain halide!

More about it here: https://medicinedepartment.blogspot.com/2025/03/project-7-years-in-narketpally.html?m=1

Our team is currently working on:

Among other things, such as daily clinical decision making with our patients, we are working on a book on the same topic, titled "Troubleshooting Humans in the Age of AI: A Playbook for Multidisciplinary, Participatory, Medical Cognition," and we are looking for potential chapter authors and elective trainees who will also enjoy this journey of clinical decision making to troubleshoot humans in our blended learning ecosystem in the coming months.



Case based medical informatics descriptive database: 



Visual 3 What is cognition, decision and clinical? GIM November 7

Evolution of clinical decision making 

pre and post AI







What is cognition?


What is dual processing theory of cognition?



What is decision?

Word picture:

Imagine you are cutting a vegetable with a knife: what is the next step in your cooking once the cutting is over?

Cision, the process of cutting, originated from the Proto-Indo-European *skh1-ie, *skh1-io, which is related to the Proto-Indo-European *kh₂eyd- ("to cut, hew"), whence also Latin caedō ("to cut, strike") and Sanskrit खिद् (khid).

It looks like Europeans added an s to the beginning of khid in Sanskrit (caed in Latin), and then subsequently removed the d when they used cutting as a metaphor for science!

The word science is derived from the Proto-Indo-European *skh1-ie, *skh1-io, meaning "to incise". https://en.m.wikipedia.org/wiki/Science

So imagine some of the cutting instruments you know of and check out their names: 

Nouns: scissors, sickle

Verbs: incise, size up, cut to size 



Needless to say, not everything ending in -cise means cutting; the words "exercise" and "exorcise" apparently have no connection to cutting: https://www.etymonline.com/word/exorcise


Image with CC licence: https://commons.m.wikimedia.org/wiki/File:Sickle_and_throwing_knife_at_Manchester_Museum.jpg#mw-jump-to-license

And the image of the sickle is contained in an important writing tool for science! The question mark is a very important instrument of scientific scepticism:





Creative commons license: https://en.m.wikipedia.org/wiki/Question_mark#/media/File%3AQuestion_opening-closing.svg


To reach a de-cision is to stop cutting and stop questioning further! As in de-addiction or de-escalation, which mean to stop addiction or stop escalation!

In other words, going with the cutting-edge pictorial cooking analogy above, one simply moves to the next phase of cooking once the cutting of its ingredients is over.

Decision etymology:

https://www.reddit.com/r/etymology/comments/2eujw6/where_did_the_root_cision_come_from_decision/

Clinical etymology:

"Clinical" comes from the Sanskrit klinna (क्लिन्न), which refers to "dripping (clothes)" and in general suggests moisture, as a result of which it is also used to denote putrefaction due to the action of microorganisms.



https://www.wisdomlib.org/definition/klinna


The Greeks took the word to their beds, where for them it became "kline": "bed, couch, that on which one lies"!


Europeans took the word to their clinics which were essentially glorified bedsides!


It's a mystery (other than the connotations of slipperiness) as to how klinna also came to mean 'lean' and became used as a root for words such as incline, decline and recline, and, if you are to believe this link 👇


https://www.etymonline.com/word/clinical


Then even the ones below:


acclivity; anticline; clemency; client; climate; climax; cline; clinic; clinical; clino-; clitellum; clitoris; decline; declivity; enclitic; heteroclite; incline; ladder; lean (v.); lid; low (n.2) "small hill, eminence;" matroclinous; patroclinous; polyclinic; proclitic; proclivity; recline; synclinal; thermocline.


Also here:


https://ahdictionary.com/word/indoeurop.html