Introduction: background and problem statement
Formal healthcare education systems in the Indian medical education curriculum divide their assessments into two groups, namely theory and practical.
Although "theory" currently appears to be an assessment framework based on "student conceptualizations," its design in many universities has evolved to let students simply memorize factual content and crack the exam by reproducing it verbatim in their answers (reference: all university databases archiving student answer papers, and the one linked here may be an outlier: https://medicinedepartment.blogspot.com/2021/11/selected-answers-to-2017-internal.html?m=0).
Practical and viva (interview) exams, on the other hand, are meant to test a student's ability to capture and analyze patient data toward clinical decision making, along with demonstrable procedural competencies. While much has been written about summative practical assessments (reference: https://medicinedepartment.blogspot.com/2021/03/final-university-mbbs-medicine.html?m=0), very little exists on how formative practical assessments should be conducted in our local learning ecosystems (problem statement).
Although many review articles on overall assessment exist, developed by eminent local educationists and policy makers (reference: https://www.ijabmr.org/article.asp?issn=2229-516X;year=2021;volume=11;issue=4;spage=206;epage=213;aulast=Saiyad;type=3), they are largely focused on summative quantitative assessment rather than formative qualitative assessment. To quote from the same article, "quantitative measurements provide the idea about the overall achievement of the students but give no idea about the factors affecting the performance often resembling a cross-sectional study, which did not allow the teachers and students to learn contextually (due to lack of longitudinal information continuity)."
Method:
Getting back to how our current theory papers are structured in the Indian medical education system, they can at best claim to address the first two levels of Bloom's taxonomy, that is, remembering and understanding (a candid lecture on Bloom's taxonomy here: https://sites.pitt.edu/~super1/lecture/lec54091/002.htm). Also, most colleges find it easier to administer repeated monthly theory assessment papers that they call FA1, FA2, FA3 ... FAn (where FA stands for formative assessment); in some colleges this is internal assessment, so it becomes IA1, IA2, IA3 ... IAn.
We would have preferred not to mix qualitative formative assessments with quantitative summative assessments, but given the majority usage of this mixed-method model (which is in reality a more frequently repeated quantitative summative assessment model masquerading as formative), we too are compelled to develop and share a compromise: we use the summative theory paper quantitation as a springboard to begin each candidate's assessment, starting from their prowess in tackling the first levels of Bloom's taxonomy.
So the theory quantitation out of 60 marks is divided into score bands of:
10-20
20-30
30-40
40-50
If a candidate achieves 10-20, s/he would in the conventional summative scheme be declared a failure. But because our formative, internal assessment is not assessment OF learning but assessment FOR learning, we try to find out whether the same person has made an impact on our learning ecosystem in other ways, through inputs in the wards around his or her patients that reflect his or her:
Approach to disease localization (BT levels 1-3)
Enthusiasm in resolving initial diagnostic uncertainty (BT levels 1-4)
Approach toward therapeutic decision making and tenacity in evaluating patient requirements and outcomes (BT levels 1-5)
Development and testing of innovative diagnostic and therapeutic solutions (BT levels 1-6)
On patient care outcomes:
Was empathic trust built with the patient and relatives?
Were the patient's requirements identified adequately and a proper problem list made toward assessment?
Was a standard of care provided, with provision for care continuity?
So once we have the theory quantitation of each candidate's answer paper and find that a candidate securing 10-20 out of 60 has also been unable to participate in the patient-centered practical learning ecosystem, touching none of the impact criteria listed above, we may flag the student in the red zone of 10-20 and monitor his or her progress closely to improve his or her competence toward a higher zone.
20-30 would still be an orange-zone outlier,
30-40 would be average, and more than that would be a positive outlier.
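To make the zoning concrete, here is a minimal sketch in Python of how the score bands and the qualitative impact criteria could be combined into one mixed-method zone. The function name, the zone labels, and the handling of scores below 10 (which the bands above do not specify) are our illustrative assumptions, not a prescribed implementation:

def mixed_method_zone(theory_score: int, impact_criteria_met: int) -> str:
    # theory_score: marks out of 60 on the summative theory paper.
    # impact_criteria_met: how many of the ward-based impact criteria
    # (disease localization, diagnostic uncertainty, therapeutic decision
    # making, innovation, patient care outcomes) the candidate touched.
    if theory_score < 20:  # scores below 10 are folded in here (an assumption)
        if impact_criteria_met == 0:
            return "red zone: flag and monitor closely"
        return "red zone: ward impact noted, support toward a higher zone"
    if theory_score < 30:
        return "orange zone: outlier"
    if theory_score < 40:
        return "average zone"
    return "positive outlier"

The point of the sketch is that the quantitative band alone never decides the red-zone response; the qualitative ward impact modulates it.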
Results:
So the internal/formative assessment results could be displayed in a mixed-method manner, with a quantitative-qualitative zone that is numbered and color coded, for example as in the sketch below.
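Continuing the sketch above, a hypothetical display (candidate names, scores, and impact counts are invented purely for illustration):

candidates = [
    ("Candidate A", 15, 0),  # (name, theory score out of 60, impact criteria met)
    ("Candidate B", 17, 3),
    ("Candidate C", 34, 2),
    ("Candidate D", 48, 5),
]

for name, score, impact in candidates:
    print(f"{name}: {score}/60 -> {mixed_method_zone(score, impact)}")

This would show, for instance, that Candidate A and Candidate B share the same red band quantitatively, yet only Candidate A is flagged for close monitoring.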