Showing 3 results for Emergency Medicine
Saeed Abbasi, Davood Farsi, Maryam Bahrani, Saeed Davari, Elham Pishbin, Nahid Kianmehr, Mahdi Rezai, Reza Yazdanpanah, Mani Mofidi
Volume 28, Issue 1 (1-2014)
Abstract
Sedigheh Najafipour, Sara Mortaz Hejri, Alireza Nikbakht Nasrabadi, Mir Saeed Yekaninejad, Mandana Shirazi, Ali Labaf, Mohammad Jalili
Volume 34, Issue 1 (2-2020)
Abstract
Background: Few studies have examined the validity and reliability of the Mini-Peer Assessment Tool (mini-PAT) across various specialties. This study was conducted to determine the reliability and the content and construct validity of the mini-PAT for assessing the competency of emergency medicine residents.
Methods: This study investigated the psychometric properties of the mini-PAT for evaluating the professional competencies of emergency medicine residents in educational hospitals affiliated with Tehran University of Medical Sciences. The original mini-PAT was translated into Persian. The content validity index and content validity ratio were then determined by consulting 12 professors of emergency medicine. Construct validity was assessed with exploratory factor analysis and examination of correlation coefficients on 31 self-assessment and 248 peer-assessment cases.
The reliability of the mini-PAT was determined through internal consistency and item deletion, using Cronbach's alpha coefficient. Reliability was also assessed by determining the agreement between the self-assessment and peer-assessment versions of the tool using a Bland-Altman plot.
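The two reliability checks described in the Methods can be sketched in a few lines of Python; this is a minimal illustration (not the study's actual analysis code), assuming a respondents-by-items score matrix for Cronbach's alpha and paired self/peer totals for the Bland-Altman agreement:

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency of a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of items
    item_vars = scores.var(axis=0, ddof=1)   # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Items that move perfectly together yield an alpha of 1.0; a Bland-Altman bias near zero with narrow limits indicates the kind of self/peer agreement the abstract reports.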
Results: The content validity ratio (CVR) of the items ranged from 0.56 to 0.83, and the content validity index (CVI) of the items ranged from 0.72 to 0.90. The reliability of the self-assessment and peer-assessment tools was 0.83 and 0.95, respectively, and there was relative agreement between the self-assessment and peer-assessment methods. Exploratory factor analysis extracted two factors, 'clinical competencies' and 'human interactions', from the peer-assessment tool; from the self-assessment tool, the factors 'good practice' and 'technical competence' were extracted.
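The CVR figures above follow Lawshe's formula, which depends only on how many of the N panelists rate an item "essential". A minimal sketch (the 12-expert panel size is taken from the Methods; the example counts are illustrative, not the study's data):

```python
def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

# With a 12-member panel, 11 "essential" ratings give CVR = 0.83,
# matching the upper end of the range reported in the Results.
```

Note that CVR is 0 when exactly half the panel rates an item essential and 1.0 when the panel is unanimous.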
Conclusion: The present study provides evidence of the content validity and reliability of the contextually customized mini-PAT for assessing the competencies of emergency medicine residents.
Helen Dargahi, Alireza Monajemi, Akbar Soltani, Hooman Hossein Nejad Nedaie, Ali Labaf
Volume 36, Issue 1 (1-2022)
Abstract
Background: Clinical reasoning underlies all clinical activities of the health team, and diagnostic reasoning is perhaps the most critical of a physician's skills. Despite many advances, medical errors have not decreased. Studies have shown that most diagnostic errors in emergency rooms are cognitive errors, with anchoring bias identified as the most common cognitive error in clinical settings. This study aimed to determine and compare the frequency of anchoring bias among faculty members and residents of the emergency medicine department.
Methods: In this quasi-experimental study, emergency medicine faculty members and residents were evaluated on clinical reasoning using nine written clinical cases. The clinical data for each case were presented to participants over three pages, mirroring how clinical and paraclinical information is received in real situations. At the end of each page, participants were asked to write down their diagnoses. Data were analyzed using a one-way ANOVA test in SPSS (version 16.0), with P < 0.05 considered statistically significant.
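The one-way ANOVA named in the Methods can be sketched from scratch in NumPy; this is an illustrative re-implementation of the textbook F statistic, not the SPSS procedure the authors ran, and the group data here are placeholders:

```python
import numpy as np

def one_way_anova(*groups):
    """Return the F statistic and degrees of freedom for k groups."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    f = (ss_between / df_b) / (ss_within / df_w)
    return f, df_b, df_w
```

With only two groups (faculty vs. residents), the F statistic equals the square of the independent-samples t statistic, so the comparison reduces to a t-test.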
Results: Seventy-seven participants from the emergency medicine department volunteered for this study. Faculty members wrote significantly more correct diagnoses than residents (66% vs. 41%), but the anchoring-error ratio was significantly lower among residents (33% vs. 75%). The number of written diagnoses, the time taken to write them, and the clinical experience of faculty members and residents were also compared.
Conclusion: The findings showed that greater clinical experience increased diagnostic accuracy but also changed the pattern of cognitive errors: faculty members had a higher anchoring-error ratio than residents. This may result from greater exposure and more frequent decision-making in a heuristic or intuitive mode of thinking among faculty members.